We’re pretty sure we make our decisions in line with our interests. Yet that’s rarely the case. Call us irrational! That’s precisely what Dan Ariely’s talk at USI 2017 was about. But what if this irrational behavior were also what made our intelligence anything but artificial?

Let’s start on a topic that perfectly illustrates our irrational behavior: climate change. Everybody knows we should be putting every effort into solving the problem, yet we simply don’t. What are your views on the subject?


From a social science perspective, climate change is probably the most difficult problem there is. If you look at the world and ask, “what causes people the maximum amount of apathy?”, the answer would be climate change. All the things that usually cause apathy come together here: it will happen far in the future, it will start to happen somewhere else and to other people, and the progress is so slow that you don’t see it. It doesn’t have a face: nobody is suffering yet. And anything we would do as individuals is a drop in the bucket.

The rational approach to the problem is to say that it’s all about information. That’s what my talk at USI is about: the idea that if people only knew, everything would be fine. The reality is that this is not the case.

I’m not saying that knowledge isn’t important. But the sad reality is: knowledge is not the key to action. For example, we know that we should save money, eat fewer croissants, and that texting and driving is dangerous. We know lots of things and don’t take action. The real question is: “what compels us to act?” Which brings us back to all the things I mentioned earlier: a far-off future, no face, a drop in the bucket… All those things widen the gap between knowledge and action.

So what can we do? One thing is regulation. Let’s look at the things that people don’t seem to be doing and simply decide to force them. Pensions, for example: force people to save for retirement, because otherwise they won’t. We try to forbid people from texting and driving. Global warming, I think, needs to be in that category. Now, as we all know, that’s not easy to do! The problem requires social coordination between nations, and every nation prefers to be the only one not doing anything.

 “The sad reality is: knowledge is not the key to action.”

And if one nation is holding out, others will too.

We can talk about that problem separately, but at some point there will have to be regulation, and sadly, by then it might be too late… The question is: what can we do until then? The phrase that I think best describes the strategy is “getting people to do the right thing for the wrong reason.”

For example: pride. Ever since the Prius launched, people have used it to show that they’re environmentally conscious. It’s a boost to their ego. But by boosting their egos, they are also doing the environmentally correct thing.

I’ll give another example: one of the main contributors to global warming is the amount of trash we produce. There is a company in North Carolina called WasteZero. They go to municipalities and say, “let’s switch the trash from these big containers to plastic bags.” It sounds counterproductive! But let’s make these plastic bags yellow, and let’s charge people $3 per bag. That’s very expensive! The usual default reaction people have with trash is to take something and just throw it in the bin. But when they see this yellow thing in their house, they get upset because it’s so expensive! And all of a sudden, they become interested in recycling! It’s not so much a financial solution as an annoyance solution. You could charge people a flat rate of $300 for trash collection and they wouldn’t respond. It’s that momentary annoyance.

I think we need a combination of ego, social proof, reminders, and all kinds of things. Let’s think about another solution: imagine that your power meter at home was in the middle of the kitchen, so you would see it every day. Or imagine you had to put cash in to operate it. It’s really about saliency. So, I think we could find solutions if we address this problem from another perspective.

Photo of Dan Ariely at the USI 2017 conference, on humans’ irrational behavior


Do we see the same pattern when we need to make a decision with a small impact (such as buying a bar of chocolate) versus a large impact (like buying a house or a car)?

No. What we find is a general pattern with an inverted-U shape. We make lots of mistakes and don’t pay enough attention when it comes to small decisions, like buying coffee or groceries. However, we spend a lot of time making medium-sized decisions—buying a stereo, a camera, eyeglasses… Then, when it comes to huge decisions—buying a house, deciding about cancer treatment—we, again, don’t spend enough time.

Think about the last time you bought, say, a digital camera. It was maybe 300€ and you spent perhaps three hours making the decision: an hour per hundred euros. Then, say, you decide to buy a house for a million euros. How many hours do you spend on that decision? More than three, sure! But not 10,000! Proportionally, certainly not as much as you did for the medium decision.

Another example: when we get bad news from physicians about an illness, we spend very little time deciding on a treatment plan. Some data showed that more than 70% of American men, when told that they have prostate cancer, immediately decide with the doctor what to do. This means that if they meet a surgeon, they get surgery; if they meet a chemotherapist, they do chemotherapy. They just say, “Doctor, tell me what to do,” and then they do it. That’s a big decision!

But as the decision becomes more important, it also becomes more daunting. It’s so difficult that it’s too much for us. It seems that there is a middle point which is the sweet spot; it’s not too difficult, we can handle it, and we can make a good decision. But if it gets too big, it’s too much.






We’re definitely not as rational as an artificial intelligence would be. Are algorithms the future of human decision-making?

The type of learning algorithms we have right now are good at understanding how we make decisions and do things, and then doing it faster for us. That worries me, because there are lots of things that we do a certain way that I don’t want us to keep doing! For example, if I just looked at what people eat and how much they exercise, then helping them repeat that behavior isn’t good advice. Algorithms don’t have an objective function yet. They aren’t good at understanding what the objective function of humanity is. That’s one category.

Then, there is the category of algorithms that are just trying to optimise. For example: exercise. If you don’t know when to exercise, an algorithm can find the right time for you. But those algorithms don’t have things like happiness or optimal sleep as an objective function either. I’m not saying we aren’t going to get there at some point, but when I talk to people working on artificial intelligence, it sometimes worries me that they have a very mechanistic view of humanity. Whereas in fact we should pause and ask ourselves what we’re actually trying to achieve. Think about something like Waze, which I love! It’s a wonderful thing, with learning, optimization, and so on. It gives you the shortest time to get to work, but does the shortest also mean the least stressful? Not necessarily. Maybe the scenic route would make more sense. Maybe not having to stop and start all the time, and being able to listen to music, would be a better approach. We have a very interesting objective function as human beings: we want to feel that we’re contributing, to have a sense of mission and purpose, and we want to feel relaxed and thoughtful, and we don’t want to just do nothing!

All this to say: I think there is a lot of interesting potential in artificial intelligence. But if I think about my field, I believe we need to inject a better objective function.

“AI experts have a very mechanistic view of humanity. Whereas in fact we should pause and ask ourselves what we’re actually trying to achieve.”


So, in short, you think that we are using artificial intelligence in too rational a way?

Yes. Think about the driverless car. Are you necessarily going to be better off sitting in a car, not driving? It’s possible. It’s possible that you would say you’re getting back some precious time. But the reality is that focusing on driving, learning how to shift gears… these things have value. I’m not sure, for example, that I’m better off with an automatic gearbox than with a manual one. It seems like less work, but is less work also the ultimate goal?

We went to the Louvre together before the conference. It’s a clear showcase of work by people who wanted to work very hard. Look at our desire to create and consume art. We clearly want to be stimulated and to create, yet we’re going in the opposite direction, saying we want everything to be done for us and automated.

Another example is meals. I wrote about the IKEA effect. On any particular night, if you asked me whether I would rather have a robot chef cook dinner for me or do it myself (plus the dishes!), I’d say the robot is kind of a good deal! But you know what? When I cook, I really enjoy it! And I feel that I’ve taken care of my family, and so on… So, I don’t want to spend hours driving every day, but I think in all of these situations the ultimate question is: what are we substituting? And are we always maximizing people’s sense of purpose, mission, joy, and so on? I’m not so sure…

“We clearly want to be stimulated and to create, but we’re actually going in the opposite direction, saying we want everything to be done for us and automated.”

When working on a project, we often face the decision to either go faster, with a risk of decreasing quality, or to do it “the right way” with the risk of falling behind schedule. How can we better make these long- and short-term decisions? Which should we prioritize?

There is no general answer, because sometimes the deadline is crucial and sometimes the quality is more crucial. However, I will tell you one thing: some people believe that they can focus more when they get closer to the deadline.


So is this procrastination?

It’s procrastination, but procrastination for a reason where you say, “I have a deadline on Thursday. Today’s Tuesday. So if I start working today, I’ll work very slowly. But if I start on Wednesday and know I have a deadline on Thursday, I’ll really focus.”

It turns out that there is no good evidence for that. People think they can focus more before a deadline, but there is no evidence that they actually produce more. Say you have a deadline in 24 hours. You will certainly work many of those 24 hours before that deadline. Your allocation of time will change, but you will not be more efficient per hour. People feel that the deadline will get them to concentrate; they think they’ll do better, but that’s not actually the case.


Knowing all of that, do you still procrastinate?

I try not to. I procrastinate very little and I start things very early. First of all, I try to eliminate deadlines from my life. I don’t see the usefulness of deadlines. For example, I’ve written a few books and never accepted any deadline from the publisher. I didn’t sign a contract, because with every contract comes a deadline. I just said, “I’ll give it to you when I’ve finished.” Instead, I think in terms of weekly goals that I want to meet, and I set those up myself.

One thing I am doing now—as I’m working on a workbook for behavioral economics with two colleagues— is agreeing to submit a chapter per week, due every Friday. If I submit it on Friday, my two colleagues sing a karaoke song and send it to me, and if I don’t submit it, I send them a karaoke song.

 “I try to eliminate deadlines from my life. I don’t see the usefulness of deadlines.”

So this is doing the right thing for the wrong reason.

