As a Computational Social Scientist, Professor Sandra Matz unveils a brand-new field of expertise that relates Big Data analysis to our psychology. Understanding our behavior from our clicks, our likes, or even our profile pictures holds the promise of a much more personalized experience online; that is, as long as we keep things ethical and understand that transparency can be a good deal for everyone, users and companies alike.

 


Can you walk us through your research, the methods you use and the impact they have on consumers?

Psychology is a behavioral science, and yet we never really looked at behavior in real life because it was much too complicated. We couldn’t just follow people around all day to make detailed observations of what they were doing in their everyday lives. But this has changed radically over the last few years. Today, we no longer have to rely on people’s self-reports and the questions we ask them; we can observe their actual behavior by studying their digital footprints. That really started this whole research agenda of computational social science.

What my colleagues and I are interested in is whether we can use people’s digital footprints to understand something about their underlying psychological motivations. That means that we look at someone’s online profile (e.g. their Facebook Likes or Tweets) and try to translate their digital footprints into a psychological profile (e.g. personality). This works surprisingly well, actually.

Just by looking at your Facebook Likes, for example, we can predict your personality with higher accuracy than your work colleagues, friends, or even family members can.

I’m interested in the practical applications that this research holds. What does it mean to be able to predict the psychological profiles of millions of people with just a few clicks? How can we use such predictions to help people lead healthier and happier lives?

I initially focused on the context of health care and communication. I wanted to see if we could use psychological profiling to help people live healthier lives (by helping them better comply with prescriptions or take up offers of preventive medical check-ups), but it turns out to be much more difficult to convince government bodies like the NHS (National Health Service) to work with academics than it is to convince marketing companies!

So we switched focus and started testing our ideas in the context of marketing, but the idea is exactly the same: We’re trying to make message content more relevant and more engaging.

Now, how do you test this? How do you target people based on their personality to see if they respond more positively to messages that are tailored to their psychological profile? What we had done before was to predict the personality of individual people by getting access to their full history of Facebook Likes or Status Updates, and then running those pieces of information through an algorithm that spits out a personality profile for each user.
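
To make that prediction step concrete, here is a minimal Python sketch of the idea. The Like-to-trait weights and the simple averaging below are invented for illustration; the actual models are trained on large sets of labelled profiles and are far more sophisticated.

```python
# Toy sketch: turning a list of Facebook Likes into a rough Big Five profile.
# The weights here are made up for the example, not taken from the research.

BIG_FIVE = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

# Hypothetical associations between a Like and each trait (positive = more of the trait).
LIKE_WEIGHTS = {
    "Socializing":       {"extraversion": 0.8, "openness": 0.1},
    "Crossword Puzzles": {"extraversion": -0.5, "conscientiousness": 0.4},
    "Photography":       {"openness": 0.7},
}

def predict_profile(likes):
    """Average the trait weights of a user's Likes into a rough Big Five profile."""
    profile = {trait: 0.0 for trait in BIG_FIVE}
    known = [like for like in likes if like in LIKE_WEIGHTS]
    if not known:
        return profile  # no usable signal: return a neutral profile
    for like in known:
        for trait, weight in LIKE_WEIGHTS[like].items():
            profile[trait] += weight
    return {trait: total / len(known) for trait, total in profile.items()}

print(predict_profile(["Socializing", "Photography"]))
# e.g. {'openness': 0.4, 'extraversion': 0.4, ...}
```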

That’s great in theory, but the problem is that companies don’t have access to either the individual-level Facebook data or the algorithms. We therefore wanted to leverage existing targeting tools, accessible to anyone, and it turns out that Facebook already provides that service: it gives marketers the chance to target people based on “Interests”. That means I can reach out to a segment of users connected to the Facebook Like of “Socializing”, for example, which we know is associated with Extroversion.
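
Very roughly, building such an interest-based audience could look like the sketch below. This is not the real Facebook Marketing API; the interest lists and the spec format are placeholders that only illustrate the mapping from a trait to targetable Interests.

```python
# Toy sketch: approximate a psychological segment with Interests known to
# correlate with that trait, then describe a campaign audience with them.

TRAIT_TO_INTERESTS = {
    "extraverted": ["Socializing", "Parties", "Meeting new people"],
    "introverted": ["Crossword Puzzles", "Reading", "Chess"],
}

def build_audience_spec(trait, country="US"):
    """Return a simplified, illustrative targeting spec for one psychological segment."""
    return {"interests": TRAIT_TO_INTERESTS[trait], "geo": country}

print(build_audience_spec("extraverted"))
```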

 

How do you use someone’s psychological profile to make marketing more effective? Does this only benefit companies or can consumers benefit as well?

I would say that there are generally two ways in which you can use this knowledge. One of them I call “product matching” and the other “message matching”.

Product matching, as the name suggests, means that we’re suggesting products to people that we think match their psychological profiles. If we advertise a party app to extroverted users and a crossword app to introverted users, we would expect marketing campaigns to be more effective and companies to make higher profits. However, since we live in a world of choice overload, I believe that there is also a huge benefit to the consumer.

There are simply too many products out there so some kind of filtering helps us find the things we are really interested in.

The question is, can we do that in a way that is based on personality? And does using personality offer advantages over other existing approaches (e.g. Amazon’s “people who bought X also bought Y”)?

I think using personality has a few advantages. One is that it’s more forward-looking because it doesn’t rely on customers having expressed specific interest in the past. The way it works now is that you have to search for a camera in order to get ads for related equipment or good offers. What if we could infer from your digital profile that you’re artistic and curious and therefore recommend cameras to you before you have even thought about it? Plus, my own research shows that people are happier if they spend money on things that match their personality. That’s simply because such purchases allow us to express who we are as individuals (and believe me, people love this!). So by using personality we should actually be able to not just help people find what they are looking for, but proactively match them to products that will make them happier.
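
A toy version of product matching could look like the following sketch. The product trait profiles and the dot-product scoring are assumptions made up for the example, not the method used in the published studies.

```python
# Toy sketch: rank products by how closely their (assumed) trait profile
# fits the user's predicted profile.

PRODUCT_PROFILES = {
    "party-planning app": {"extraversion": 0.9},
    "crossword app":      {"extraversion": -0.7, "conscientiousness": 0.3},
    "camera":             {"openness": 0.8},
}

def match_products(user_profile, products=PRODUCT_PROFILES):
    """Score each product by the dot product of its traits with the user's profile."""
    def score(traits):
        return sum(w * user_profile.get(trait, 0.0) for trait, w in traits.items())
    return sorted(products, key=lambda name: score(products[name]), reverse=True)

artistic_introvert = {"openness": 0.9, "extraversion": -0.4}
print(match_products(artistic_introvert))
# ['camera', 'crossword app', 'party-planning app']
```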

The second mechanism is what I call “message matching”. Message matching doesn’t try to recommend different products but instead aims at making the communication part more personal and engaging. In fact, the idea behind it is as old as humankind. We all change the way we talk to the people around us based on who they are. You would not talk to a three-year-old the same way you talk to a work colleague or your boss. And every talented car salesman will tell you that they try to gather as much information as possible about their counterpart in order to tailor their sales pitch to the characteristics of the potential buyer. It’s standard and natural. So natural, in fact, that we usually don’t realize we all do it all the time. Message matching based on psychological profiling is a way to replicate online an in-person habit that had gotten lost.
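
In code, message matching can be as simple as choosing a copy variant based on a predicted trait score. The copy lines and the 0.0 threshold below are invented for illustration, not the wording tested in any actual campaign.

```python
# Toy sketch: same product, different copy depending on the predicted trait.

AD_COPY = {
    "extraverted": "Bring the party with you, wherever the night takes you.",
    "introverted": "Beauty doesn't have to shout. Shine on your own terms.",
}

def pick_message(extraversion_score, threshold=0.0):
    """Choose the copy variant that matches the user's predicted extraversion."""
    segment = "extraverted" if extraversion_score > threshold else "introverted"
    return AD_COPY[segment]

print(pick_message(0.4))   # extraverted copy
print(pick_message(-0.5))  # introverted copy
```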

This approach has ethical issues though, the most prominent example being that of Cambridge Analytica. How do you deal with those issues in your research and what would you recommend to companies?

Yes, absolutely! The ethical implications of our research are huge, and we are fully aware of that. In fact, each of my papers contains a discussion on how these technologies could not only be used for good but also be abused to manipulate people and to exploit weaknesses in their character.

Talking about Cambridge Analytica specifically, it’s hard to say what actually happened and whether or not their campaigns had an influence on the election. There is simply no data to support that. And trusting the word of Alexander Nix doesn’t seem to be a promising alternative here. But to be honest, for me it doesn’t matter. What matters is that the technology exists and that even if it wasn’t used or didn’t have an effect on this US election, it might very well have in the next one.

Importantly, the dangerous part is not the technology itself, but what we actually do with it.

If a company like Cambridge Analytica uses it to play with people’s fears and to dissuade them from voting in the election, then this is certainly a big problem and a threat to democracy. But just imagine for a second that Hillary Clinton had used psychological targeting in her campaign to really understand what people care about, and to figure out which part of her political agenda was most relevant to each person. My guess is that she would have been celebrated in the same way Obama was celebrated for his data-driven campaigning in 2008. The big difference here is the intention behind the use of psychological targeting.

I believe that the same way it can pose a threat to democracy, psychological targeting could help revitalize it. Just look at the US election.

40% of people didn’t even go to vote. There’s a growing disinterest and disengagement in politics among the general population. A lot of people feel that their voices are not being heard, and that politicians do not even try to understand what’s important to them. If we could use psychological targeting to get people to care about political issues again, this would be a great gain for democracy. But of course it’s a very slippery slope. How much are we allowed to play with people’s motivations and psychological needs? Who is supposed to supervise activities in this area? And how do we detect the bad players more quickly? Those are all questions that we need to address as a society.

What do you think is missing in order to avoid manipulation? Is it a matter of legislation?

It’s a great question and one of the most difficult ones to answer. On a conceptual level, my answer is transparency. Transparency about the data that is being collected as well as the inferences that are being made from it. And in my opinion there are different ways of achieving that.

One is certainly government regulation. The new European General Data Protection Regulation (GDPR) is a good first step. It requires companies to make their data collection and usage policies transparent. And not just that: it requires them to make it transparent in such a way that the user actually understands it. In addition, users have the right to revoke access, giving them more control over their own data. What I would like to see moving forward, however, is more of an opt-in approach, where users have to actively agree to their data being used.

But I think there are additional levers outside of regulation that we can pull. One is to convince companies that transparency is in their own interest. The simplest argument here is that companies have a lot to risk if they misuse consumers’ data behind their backs. Facebook is the best example. They got heavily punished in the wake of Cambridge Analytica for not making transparent to users what happened with their data. Now, instead of collecting data and running predictions behind users’ backs, my suggestion to companies is to make personalization part of their value proposition. If you can convince consumers that by using their data you can offer much better services, this is actually something that could set you apart from your competitors. Just think about Google. Nobody wants to go to page 2 to find what they are looking for. By using people’s data, Google is able to provide a much better service that people not only appreciate, but actually expect these days.

When I speak to companies, a lot of them are skeptical of such an approach because they fear that transparency will undermine profits. But research suggests the exact opposite: people engage with marketing content more if they have control over their personal data. I would go even further and say that it should be an active dialogue, where companies communicate their predictions to users and then give them the chance to interact with their profiles. Why would companies do that? Because no prediction is perfect, so they will get the profiles wrong in a significant number of cases, which is obviously bad for their targeting but also annoying for consumers. Now, if you give your customers the chance to correct their profile, they are better off and you are better off. Win-win.
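
That “active dialogue” could be as simple as letting user corrections override the model’s guesses, as in this illustrative sketch (the function and field names are made up for the example).

```python
# Toy sketch: user-supplied trait values take precedence over model predictions.

def apply_corrections(predicted_profile, user_corrections):
    """Merge a predicted profile with the corrections a user has entered."""
    merged = dict(predicted_profile)
    merged.update(user_corrections)
    return merged

predicted = {"extraversion": 0.6, "openness": 0.2}
corrected = apply_corrections(predicted, {"extraversion": -0.3})  # "actually, I'm an introvert"
print(corrected)  # {'extraversion': -0.3, 'openness': 0.2}
```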

What about the user? In a way, our digital experience is in our hands. Should we click with more caution?

I think users should have the same responsibility as companies. It’s not enough to blame Facebook for everything that has happened and expect everything to get better now that their policies are becoming stricter. Let’s not forget that people voluntarily gave away their data. In fact, many of the things that are now requested from Facebook have been in place for a long time. They made Likes private by default in 2014, and users already have the ability to say which Likes they do not want to be used in targeting. And of course every user has the right to change their privacy settings. Truth is that very few people do.
As I said before, I 100% agree that we need better regulations, but the best regulations are not going to be enough. They can help people to better protect themselves, but they don’t guarantee it. It’s like trying to secure a person’s front door with the safest locks on the market and a number of alarm systems. If that person then leaves the window open – as many people do – all the security measures become useless. We are very quick to blame companies when our data is abused, but very few of us actually question our own behavior. The example I like to use here is smartphone apps (not Facebook’s!). Most apps, when you download them, ask you for permission to access an insane amount of information, such as your pictures, your microphone, or your GPS. And most of us blindly agree to giving away this data. Does a calendar app really need access to all of this info? The answer is most likely no.

So I really believe that we need to do a much better job at educating people – especially young people – about data. It’s actually the reason why my colleagues and I give so many public talks. We want to show people what’s possible and stimulate a discussion of how we want to respond to these new technologies.

 
