Episode 4: The Fear Factor with Dr. Karen Renaud and Dr. Marc Dupuis
We have a fascinating episode lined up for you this week, as I’m delighted to be joined by Dr. Karen Renaud and Dr. Marc Dupuis.
Dr. Renaud is an esteemed Professor and Computer Scientist at Abertay University, whose research focuses on all aspects of human-centred security and privacy. Through her work, she says, she wants to improve the boundary where humans and cybersecurity meet. Dr. Dupuis is an Assistant Professor in the Computing and Software Systems division at the University of Washington Bothell. He also specializes in the human factors of cybersecurity, primarily examining psychological traits and their relationship to the cybersecurity and privacy behaviour of individuals.
Together they are exploring the use of fear appeals in cybersecurity, answering questions like: do they work, or are there more effective ways to drive behavioral change? They recently shared their findings in the Wall Street Journal, in a brilliant article titled Why Companies Should Stop Scaring Employees About Security. And they’re here today to shed some more light on the topic. Karen, Marc, welcome to the podcast!
Tim Sadler: To kick things off, let’s discuss that Wall Street Journal article, in which you essentially concluded that fear and scaremongering just don’t work when it comes to encouraging people to practice safer cybersecurity behaviors. So why is this the case?
Dr Marc Dupuis: Well, I think one of the interesting things if we look at the use of fear: fear is an emotion, and emotions are inherently short-term types of effects. So in some research that I did about eight years ago, one thing I looked at was trait affect – which is a generally stable, lifelong type of affect. And I tried to understand how it relates to how individuals, whether in an organizational setting or a home setting, perceive a cybersecurity threat, as well as their belief in being able to take protective measures to address that threat.
And one of the interesting things from that research was how important the role of self-efficacy was – but perhaps more importantly, the relationship between trait positive affect and self-efficacy. Trait positive affect is, in one respect, generally feelings of happiness and positivity. And so what this gets at is: the higher the levels of positivity we have with respect to trait affect, the more confident we feel in being able to take protective measures.
So how this relates to fear is: if we need people to take protective measures, and we know that their self-efficacy, their level of confidence, is related to positive affect, why then are we continually going down the road of using fear – a short-term emotion – to try and engender behavioral change? And so that was an interesting conversation that Karen and I had, and then we started thinking, well, let’s take a look at the role of fear specifically.
TS: Karen, what would you add to that?
Dr Karen Renaud: Well, you know, I had seen Marc’s background, and I’d always wanted to look at fear, because I don’t like to be scared into doing things, personally. And I suspect I’m not unusual in that. And when we started to look at the literature, we just confirmed that businesses were trying to use a short-term measure to solve a long-term problem.

TS: Yeah. And so I was gonna say, why do you think that is? Using fear seems such a default approach in so many things. I’m thinking about how people sell insurance, you know – it’s fear, trying to drive people to believe that, hey, your home’s gonna get burgled tomorrow, so you’d better get insurance to protect against the bad thing happening. Why do you think companies just go to fear as this stick to get people to do what they’re supposed to do?

KR: It feels to me as if the thing that intuitively you think will work often doesn’t work. You know, the nasty pictures they put on the side of cigarette packets are actually not very effective in stopping heavy smokers. Somebody who doesn’t smoke thinks, oh my gosh, this is definitely going to scare people, and we’re going to get behavioral change – but it actually doesn’t. So sometimes intuition is just wrong.
I think in this case, it’s a case of not really doing the research, the way we did, to say, actually, this is probably not effective – but instead going, well, intuitively, this is going to work. You know, when I was at school, they used to punish kids to get them to study. Now we know that that was really a bad thing to do: children don’t learn when they’re afraid. So we should start taking those lessons from education and applying them in the rest of our lives as well.
TS: Yeah, I think it’s a really good call that it’s almost like we just generally, as society, need to do better at understanding actually how these kinds of fear appeals work and engage with people. And, then, maybe if we just go a layer deeper into this concept of fear tactics. You know, are people becoming immune to fear tactics? 2020 was a really bad year, a lot of people faced heightened levels of stress and anxiety as a result of the pandemic and all of that change. Do you think that this is playing a part in why fear appeals don’t work?
KR: Well, yeah, I think you’re right. The literature tells us that when people are targeted by a fear appeal, they can respond in one of two ways. They can either engage in a danger control response, which is kind of what the designer of the fear appeals recommends they do. For example, if you don’t make backups, you can lose all your photos if you get attacked. So, the person engaging in a danger control response will make the backup – they’ll do as they’re told.
But they might also engage in a fear control response, which is the other option people can take. In this case, they don’t like the feeling of fear. And so they act to stop feeling it. They attack the fear, rather than the danger. They might go into denial or get angry with you. The upshot is they will not take the recommended action. So if cybersecurity is all you have to worry about, you might say, “Okay, I’m going to engage in that danger control response.”
But we have so many fear appeals to deal with anyway, and this year it’s been over the top. So if you add cybersecurity fear appeals to that, folks will just say, “I can’t be doing with this. I’m not going to take this on board.” So I think you’re absolutely right. People are fearful about other things as well as COVID, and this is adding another layer to that. But what we also thought about was how ethical it actually is to add to people’s existing levels of anxiety and fear…
TS: And do you think that this, sort of, compounds? Do you think there’s a correlation between if people are already feeling naturally kind of anxious, stressed about a bunch of other stuff that actually adding one more thing to feel scared about is even less likely to have the intended results on changing their behavior?
MD: Yeah, I mean, I think so. I think it just burns people out, and you get this repeated messaging. One thing I think about – just because we in the States just got through this whole election cycle, and maybe we’re still in it – is how all these political ads use fear time and time again. I think, in general, people do start to tune out; they just want to be done with it.
And so it’s one of these things that, I think, just loses its efficacy – people have simply had enough. I have a three-and-a-half-year-old son. And you know, my daughter was very good at listening to us when we said, “This is dangerous, don’t do this.” But my son – I’m like, “Don’t get up there. You’re gonna crack your head open.” And he ignores me, first of all, and then he does it anyway. And he doesn’t crack his head open. And he says, “See, Daddy, I didn’t crack my head open.” And I’m like, no. But it gets to another point: if we try to get people scared enough to do something, but they don’t do it and nothing bad happens, it only reinforces the idea that “Oh, it can’t be that bad anyway.”
KR: Yeah, you’re right, because cause and effect are so far apart. If you divulge your email address or your password somewhere and the attack comes much later, a lot of the time you don’t even make that connection.
But it’s really interesting. If you look way back to the Second World War, Germany decided to bomb the daylights out of London, and the idea was to make the Londoners so afraid that the British would capitulate. But what happened was a really odd thing: they became more defiant. And so we need to take a look back at that sort of thing. Somebody called MacCurdy, who wrote a book about this, said people got tired of being afraid. And so they just said, “No, I don’t care how many bombs you’re throwing on us. We’re just not going to be afraid.” Nowadays, people have so many fear appeals thrown at them that the appeals are losing their efficacy.
TS: A very timely example talking about the Blitz in World War II, as I just finished reading a book about exactly that, which is the resilience of the British people through that particular period of time.
And as you say, Karen, I knew very little about this topic, but it absolutely had the unintended consequence of bringing people together. It was like a rallying cry for the country to say, “We’re not going to stand for this, we are going to fight it.”
And I guess everything you’re saying is reinforced by the research you conducted as well, which completely makes sense. I’m going to read from some notes here. In the research paper, you surveyed CISOs about their own use of fear appeals in their organizations – how Chief Information Security Officers actually engage with their employees. 55% were against using fear appeals, with one saying fear is known to paralyse normal decision making and reactions. 36% thought that fear appeals were acceptable, with one saying that fear is an excellent motivator. And not a single CISO ranked scary messages as the most effective technique. What were your thoughts on these findings? Were you surprised by them?
MD: We were, I think, surprised that so many were against the use of fear appeals. You look at these individuals – they are the chief people responsible for the information security of the organization – and here they’re coming out and telling us, yeah, we don’t believe in using fear appeals. And there are multiple reasons for this: one, maybe they don’t believe in the efficacy of it. But I think it’s also because, while we don’t know how effective it’s going to be, we do know that it can damage the employee/employer relationship.
There are also some ethical issues related to it, so you start to add up the possible negative ramifications of using fear appeals. And it was interesting, even going back to that example during World War II: you think about why what England was doing was effective. It’s because they were in it together – they had this communal response of, we’re sick of being scared, we’re in this together, we’re gonna fight this together. And I think maybe CISOs are starting to see that, to try and make the employee/employer relationship more positive and empower their employees, rather than trying to scare them and hurt that relationship.
TS: And there was one really interesting finding, which was that you found the longest serving CISOs – i.e. those with more experience – were more likely to approve the use of cybersecurity fear appeals. Why do you think that is? Is fear, maybe kind of an old school way of thinking about cybersecurity?
KR: I think as a CISO, it’s really difficult to stay up to date with the latest research, the latest way of thinking. They spend a lot of time keeping their finger on the pulse of cyber threat models and the compromises hackers are coming up with. But if you go and look at the research, attitudes towards users are slowly changing.
And maybe the people who approve of fear appeals aren’t that aware of that. Or it might be they just become so exasperated by the behavior of their employees over the years that they just don’t have the appetite for slower behavioral change mechanisms.
You know, and I understand that exasperation. But I was really quite heartened to see that the others said no, this is not working – especially the younger ones. So you feel that cultural change is happening.
TS: One thing I was gonna ask: there’s this interesting concept of the CISOs themselves and whether they use fear appeals in their organization. Do you think that’s somewhat a function of how fear appeals are used on them, if that makes sense? They have a board that they’re reporting to, they have a boss, they have stakeholders that they’ve got to deliver results for – namely, keep the organization secure, keep our data secure, keep our people secure. Do you think there’s a relationship between how fear appeals are used on them and how they then use them on others in their organization?
MD: I think that’s an interesting question. I mean, it’s always possible. A lot of times people default to what they know, what they’re comfortable with, what they’ve experienced, and so on – and maybe that’s why we see some of the CISOs who have been in the role longer default to fear. Some of it might be organizational and structural as well. Like I said, if they are constantly being bombarded with fear appeals by those they report to, then maybe they are more likely to engage in fear appeals themselves. The answer to that question is a little unclear, but I do think it’s an interesting one because, again, intuitively it makes sense. If I want to use fear appeals, I don’t have to make a case for them – the case is almost intuitively made in and of itself. But trying to do the counter and say, well, maybe fear appeals don’t work – it’s a much bigger leap to make that argument than to say, “Well, yeah, let’s scare someone into doing something, of course that’s gonna work, right?”
TS: I think it’s an interesting point. I think it’s just really important that we also remember, certainly in the context of using fear appeals, that there is a role beyond the CISO, as well. And it’s the role the board plays, it’s the culture of the organization, and how you set those individuals up for success. Like, on one hand as a CISO, the sky is always falling. There is always some piece of bad news or something that’s going wrong, or something you’re defending. And I think it’s again, maybe there’s something in that for thinking about how organizations can kind of empower CISOs, so that they can then go on to empower their people.
And so shifting gears slightly, we’ve spoken a lot about why fear appeals are maybe not a good idea, and how they are limited in their effectiveness. But what is the alternative? What advice would you give to the listeners on this podcast about how they can improve employee cybersecurity behavior through other means, especially as so many are now working remotely?
KR: Well, going back to what Marc was saying, we think the key really is self-efficacy. You’ve got to build confidence in people, without making them afraid.
A lot of the cybersecurity training you get in organizations is a two-hour session that brings everyone into a room to talk at them, or maybe people are required to do it online. That is not self-efficacy – that is awareness, and there’s a big difference. The thing is, you can’t deliver cybersecurity knowledge and self-efficacy like a COVID vaccination. It’s a long-term process, and employers really have to engage with the fact that it is a long-term process and just keep building people’s confidence.
What you said earlier about the whole community effect – up to now, cybersecurity has been a solo game. But it shouldn’t be a solo game like tennis, right? It’s a team sport. We need to get all the people in the organization helping each other to spot phishing messages or whatever. Make it a community sport, number one, where everybody supports each other in building that level of self-efficacy that we all need.
TS: I love that. And, yeah, I think we said it earlier, but this concept of teamwork, of coming together, is so, so important. Marc, would you add anything to that in terms of alternative means to fear appeals that leaders and CISOs can think about using with their employees?
MD: Yeah, I mean, it’s not gonna be one-size-fits-all. But whatever approach we use, as Karen said, we really do need to tap into that self-efficacy. By doing that, people are going to feel confident and empowered to be able to take action.
And we need to think about how people are motivated to take action. Fear is scaring them personally about consequences they may face, like termination or fines or something else. But if you start developing this in-this-together feeling, as I mentioned before, you’re developing an intrinsic motivation: “I’m not doing this because I’m fearful of the consequences so much as I’m doing this because we’re all in this together.” We want to make this better for everyone; we want to have a good company; we want to be able to help each other. And we want people to take the actions necessary to make sure that we’re secure, and that we’re here to be able to talk about it.
TS: Yeah, it’s exactly what both of you are saying that if somebody feels that they can’t, if they don’t have that self efficacy, they’re not going to raise things, they’re not going to bring it forward. And ultimately, that’s when disasters happen, and things can go really bad.
And then – I love the idea, and it makes complete sense, that if you are striking fear into the hearts of people, it’s not necessarily going to have the desired outcome 100% of the time. But isn’t a little bit of fear needed? When I say this, of course, it has to be used ethically. But when I’m thinking about the nature of what organizations are facing today – we’ve just heard about the SolarWinds hack, and there are a number of others as well – these things are pretty scary, and the techniques being used are pretty scary. So isn’t a little bit of fear required here? Is there any merit to using it to make people understand the severity and the consequences of what’s at stake?
MD: Yeah, I think there’s a difference between fear and providing people with information that might inherently have scary components to it. What I mean by that is: when people use fear appeals, they’re doing it to scare people into complying with some specific goal. Instead, we should provide information to people – we should let them know there are possible things that can happen, possible consequences – not with the goal of scaring them, but with the goal of empowering them through information. That, again, taps into self-efficacy more than anything else, because then they know there’s some kind of threat out there. They’re not scared, but they know there’s a threat. And if they feel empowered through knowledge and through that self-efficacy, then they’re more likely to take action, as opposed to receiving a message that’s just designed to scare them into compliance.
TS: From your experience, can either of you think of any really good examples of companies or campaigns that have built this kind of self-efficacy, or empowered people, without having to use fear as the motivating factor?
KR: I think I mentioned one of them in the paper. There’s an organization that I’m familiar with that had a major problem with phishing. They appointed one person, and if anybody had a suspicious message, he’d say, “You were quite right to report this to me, thank you so much for being part of the security perimeter of this organization. But this email looks fine, you can click.” Over time, this has actually built up efficacy. They don’t have phishing problems anymore in that organization, because they have this person. It’s almost an informal thing he does, but he’s building up self-efficacy slowly but surely across the organization, because nobody ever gets made to feel small or humiliated by reporting. We’re all participating; we’re all part of this. That is the best example I’ve seen of how this has actually worked.
TS: Yeah, I really like that. It’s like when people do risk audits: they’ll say the time the alarm should sound is when there’s nothing on the risk register. When the risk register is getting 5 to 10 entries every single week, you know that people actually do have the confidence to come forward. And they’re paying attention, right? They’re actually aware of these things.
And where I want to go next is the cybersecurity vendor side of things. Many companies that are trying to provide solutions to organizations rely quite heavily on this concept of fear, uncertainty and doubt – it’s even got its own acronym, right? FUD. And FUD is used heavily; as the saying goes, “bad news sells” – we see the scary headlines, and massive data breaches dominate the media landscape. So I think it’s fair to say eliminating FUD is going to be tough, and there is a lot of work to do here. In your opinion, who is responsible for changing the narrative? And what advice would you give them for how they can start doing this?
MD: I think it definitely starts with things such as having these conversations and trying to, I guess, place a little uncertainty or doubt into those decision makers and CISOs about how effective fear really is. It’s kind of flipping the script a little bit. And maybe part of it is we need a new acronym – to say, well, give this a try, this is why we think it’s going to work, this is what the research shows, and this is what your peer organizations are doing; they find it very effective, and their employees feel more empowered.
So I think a lot of it is just beginning with those conversations and trying to flip the script a little bit. It’s always easy to criticize something, but then the bigger question is: okay, if we stop taking the use of fear and its effectiveness for granted, what are we going to replace it with?
We know that self-efficacy is the major player there, but what’s that going to look like? I think Karen gave a great example of what one organization is doing, which is improving levels of self-efficacy. It’s creating that spirit of we’re-all-in-this-together, and it’s less about a formalised, punitive type of system. Different organizations might take slightly different approaches, but I think the concepts will be the same.
TS: Again, it ties into a really important point, which is that more understanding is needed, I think, both by the lay person and by the people putting these messages out.
And, and then I think, Marc, to your point just about this being collective responsibility. I mean, I see it as a great opportunity as well, because I think everyone would welcome some more positivity and optimism, right? And if we can actually bring that to the security community, which is, you know, generally a fearful community, focusing on defense and threat actors. The language, the aesthetic, everything is generally negative, fearful, scary. I think there’s a great opportunity here, which is that, you know, doesn’t have to be that way and that we can come together. And we can have a much more positive dialogue and a much more positive response around it.
There was something that I wanted to touch on. Karen, you speak about, in your research, this concept of “Cybersecurity Differently.” And you explain, and I’m going to quote you verbatim here – “It’s so important that we change mindsets from the human-as-a-problem to human-as-solution in order to improve cybersecurity across the sociotechnical system.” What do you mean by that? And what are the core principles of Cybersecurity Differently?
KR: When you treat your users as a problem, that informs the way you manage them. So what you see in a lot of organizations, because they see their employees’ behaviors as a problem: they’ll train them, they’ll constrain them, and then they’ll blame them when things go wrong. That’s the paradigm.
But what you’re actually doing is excluding them from being part of the solution, so it creates the very problem you’re trying to solve. What you want is for everyone to feel that they’re part of the security defense of the organization. I did this research with Marina Timmerman, from the Technical University of Darmstadt. And so the principles are:
One we’ve been speaking about a lot: encourage collaboration and communication between colleagues, so that people can support each other. We also want to encourage everyone to learn – it should be a lifelong learning thing, not just something that IT departments have to worry about.
It isn’t solo, as I’ve said before, and you have to build resilience as well as resistance. Currently, a lot of the effort goes into resisting anything that somebody could do wrong, but then you don’t have a way of bouncing back when things do go wrong, because all the focus is on resistance. And a lot of the time we treat security awareness training and policies as one-size-fits-all. That doesn’t defer to people’s expertise. It doesn’t go to the people and say, “Okay, here’s what we’re proposing – is it going to be possible for you to do these things in a secure way? And if not, how can we support you to make what you’re doing more secure?”
Then, you know, people make mistakes. When a phishing message comes into an organization, everyone focuses on the people who fell for it. But there were many, many more people who didn’t fall for it. So what we need to do is examine the successes: why did those people spot the phishing message, and what can we learn from them, so that we can encourage that in the people who did happen to make mistakes?
I didn’t get these ideas just out of the air – I got them from some very insightful people. One of them was Sidney Dekker, who has applied this paradigm in the safety field. What’s interesting is that he got Woolworths in Australia to allow him to apply the paradigm in some of their stores. They previously had signs up all over the store – “Don’t leave a mess here,” “Don’t do this” – and they had weekly training on safety. He said, right, we’re taking all the signs out. Instead, we’re just going to say: you have one job, don’t let anyone get hurt. And the stores that applied that approach won the safety prize for Woolworths the next year.
So, you know, just the idea that everyone realized it was their responsibility – it wasn’t all about fear and rules and that sort of thing. So I thought, if he could do this in safety, where people actually get harmed for life or killed, surely we can do this in cyber?!
And then I found a guy who ran a nuclear submarine in the United States – his name is David Marquet. He applied the same thing on his nuclear submarine, which you would also think: oh my goodness, a nuclear submarine, there’s so much potential for really bad things to happen! But he applied the same sort of paradigm shift – and it worked! He won the prize for the best-run nuclear submarine in the US Navy. So it’s about being brave enough to say: actually, what we’re doing is not working, and every year it’s still not working. Maybe it’s time to think, well, can we do something different?
But like you said, Marc, we need a brave organization to say, okay, we’re gonna try this. And we haven’t managed to find one yet. But we will, we will!
TS: And that’s one of the things I wanted to close out on. I mentioned at the beginning of this podcast how much I loved the article in the Wall Street Journal, but also just the mission that both of you are on – to improve what I really see as the relationship between people and the cybersecurity function. My question to you, again touching on that concept: how much progress have we actually made? And then, to close, how optimistic are you that we can actually flip the script and stop using fear appeals?
MD: Yeah, I feel like we’ve made a lot of progress, but not nearly enough. And part of the challenge, too, is that none of this stuff is static, right? All of it is constantly changing: the cybersecurity threats out there change, we’re talking so much about phishing today, and social engineering is going to be something different next year. So it’s always this idea of playing catch-up. But it’s also about having the fortitude to take that step out there – to take that leap of faith that maybe we can do something else besides using fear.
I am optimistic that it can be done – that we can make a lot of progress. For it to be done, you know, 100%… I don’t know that we’ll ever get to that point. But I feel like we can make a lot of progress. And part of this is recognizing – you mentioned the sociotechnical side – that this isn’t just a technical problem, right? A lot of the time, the people we throw into cybersecurity positions have very strong technical backgrounds, but they’re not bringing in other disciplines. From the arts, from literature, from the humanities, and from design, we can bring new considerations and look at this as a holistic, multidisciplinary problem. And if the problem is like that, well, then the solutions definitely have to be as well.
We have to acknowledge that and start trying to get creative with the solutions. And we need those brave organizations to try these different approaches. I think they’ll be pleased with the results, because they’re probably spending a lot of time and money right now trying to make the organization more secure. The CISOs are telling their bosses, well, this is what we’re doing: we’re scaring them. But the results don’t always speak for themselves.
TS: And, Karen, what would you add to that?
KR: Well, I just totally concur with everything Marc said – I think he’s rounded this off very nicely. I ran a study recently – a really unusual study – where we put old-fashioned typewriters in coffee shops and other places, with pieces of paper in them. We just typed something along the top that said, “When I think about cybersecurity, I feel…” and we got unbelievable stuff back from people: “I don’t understand it,” “I’m uncertain.” Lots and lots of negative responses – so there’s a lot of negative emotion around cyber, and that’s not good for cybersecurity. So I’d really like to see something different. You know the old saying: if you keep doing the same thing without getting results, there’s something wrong. We can see it’s not working, so this might be the time to change and make it work.
TS: I completely agree. Thank you both so much for that great discussion. I really enjoyed going deeper and hearing your thoughts on all of this. As you say, I think it’s a win-win scenario on so many counts: more positivity means better outcomes for employees, and I think it means better outcomes for the security function.
If you enjoyed our show, please rate and review it on Apple, Spotify, Google or wherever you get your podcasts. And remember you can access all the RE:Human Security Layer podcasts here.