Spear Phishing
How to Prevent and Avoid Falling for Email Spoofing Attacks
By Maddie Rosenthal
Friday, January 22nd, 2021
Email spoofing is a common way for cybercriminals to launch phishing attacks — and just one successful phishing attack can devastate your business. That’s why every secure organization has a strategy for detecting and filtering out spoofed emails. Do you?

This article will walk you through some of the best methods for preventing email spoofing. And, if you’re wondering how to prevent your own email address or domain from being spoofed, the first step is to enable DMARC. But even that isn’t enough. We explain why in this article: Why DMARC Isn’t Enough to Stop Impersonation Attacks.

Security awareness training

Email spoofing is a common tactic in social engineering attacks such as spear phishing, CEO fraud, and Business Email Compromise (BEC). Social engineering attacks exploit people’s trust to persuade them to click a phishing link, download a malicious file, or make a fraudulent payment. That means part of the solution lies in educating the people being targeted.

It’s important to note that cyberattacks target employees at every level of a company — which means cybersecurity is everyone’s responsibility. Security awareness training can help employees recognize when such an attack is underway and understand how to respond.

In our article “What Is Email Spoofing?” we looked at how an email’s header can reveal that the sender address has been spoofed. Looking “under the hood” of an email’s header is a useful exercise to help employees understand how email spoofing works. You can see if the email failed authentication processes like SPF, DKIM, and DMARC, and check whether the “Received” and “From” headers point to different domains.

But it’s not realistic to expect people to carefully inspect the header of every email they receive. So what are some other giveaways that might suggest an email spoofing scam is underway?

The email doesn’t look how you expect. The sender might be “paypal.com.” But does the email really look like PayPal’s other emails?
Most sophisticated cybercriminals use the spoofed company’s branding — but some do make mistakes.

The email contains spelling and grammar errors. Again, these mistakes aren’t common among professional cybercriminals, but they can still occur.

The email uses an urgent tone. If the boss emails you, urgently requesting that you pay an invoice into an unrecognized account, take a moment. This could be CEO fraud.

You must get your whole team on board to defend against cybersecurity threats, and security awareness training can help you do this. However, Tessian research suggests that the effectiveness of security training is limited.

Email provider warnings

Your mail server is another line of defense against spoofing attacks. Email servers check whether incoming emails have failed authentication processes such as SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting, and Conformance). Many email providers will warn the user if an email has failed authentication. Here’s an example of such a warning from Protonmail:
As part of your company’s security awareness training, you can urge employees to pay close attention to these warnings and report them to your IT or cybersecurity team. However, it’s not safe to rely on your email provider alone. A 2018 Virginia Tech study looked at how 35 popular email providers handled email spoofing. The study found:

All except one of the email providers allowed fraudulent emails to reach users’ inboxes.

Only eight of the providers displayed a warning about suspicious emails on their web apps.

Only four of the providers displayed such a warning on their mobile apps.

Authentication protocols

As the Virginia Tech study showed, email providers often allow fraudulent emails through their filters — even when they fail authentication. But perhaps more importantly, whether a fraudulent email fails authentication in the first place is out of your hands.

For example, SPF lets a domain owner list which email servers are authorized to send emails from its domain. And DMARC enables domain owners to specify whether recipient mail servers should reject, quarantine, or allow emails that have failed SPF authentication.

So, for domain owners, setting up SPF, DKIM, and DMARC records is an essential step to prevent cybercriminals and spammers from sending spoofed emails using their domain name. But as the recipient, you can’t control whether the domain owner has properly set up its authentication records. You certainly don’t want your cybersecurity strategy to depend on the actions of other organizations.

Email security software

Effective email spoofing attacks are very persuasive. The email arrives from a seemingly valid address — and it might contain the same branding, tone, and content you’d expect from the supposed sender. This makes email spoofing one of the hardest cybercrimes to detect manually. Humans aren’t good at spotting the subtle and technical indicators of a well-planned email spoofing attack.
Legacy solutions like Secure Email Gateways and native tools like spam filters aren’t either. The best approach to tackling spoofing — or any social engineering attack — is intelligent technology. Email security solutions powered by machine learning (ML) automate the process of detecting and flagging spoofed emails, making it easier, more consistent, and more effective.

Here’s how Tessian Defender solves the problem of email spoofing:

Tessian’s machine learning algorithms analyze each employee’s email data. The software learns each employee’s email style and maps their trusted email relationships. It learns what “normal” looks like so it can spot suspicious email activity.

Tessian performs a deep inspection on inbound emails. By checking the sender’s IP address, email client, and other metadata, Tessian can detect indications of email spoofing and other threats.

If it suspects an email is malicious, Tessian alerts employees using easy-to-understand language.

Want to learn more? Here are some resources:

Tessian Defender Data Sheet
Customer Stories
Report: To Prevent Spear Phishing, Look for Impersonation

If you’d rather talk to someone about your specific challenges, you can talk to an expert at Tessian.
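The header checks described earlier — failed SPF/DKIM/DMARC results and a mismatch between the visible “From” domain and the envelope sender — can be sketched with Python’s standard email library. Note that the message, its header values, and the `spoofing_signals` helper below are hypothetical illustrations: real Authentication-Results headers vary by provider and need more robust parsing in production.

```python
import email
from email import policy

# A sample raw message (hypothetical) where the visible From domain does not
# match the Return-Path, and the receiving server recorded failed checks.
RAW = """\
Return-Path: <bounce@mail.attacker.example>
Authentication-Results: mx.example.com;
 spf=fail smtp.mailfrom=mail.attacker.example;
 dkim=none;
 dmarc=fail header.from=paypal.com
From: "PayPal" <service@paypal.com>
To: victim@example.com
Subject: Urgent: verify your account

Please pay this invoice immediately.
"""

def domain_of(addr: str) -> str:
    """Return the lowercased domain part of an email address string."""
    return addr.rsplit("@", 1)[-1].strip(" <>").lower()

def spoofing_signals(raw: str) -> list:
    """Collect simple header-level red flags suggesting a possible spoof."""
    msg = email.message_from_string(raw, policy=policy.default)
    signals = []

    # 1. Did the receiving server record failed or absent authentication checks?
    auth = msg.get("Authentication-Results", "")
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=fail" in auth or f"{check}=none" in auth:
            signals.append(f"{check} did not pass")

    # 2. Does the visible From domain match the Return-Path (envelope) domain?
    from_dom = domain_of(str(msg.get("From", "")))
    return_dom = domain_of(str(msg.get("Return-Path", "")))
    if from_dom and return_dom and not return_dom.endswith(from_dom):
        signals.append(
            f"From domain {from_dom!r} != Return-Path domain {return_dom!r}"
        )

    return signals

if __name__ == "__main__":
    for s in spoofing_signals(RAW):
        print("warning:", s)
```

This is roughly what an automated filter does at the header level; a real solution layers many more signals (IP reputation, sending patterns, content analysis) on top of these basic checks.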
Human Layer Security, Podcast,
Episode 4: The Fear Factor with Dr. Karen Renaud and Dr. Mark Dupuis
By Laura Brooks
Wednesday, January 20th, 2021
We have a fascinating episode lined up for you this week, as I’m delighted to be joined by Dr. Karen Renaud and Dr. Mark Dupuis. Dr. Renaud is an esteemed Professor and Computer Scientist at Abertay University, whose research focuses on all aspects of human-centred security and privacy. Through her work, she says, she wants to improve the boundary where humans and cybersecurity meet. Dr. Dupuis is an Assistant Professor in the Computing and Software Systems division at the University of Washington Bothell. He also specializes in the human factors of cybersecurity, primarily examining psychological traits and their relationship to the cybersecurity and privacy behaviour of individuals.

Together they are exploring the use of fear appeals in cybersecurity, answering questions like whether they work, or whether there are more effective ways to drive behavioral change. They recently shared their findings in the Wall Street Journal, in a brilliant article titled Why Companies Should Stop Scaring Employees About Security. And they’re here today to shed some more light on the topic. Karen, Mark, welcome to the podcast!

Tim Sadler: To kick things off, let’s discuss that Wall Street Journal article, in which you essentially concluded that fear and scaremongering just don’t work when it comes to encouraging people to practice safer cybersecurity behaviors. So why is this the case?

Dr Mark Dupuis: Well, I think one of the interesting things if we look at the use of fear is that fear is an emotion, and emotions are inherently short-term effects. So in some research that I did, about eight years ago, one thing I looked at was trait affect – which is a generally stable, lifelong type of affect. And I tried to understand how it relates to how individuals, whether in an organizational setting or a home setting, perceive a threat, that cybersecurity threat, as well as their belief in being able to take protective measures to try and address that threat.
And one of the interesting things from that research was how important the role of self-efficacy was, but perhaps more importantly, the relationship between trait positive affect and self-efficacy. Trait positive affect is, broadly, a stable disposition towards feelings of happiness and positivity. And what this gets at is: the higher the levels of positivity we have with respect to trait affect, the more confident we feel in being able to take protective measures.
So how this relates to fear is: if we need people to take protective measures, and we know that their self-efficacy, their level of confidence, is related to positive affect, why then are we continually going down the road of using fear – a short-term emotion – to try and engender behavioral change? And so that was an interesting conversation that Karen and I had, and then we started thinking, well, let’s take a look at the role of fear specifically.

TS: Karen, what would you add to that?

Dr Karen Renaud: Well, you know, I had seen Mark’s background, and I’d always wanted to look at fear, because I don’t like to be scared into doing things, personally. And I suspect I’m not unusual in that. And when we started to look at the literature, we just confirmed that businesses were trying to use a short-term measure to solve a long-term problem.

TS: Yeah. And so, yeah, I was gonna say, why do you think that is? Using fear seems like such a default approach in so many things. I’m thinking about how people sell insurance, you know: it’s fear, to try and drive people to believe that, hey, your home’s gonna get burgled tomorrow, you’d better get insurance so you can protect against the bad thing happening. Why do you think companies just go to fear as this stick to get people to do what they’re supposed to do?

KR: It feels to me as if the thing that intuitively you think will work often doesn’t work. So, you know, the nasty pictures they put on the side of cigarette packets actually are not very effective in stopping heavy smokers. Whereas somebody who doesn’t smoke thinks, oh my gosh, this is definitely going to scare people, and we’re going to get behavioral change – it actually doesn’t work. So sometimes intuition is just wrong.
I think in this case, it’s a case of not really doing the research the way we did, to say, actually, this is probably not effective, but going, well, intuitively this is going to work. You know, when I was at school, they used to scare kids to get them to study. Now we know that that was really a bad thing to do. Children don’t learn when they’re afraid. So we should start taking those lessons from education and applying them in the rest of our lives as well.
TS: Yeah, I think it’s a really good call that, as a society, we generally need to get better at understanding how these kinds of fear appeals work and engage with people. And then, maybe if we go a layer deeper into this concept of fear tactics: are people becoming immune to them? 2020 was a really bad year; a lot of people faced heightened levels of stress and anxiety as a result of the pandemic and all of that change. Do you think this is playing a part in why fear appeals don’t work?

KR: Well, yeah, I think you’re right. The literature tells us that when people are targeted by a fear appeal, they can respond in one of two ways. They can engage in a danger control response, which is what the designer of the fear appeal recommends they do. For example: if you don’t make backups, you can lose all your photos if you get attacked. So the person engaging in a danger control response will make the backup – they’ll do as they’re told.

But they might also engage in a fear control response, which is the other option people can take. In this case, they don’t like the feeling of fear, and so they act to stop feeling it. They attack the fear, rather than the danger. They might go into denial or get angry with you. The upshot is they will not take the recommended action.

So if cybersecurity is all you have to worry about, you might say, “Okay, I’m going to engage in that danger control response.” But we have so many fear appeals to deal with anyway, and this year it’s been over the top. So if you add fear appeals to that, folks will just say, “I can’t be doing with this. I’m not going to take this on board.” So I think you’re absolutely right. People are fearful about other things as well as COVID, and this adds another layer to that. But what we also thought about was how ethical it actually is to add to people’s existing levels of anxiety and fear…
TS: And do you think that this compounds? If people are already feeling anxious and stressed about a bunch of other things, is adding one more thing to feel scared about even less likely to have the intended effect on their behavior?

MD: Yeah, I mean, I think so. I think it just burns people out, and you get this repeated messaging. One thing I think about – because in the States we just got through this whole election cycle, and maybe we’re still in it – is how all these political ads use fear time and time and time again. And especially with political ads, I think people do start to tune out. They just want to be done with it. And so it’s one of these things that, I think, just loses its efficacy; people have simply had enough.

I have a three-and-a-half-year-old son. And you know, my daughter was very good at listening to us when we said, “This is dangerous, don’t do this.” But my son – I’m like, “Don’t get up there. You’re gonna crack your head open, don’t do this.” And he ignores me, first of all, and then he does it anyway. And he doesn’t crack his head open. And he says, “See, Daddy, I didn’t crack my head open.” But that gets to another point: if we try to scare people into doing something, but they don’t do it and nothing bad happens, it only reinforces the idea that “Oh, it can’t be that bad anyway.”

KR: Yeah, you’re right, because the cause and the effect are so far apart. If you divulge your email address or your password somewhere, a lot of the time you don’t even make the connection to an attack that happens much later.

But it’s really interesting. If you look back at the Second World War, Germany decided to bomb the daylights out of London.
And the idea was to make the Londoners so afraid that the British would capitulate. But what happened was a really odd thing: they became more defiant. And so we need to look back at that sort of thing. Somebody called McCurdy, who wrote a book about this, said people got tired of being afraid. They just said, “No, I don’t care how many bombs you’re throwing at us. We’re just not going to be afraid.” Nowadays, people have so many fear appeals thrown at them that the appeals are losing their efficacy.

TS: A very timely example, talking about the Blitz in World War II, as I just finished reading a book about exactly that: the resilience of the British people through that particular period. And as you say, Karen – I knew very little about this topic – it absolutely had the unintended consequence of bringing people together. It was like a rallying cry for the country to say, “We’re not going to stand for this; we are going to fight it.”

And I guess everything you’re saying is reinforced by the research you conducted as well, which completely makes sense. I’m going to read from some notes here. In the research paper, you surveyed CISOs about their own use of fear appeals in their organizations – how Chief Information Security Officers actually engage with their employees. It found 55% were against using fear appeals, with one saying, “Fear is known to paralyse normal decision making and reactions.” And 36% thought that fear appeals were acceptable, with one saying that “fear is an excellent motivator.” And not a single CISO ranked scary messages as the most effective technique. What were your thoughts on these findings? Were you surprised by them?
MD: We were, I think, surprised that so many were against the use of fear appeals. These are the individuals chiefly responsible for the information security of the organization, and here they are telling us, yeah, we don’t believe in using fear appeals. And there are multiple reasons for this. One, maybe they don’t believe in the efficacy of it. But I think it’s also because, while we don’t know how effective a fear appeal is going to be, we do know that it can damage the employee/employer relationship. Add in the ethical issues related to it, and you start to add up the possible negative ramifications of using fear appeals.

And it was interesting, going back to that example from World War II: you think about why what England was doing was effective. It’s because they were in this together; they had this sense of a communal response – we’re sick of being scared, we’re in this together, we’re gonna fight this together. And I think maybe CISOs are starting to see that, to try and make the employee/employer relationship more positive and empower their employees rather than trying to scare them and hurt that relationship.

TS: And there was one really interesting finding, which was that the longest-serving CISOs – i.e. those with more experience – were more likely to approve of the use of cybersecurity fear appeals. Why do you think that is? Is fear maybe an old-school way of thinking about cybersecurity?

KR: I think as a CISO, it’s really difficult to stay up to date with the latest research, the latest way of thinking. They spend a lot of time keeping their finger on the pulse of cyber threat models and the compromises hackers are coming up with. But if you go and look at the research, attitudes towards users are slowly changing. And maybe the people who approve of fear appeals aren’t aware of that.
Or it might be that they’ve become so exasperated by the behavior of their employees over the years that they just don’t have the appetite for slower behavioral change mechanisms. And I understand that exasperation. But I was really quite heartened to see that the others said no, this is not working – especially the younger ones. So you feel that cultural change is happening.

TS: One thing I was gonna ask: there’s this interesting concept of the CISOs themselves and whether they use fear appeals in their organization. Do you think that’s somewhat a function of how fear appeals are used on them, if that makes sense? They have a board that they’re reporting to, they have a boss, they have stakeholders they’ve got to deliver results for – namely, keep the organization secure, keep our data secure, keep our people secure. Do you think there’s a relationship between how fear appeals are used on them and how they then use them on others in their organization?

MD: I think that’s an interesting question. I mean, I think that’s always possible. A lot of times people default to what they know, what they’re comfortable with, and what they’ve experienced, and maybe that’s why we see some of the CISOs that have been in the role longer default to that. And some of it might be organizational and structural as well. Like I said, if they are constantly being bombarded with fear appeals by those they report to, then maybe they are more likely to engage in fear appeals themselves. The answer is a little unclear. But I do think it’s an interesting question because, again, intuitively it makes sense. I can have a conversation with someone and, if I want to use fear appeals, I don’t have to make a case for them. The case almost makes itself.
But trying to make the counter-argument – that maybe fear appeals don’t work – is a much bigger leap than saying, “Well, yeah, let’s scare someone into doing something; of course that’s gonna work, right?”
TS: I think it’s an interesting point. And I think it’s really important that we remember, certainly in the context of using fear appeals, that there is a role beyond the CISO as well. It’s the role the board plays, it’s the culture of the organization, and how you set those individuals up for success. On one hand, as a CISO, the sky is always falling: there is always some piece of bad news, something that’s going wrong, or something you’re defending against. So maybe there’s something in thinking about how organizations can empower CISOs, so that they can then go on to empower their people.

And so, shifting gears slightly: we’ve spoken a lot about why fear appeals are maybe not a good idea and how limited their effectiveness is. But what is the alternative? What advice would you give the listeners of this podcast about how they can improve employee cybersecurity behavior through other means, especially as so many are now working remotely?

KR: Well, going back to what Mark was saying, we think the key really is self-efficacy. You’ve got to build confidence in people, without making them afraid. A lot of the cybersecurity training you get in organizations is a two-hour session that brings everyone into a room, and we talk with them. Or maybe people are required to do it online. This is not self-efficacy. This is awareness. And there’s a big difference.

The thing is, you can’t deliver cybersecurity knowledge and self-efficacy like a COVID vaccination. It’s a long-term process, and employers really have to engage with the fact that it is a long-term process, and just keep building people’s confidence. And what you said earlier about the whole community effect: up to now, cybersecurity has been a solo game. But it isn’t a solo game like tennis, right? It’s a team sport. And we need to get all the people in the organization helping each other to spot phishing messages or whatever.
So, you know, make it a community sport, number one, where everybody supports each other in building that level of self-efficacy that we all need.

TS: I love that. And I think we said it earlier, but this concept of teamwork and coming together is so, so important. Mark, would you add anything in terms of alternative means to fear appeals that leaders and CISOs can think about using with their employees?

MD: Yeah, I mean, it’s not gonna be one-size-fits-all. But whatever approach we use, as Karen said, we really do need to tap into that self-efficacy, because by doing that, people are going to feel confident and empowered to take action. And we need to think about how people are motivated to take action. Fear is scaring them personally about consequences they may face, like termination or fines or something else. But if you start developing this in-this-together mentality, as I mentioned before, you’re developing an intrinsic motivation: “I’m not doing this because I’m fearful of the consequences so much as because we’re all in this together.” We want to make this better for everyone. We want to have a good company, we want to be able to help each other. And we want people to take the actions that are necessary to make sure that we are secure, and that we’re able to talk about it.

TS: Yeah, it’s exactly what both of you are saying: if somebody doesn’t have that self-efficacy, they’re not going to raise things, they’re not going to bring them forward. And ultimately, that’s when disasters happen and things can go really bad. It makes complete sense that striking fear into the hearts of people isn’t necessarily going to have the desired outcome 100% of the time. But isn’t a little bit of fear needed?
I mean, when I say this, of course, it has to be used ethically. But I’m thinking about the nature of what organizations are facing today. We’ve just heard about the SolarWinds hack, and there are a number of others as well. These things are pretty scary, and the techniques being used are pretty scary. So isn’t a little bit of fear required here? And is there any merit in using it to make people understand the severity and the consequences of what’s at stake?

MD: Yeah, I think there’s a difference between fear and providing people with information that might inherently have scary components to it. What I mean by that is: when people use fear appeals, they’re often doing it to scare people into complying with some specific goal. Instead, we should provide information to people – we should let people know that there are some possible things that can happen, some possible consequences – but not with the goal of scaring them; rather, with the goal of empowering them by giving them information. That, again, taps into self-efficacy more than anything else, because then they know there’s some kind of threat out there. They’re not scared, but they know there’s a threat. And if they feel empowered through knowledge and through that self-efficacy, then they’re more likely to take action, as opposed to receiving a message that’s just designed to scare them into compliance.
TS: From your experience, can either of you think of any really good examples of companies or campaigns that have built this kind of self-efficacy, or empowered people, without having to use fear as the motivating factor?

KR: I think I mentioned one of them in the paper. There’s an organization that I’m familiar with that had a major problem with phishing. They appointed one person, and if anybody had a suspicious message, they would send it to him, and he would say, “You were quite right to report this to me; thank you so much for being part of the security perimeter of this organization. But this email looks fine, you can click.” Over time, this has actually built up self-efficacy. They don’t have phishing problems anymore in that organization, because they have this person. It’s almost an informal thing he does, but he’s building up self-efficacy slowly but surely across the organization, because nobody ever gets made to feel small or humiliated for reporting. We’re all participating; we’re all part of this. That is the best example I’ve seen of how this can work.

TS: Yeah, I really like that. When people do risk audits, they will say that the time the alarm should sound is when there’s nothing on the risk register. When the risk register is getting 5-10 entries every single week, you know that people actually do have the confidence to come forward. And they’re paying attention, right? They’re actually aware of these things.

Where I want to go next is the cybersecurity vendor side of things. You know, many companies that provide solutions to organizations rely quite heavily on this concept of fear, uncertainty, and doubt. It’s even got its own acronym, right? FUD. And FUD is used so heavily. As the saying goes, “bad news sells” – we see scary headlines, and massive data breaches dominate the media landscape.
So I think it’s fair to say eliminating FUD is going to be tough, and there is a lot of work to do here. In your opinion, who is responsible for changing the narrative? And what advice would you give them for how they can start doing this?

MD: I think it definitely starts with things such as having these conversations and trying to, I guess, place a little uncertainty or doubt into those decision makers and CISOs about how effective fear really is. It’s flipping the script a little bit. And maybe part of it is we need a new acronym – to say, give this a try, or this is why we think this is going to work, or this is what the research shows, and this is what your peer organizations are doing, and they find it very effective; their employees feel more empowered. So I think a lot of it is just beginning with those conversations and trying to flip the script a little bit to help CISOs see this. It’s always easy to criticize something, but then the bigger question is: okay, if we stop taking the use of fear and its effectiveness for granted, what are we going to replace it with?

We know that self-efficacy is the major player there, but what’s that going to look like? I think Karen gave a great example of what one organization is doing to improve levels of self-efficacy. It’s creating that spirit of we’re-all-in-this-together, and it’s less about a formalised, punitive type of system. So look at ways to tap into that. For another organization, the approach might be slightly different, but I think the concepts will be the same.
TS: Again, that ties into a really important point, which is that more understanding is needed, I think, by the lay person, or by the people who are putting this out. And then, Mark, to your point about this being a collective responsibility: I see it as a great opportunity as well, because I think everyone would welcome some more positivity and optimism, right? If we can bring that to the security community – which is, you know, generally a fearful community, focused on defense and threat actors, where the language, the aesthetic, everything is generally negative, fearful, scary – I think there’s a great opportunity here. It doesn’t have to be that way; we can come together, and we can have a much more positive dialogue and a much more positive response.

There was something else I wanted to touch on. Karen, in your research you speak about this concept of “Cybersecurity Differently.” And you explain – I’m going to quote you verbatim here – “It’s so important that we change mindsets from the human-as-a-problem to human-as-solution in order to improve cybersecurity across the sociotechnical system.” What do you mean by that? And what are the core principles of Cybersecurity Differently?

KR: When you treat your users as a problem, that informs the way you manage them. So what you see in a lot of organizations, because they see their employees’ behaviors as a problem, is that they’ll train them, they’ll constrain them, and then they’ll blame them when things go wrong. That’s the paradigm. But what you’re actually doing is excluding them from being part of the solution. So it creates the very problem you’re trying to solve. What you want is for everyone to feel that they’re part of the security defense of the organization. I did this research with Marina Timmerman, from the Technical University of Darmstadt.
And so the principles are these. The first is one we’ve been speaking about a lot: encourage collaboration and communication between colleagues, so that people can support each other. Second, we want to encourage everyone to learn. It should be a lifelong learning thing, not just something that IT departments have to worry about; it isn’t solo, as I’ve said before. Third, you have to build resilience as well as resistance. Currently, a lot of the effort goes into resisting anything that somebody could do wrong, but you don’t then have a way of bouncing back when things do go wrong, because all the focus is on resistance.

And, you know, a lot of the time we treat security awareness training and policies as one-size-fits-all. But that doesn’t defer to people’s expertise. It doesn’t go to the people and say, “Okay, here’s what we’re proposing. Is it going to be possible for you to do these things in a secure way? And if not, how can we support you to make what you’re doing more secure?”

Then, you know, people make mistakes. When a phishing message comes to an organization, everyone focuses on the people who fell for it. But there were many, many more people who didn’t fall for it. And so what we need to do is examine the successes. What can we learn from those people? Why did they spot that phishing message? Then we can encourage that in the people who did happen to make mistakes.

I didn’t get these ideas just out of the air. I got them from some very insightful people. One of them was Sidney Dekker, who has applied this paradigm in the safety field. What’s interesting is that he got Woolworths in Australia to allow him to apply the paradigm in some of their stores. They previously had all these signs up all over the store – “Don’t leave a mess here” and “Don’t do this” – and they had weekly training on safety. He said, right, we’re taking all the signs out. Instead, we’re just going to say: you have one job, don’t let anyone get hurt.
And the stores that applied that approach won the safety prize for Woolworths the next year. So, you know, just the idea that everyone realized it was their responsibility. And it wasn’t all about fear, you know, rules and that sort of thing. So I thought, if he could do this in safety, where people actually get harmed for life or killed, surely we can do this in cyber?! And then I found a guy who ran a nuclear submarine in the United States. His name is David Marquet. He applied the same thing on his nuclear submarine, which, you would also think, oh my goodness, a nuclear submarine! There’s so much potential for really bad things to happen! But he applied the same sort of paradigm shift – and it worked! He won the prize for the best-run nuclear submarine in the US Navy. So it’s about being brave enough to say: actually, you know, what we’re doing is not working, and every year it’s not working. Maybe it’s time to think, well, can we do something different?  But like you said, Marc, we need a brave organization to say, okay, we’re going to try this. And we haven’t managed to find one yet. But we will, we will! TS: And that’s one of the things I wanted to close out on. As I said to you at the beginning of this podcast, I love the article in the Wall Street Journal, but also just the mission that both of you are on – to improve what I really see as the relationship between people and the cybersecurity function. And my question to you, again, touches on that concept: how much progress have we actually made? And then, to close, how optimistic are you that we can actually flip the script and stop using fear appeals? MD: Yeah, I feel like we’ve made a lot of progress, but not nearly enough. And part of the challenge, too, is that none of this stuff is static, right?
All this stuff is constantly changing; the cybersecurity threats out there change. We’re talking so much about phishing today, and social engineering is going to be something different next year. So it’s always this idea of playing catch-up. But also, you know, having the fortitude to take that step out there, to take that leap of faith that maybe we can do something else besides using fear.
MD: I think I am optimistic that it can be done. We can make a lot of progress. For it to actually be done to, you know, 100%… I don’t know that we’ll ever get to that point. But I feel like we can make a lot of progress. And part of this is recognizing the fact that – you mentioned the sociotechnical side of this – this isn’t just a technical problem, right? A lot of times the people we throw into cybersecurity positions have a very strong technical background, but they’re not bringing in other disciplines. Perhaps from the arts, from literature, from the humanities, and from design, we can bring new considerations to try and look at this as a very holistic, multidisciplinary problem. If the problem is like that, well, then the solutions definitely have to be as well.  We have to acknowledge that and start trying to get creative with the solutions. And we need those brave organizations to try these different approaches. I think they’ll be pleased with the results, because they’re probably spending a lot of time and money right now to try and make the organization more secure. The CISOs are telling their bosses, well, this is what we’re doing: we’re scaring them. But the results don’t always speak for themselves.  TS: And, Karen, what would you add to that? KR: Well, I just totally concur with everything Marc said; I think he’s rounded this off very nicely. I ran a study recently – it was a really unusual study – where we put old-fashioned typewriters in coffee shops and all over, and we put pieces of paper in them. We just typed something along the top that said, “When I think about cybersecurity, I feel…” and we got unbelievable stuff back from people: “I don’t understand it,” “I’m uncertain.” Lots and lots of negative responses – so there’s a lot of negative emotion around cyber. And that’s not good for cybersecurity. So I’d really like to see something different.
And, you know, the old saying: if you keep doing the same thing without getting results, there’s something wrong. We can see it’s not working; this might be the best way of changing and making it work. TS: I completely agree. I completely agree. Thank you both so much for that great discussion. I really enjoyed getting deeper into this and hearing your thoughts on it all. As you say, I think it’s a win-win scenario on so many counts. More positivity means better outcomes for employees, and I think it means better outcomes for the security function.   If you enjoyed our show, please rate and review it on Apple, Spotify, Google or wherever you get your podcasts. And remember, you can access all the RE: Human Layer Security podcasts here.
Human Layer Security, Podcast
6 Cybersecurity Podcasts to Listen to Now
Tuesday, January 19th, 2021
If you’re interested in cybersecurity, this list is for you.  We’ve collated six of the best cybersecurity podcasts — where engaging hosts provide breaking news, intelligent analysis, and inspiring interviews. The CyberWire Daily Launched: December 2015 Average episode length: 25 minutes Release cycle: Daily As one of the most prolific and productive cybersecurity news networks, CyberWire has access to world-class guests, top research, and breaking news. The CyberWire Daily brings listeners news briefings and a great variety of in-depth cybersecurity content. The CyberWire Daily showcases episodes from across CyberWire’s podcast catalog, including Career Notes, in which security leaders discuss their life and work; Research Saturday, where cybersecurity researchers talk about key emerging threats; and Hacking Humans, which focuses on social engineering. Here are some great recent episodes: Deep Instinct’s Shimon Oren talks about his research on the worrying re-emergence of the Emotet malware Craig Williams, head of outreach at Cisco’s Talos Unit, discusses the perils of malicious online ads (malvertising) Ann Johnson, Microsoft’s Corporate VP of Cybersecurity Business Development, discusses her career journey from lawyer to cybersecurity executive Unsupervised Learning Launched: January 2015 Average episode length: 25 minutes Release cycle: Weekly Originally called “Take 1 Security,” Daniel Miessler’s Unsupervised Learning podcast is an insightful look at long-running themes and emerging issues in cybersecurity. Miessler has provided thoughtful written commentary on cybersecurity for over two decades. His podcast’s format varies: most weeks involve a run-down of the week’s cybersecurity headlines, but some episodes feature an essay, interview, or a book review.  
Some standout episodes over the past year have included: An analysis of Verizon’s all-important annual data breach report  An interview with General Earl Matthews on election security A spoken essay about how the US should address its ransomware problem WIRED Security Launched: November 2020 Average episode length: 8 minutes Release cycle: Every weekday WIRED Security is part of WIRED’s “Spoken Edition” range of podcasts, and it’s a little different from the other podcasts on our list. Each episode features a reading of a recently-published WIRED article about cybersecurity. We love this podcast because it’s short and snappy (episodes generally range from 4 to 12 minutes long), released daily, and provides free access to WIRED’s incredible in-depth journalism.  Some great episodes from the past few months include:  A recap of 2020’s worst hacks (there were many to choose from) An analysis of the critical — and possibly permanent — security flaws among Internet of Things devices  A look at how Russia could be exploiting poor cybersecurity practices among remote workers RE: Human Layer Security Launched: December 2020 Average episode length: 22 minutes Release cycle: Weekly RE: Human Layer Security is an exciting new podcast hosted by Tessian CEO Tim Sadler. Sadler talks to business and technology leaders about their experiences running and securing some of the world’s leading organizations. The show flips the script on cybersecurity and addresses the human factor. Join world-class business and technology leaders as they discuss how and why companies must protect people – not just machines and data – to stop threats and empower employees. 
Guests have included:  Howard Schultz, former Starbucks CEO, on why culture trumps strategy when building and protecting a business Stephane Kasriel, former Upwork CEO, on how companies can embrace remote working Tim Fitzgerald, CISO at ARM, on why security should serve people’s interests and empower employees to take care of themselves New episodes launch every Wednesday. Don’t miss out!  Security Now Launched: August 2005 Average episode length: 2 hours Release cycle: Weekly Security Now is the oldest podcast on our list, but it has truly stood the test of time. Now entering its 16th year, the podcast still has a vast listener base — and continues to provide timely and insightful analysis of important cybersecurity topics. Every Monday, Security Now provides a detailed breakdown of all (and we mean all) the week’s security and privacy headlines. If you’re ever feeling out of the loop, spending a couple of hours listening to hosts Steve Gibson and Leo Laporte will bring you back up to date. Recent discussions on Security Now have included: How SolarWinds shareholders are launching a class-action lawsuit following the company’s disastrous hack Why WhatsApp users are flocking to Signal following a privacy policy update How swatters are using IoT devices to misdirect emergency services teams  The Many Hats Club Launched: November 2017 Average episode length: 45 minutes Release cycle: Sporadic The Many Hats Club is a coalition of people from across the information security community, including coders, engineers, and hackers — whether blackhat, whitehat, or greyhat. The Many Hats Club podcast is a great way to get to know the next generation of infosec professionals. Host CyberSecStu interviews a great range of guests about a broad range of topics, including hacking, privacy, and cybersecurity culture. 
Recent highlights include: A conversation about DDoS mitigation and mental health with security researcher Notdan A discussion about women in infosec with cybersecurity commentator Becky Pinkard  A strictly NSFW interview with the controversial McAfee founder John McAfee What’s your favorite cybersecurity podcast? Let us know by tagging us on social media! And, if it’s RE: Human Layer Security, make sure you follow it on Spotify or subscribe on Apple Podcasts so you never miss an episode. 
Spear Phishing
CISA Warns of New Attacks Targeting Remote Workers
Thursday, January 14th, 2021
tl;dr: The Cybersecurity and Infrastructure Security Agency (CISA) has warned of a string of successful phishing attacks exploiting weak cyber hygiene in remote work environments to access companies’ cloud services via employees’ corporate laptops and personal devices.*  According to the report, “the cyber actors designed emails that included a link to what appeared to be a secure message and also emails that looked like a legitimate file hosting service account login. After a targeted recipient provided their credentials, the threat actors then used the stolen credentials to gain Initial Access to the user’s cloud service account. … [The threat actors used] a variety of tactics and techniques—including phishing, brute force login attempts, and possibly a ‘pass-the-cookie’ attack—to attempt to exploit weaknesses in the victim organizations’ cloud security practices.” 
Once the hackers had access to an employee’s account, they were able to: Send other phishing emails to contacts in the employee’s network.  Modify existing forwarding rules so that emails that would normally be forwarded automatically to personal accounts were instead forwarded directly to the hacker’s inbox.  Create new mailbox rules to have emails containing specific keywords (e.g. finance-related terms) forwarded to the hacker’s account. This type of malicious activity targeting remote workers isn’t new. Henry Trevelyan Thomas, Tessian’s VP of Customer Success, has seen many instances this year. “The shift to remote work has resulted in people needing more flexibility, and personal accounts provide that—for example, access to home printers or working from a partner’s computer. Personal accounts are easier to compromise as they almost always have less security controls, are outside organizations’ secure environments, and your guard is down when logging on to your personal account. Attackers have realized this and are seeing it as a soft underbelly and entry point into a full corporate account takeover.” Learn more about Account Takeover (ATO), and take a look at some real-life examples of phishing attacks we spotted last year.  CISA recommends the following steps for organizations to strengthen their cloud security practices: Establish a baseline for normal network activity within your environment Implement MFA for all users, without exception Routinely review user-created email forwarding rules and alerts, or restrict forwarding Have a mitigation plan or procedures in place; understand when, how, and why to reset passwords and revoke session tokens Consider a policy that does not allow employees to use personal devices for work. At a minimum, use a trusted mobile device management solution. Consider restricting users from forwarding emails to accounts outside of your domain Focus on awareness and training. 
Make employees aware of the threats—such as phishing scams—and how they are delivered. Additionally, provide users with training on information security principles and techniques, as well as on overall emerging cybersecurity risks and vulnerabilities. Establish blame-free employee reporting and ensure that employees know who to contact when they see suspicious activity or when they believe they have been the victim of a cyberattack. This will ensure that the proper established mitigation strategy can be employed quickly and efficiently. For more practical advice on how to avoid falling for a phishing scam, download Tessian’s guide to Remote Work and Cybersecurity.
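CISA’s advice to routinely review user-created forwarding rules can be partly automated. Below is a minimal Python sketch of such an audit. The rule records here are hypothetical stand-ins; a real audit would pull each mailbox’s rules from your mail platform’s admin API (for example, Microsoft Graph or the Google Workspace Gmail API) rather than a hard-coded list.

```python
# Minimal sketch: flag mailbox forwarding rules that send mail outside
# the corporate domain. The rule dictionaries below are hypothetical;
# in practice you would fetch them from your mail provider's admin API.

CORPORATE_DOMAIN = "example.com"  # assumption: your organization's domain

def external_forwarding_rules(rules):
    """Return the rules whose forwarding target is outside the corporate domain."""
    flagged = []
    for rule in rules:
        target = rule.get("forward_to", "")
        # Extract the domain after the final "@"; empty if malformed.
        domain = target.rsplit("@", 1)[-1].lower() if "@" in target else ""
        if domain and domain != CORPORATE_DOMAIN:
            flagged.append(rule)
    return flagged

rules = [
    {"mailbox": "alice@example.com", "forward_to": "alice@example.com"},
    {"mailbox": "bob@example.com", "forward_to": "drop@attacker.io"},
]

for rule in external_forwarding_rules(rules):
    print(f"Review: {rule['mailbox']} forwards to {rule['forward_to']}")
```

Running such a check on a schedule turns “routinely review forwarding rules” from a manual chore into an alert you only see when something changes.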
*Note: the activity and information in this Analysis Report is not explicitly tied to any one threat actor or known to be specifically associated with the advanced persistent threat actor attributed with the compromise of SolarWinds Orion Platform software and other recent activity.
Spear Phishing
What is CEO Fraud? How to Identify CEO Email Attacks
Thursday, January 14th, 2021
Typically, the attacker will target an employee at the victim organization and trick them into transferring money. A CEO fraud email will usually make an urgent request for the employee to pay a supplier’s “invoice” using new account details. Cybercriminals use sophisticated techniques and meticulous research to make the attack as persuasive as possible.  Why do cybercriminals impersonate CEOs and other high-level executives? Two reasons: Power: CEOs have the authority to instruct staff to make payments. Status: Employees tend to do what CEOs ask. No one wants to upset the boss. CEO fraud vs. other types of cybercrime There’s some confusion about CEO fraud and how it relates to other types of cybercrime. Let’s clear a few things up before looking at CEO fraud in more detail. CEO fraud is related to the following types of cybercrime: Social engineering attack: Any cyberattack in which the attacker impersonates someone that their target is likely to trust. Phishing: A social engineering attack conducted via email (there are other forms of phishing, such as “smishing” and “vishing,” conducted via SMS and phone). Spear phishing: A phishing attack targeting a named individual. Business Email Compromise (BEC): A phishing attack conducted via a hacked or spoofed corporate email account. CEO fraud is not to be confused with “whaling”: a phishing attack where the cybercriminal targets — rather than impersonates — a CEO or other senior company employee. More on that in this article: Whaling: Examples and Prevention Strategies. How do CEO fraud attacks work? There are three main ways cybercriminals can compromise a CEO’s email account: Hacking: Forcing entry into the CEO’s business email account and using it to send emails. Spoofing: Sending an email from a forged email address and evading authentication techniques. Impersonation: Using an email address that looks similar to the CEO’s email address. 
A CEO fraud attack usually involves one of the following types of cybercrime: Wire transfer phishing: The attacker asks the target to pay an invoice. Gift certificate phishing: The attacker asks the target to buy them gift certificates. Malicious payload: The email contains a malware attachment. Like all social engineering attacks, CEO fraud attacks exploit people’s feelings of trust and urgency. When the CEO is “in a meeting” or “at a conference” and needs an urgent favor, employees don’t tend to second-guess them.  Here’s how a CEO fraud email might look. Now, for the sake of the example, imagine your boss is Thomas Edison. Yes, that Thomas Edison.
There are a few things to note about this CEO fraud email: Note the subject line, “Urgent request,” and the impending payment deadline. This sense of urgency is ubiquitous among CEO fraud emails. The fraudster uses Thomas’s casual email tone and his trademark lightbulb emoji. Fraudsters can do a great impersonation of a CEO by scraping public data (plenty is available on social media!) or by hacking their email and observing their written style. Cybercriminals do meticulous research. Thomas probably is in Florida. “Filament Co.” might be a genuine supplier, and an invoice might even actually be due tomorrow. There’s one more thing to note about the email above. Look at the display name — it’s “Thomas Edison.” But anyone can choose whatever email display name they want. Mobile email apps don’t show the full email address, leaving people vulnerable to crude “display name impersonation” attacks. That’s why it’s so important to examine the sender’s email address and make sure it matches the display name. Remember: on mobile, you’ll have to take an extra step to view the email address. But it’s worth it.  It’s important to note that the difference between the display name and the email address won’t always be easy to spot. Why? Because fraudsters can create lookalike email addresses via “domain impersonation.” Let us explain. An email domain is the part of the email address after the “@” sign. A cybercriminal impersonating Bill Gates, for example, might purchase a domain such as “micros0ft.com” or “microsoft.co.”  Likewise, using “freemail impersonation,” a less sophisticated attacker might simply set up an email account with any free email provider using the CEO’s name (think “[email protected]”). We explain domain impersonation in more detail – including plenty of examples – in this blog: Inside Email Impersonation: Why Domain Name Spoofs Could be Your Biggest Risk. How common is CEO fraud? It’s undeniable that cybercrime is on the increase. 
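Lookalike domains of the “micros0ft.com” variety can be caught mechanically by comparing a sender’s domain against a list of trusted domains. Here’s a minimal Python sketch using the standard library’s difflib similarity ratio. The trusted-domain list and the 0.85 threshold are illustrative assumptions; production tooling would also handle homoglyphs, punycode, and newly registered domains.

```python
# Minimal sketch: flag sender domains that closely resemble, but do not
# exactly match, a trusted domain (e.g. "micros0ft.com" vs "microsoft.com").
# Uses only the standard library; the trusted set and threshold are
# illustrative assumptions, not a production configuration.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"microsoft.com", "paypal.com", "fedex.com"}

def lookalike(domain, threshold=0.85):
    """Return the trusted domain this one appears to imitate, or None."""
    domain = domain.lower()
    if domain in TRUSTED_DOMAINS:
        return None  # exact match: legitimate sender domain
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return trusted
    return None

print(lookalike("micros0ft.com"))  # -> microsoft.com
print(lookalike("paypal.co"))      # -> paypal.com
print(lookalike("example.com"))    # -> None
```

The key design choice is that an exact match is allowed through while a near-match is flagged: it is precisely the almost-right domains that signal impersonation.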
FBI statistics show that total losses from cybercrime tripled between 2015 and 2019. Business Email Compromise (BEC) has also “increased, grown in sophistication, and become more targeted” due to the COVID-19 pandemic, according to Interpol. But what about CEO fraud itself? CEO fraud once dominated the cybercrime landscape. However, there is some evidence that cybercriminals are moving away from CEO fraud and towards a broader range of more sophisticated social engineering attacks. The FBI’s Internet Crime Complaint Center (IC3) estimates the global losses associated with BEC at over $26 billion in the period from 2016 to 2019 and cites a 100% increase in BEC between 2018 and 2019.  But this figure doesn’t distinguish CEO fraud from other types of BEC. The IC3’s 2019 cybercrime report suggests that while CEO fraud previously dominated BEC, cybercriminals now impersonate a broader range of actors, including vendors, lawyers, and payroll departments. These days, employees don’t only have to be wary of CEO fraud attacks. They also need to watch out for more advanced cybercrime techniques like Account Takeover (ATO), deepfakes, and ransomware. But CEO fraud is still a big deal. In December 2020, the Bank of Ireland warned of an increase in Brexit-related CEO fraud attacks. The bank’s staff were reportedly dealing with two to three CEO fraud attacks per week, with some attacks compromising millions of euros. Want to know how to protect yourself and your business from CEO fraud? Read our article: How to Prevent CEO Fraud Attacks.
Spear Phishing
How to Prevent CEO Fraud: 3 Effective Solutions
Thursday, January 14th, 2021
CEO fraud is a type of cybercrime in which the attacker impersonates a CEO or other company executive. The fraudster will most often use the CEO’s email account — or an email address that looks very similar to the CEO’s — to trick an employee into transferring them money. That means that, like other types of Business Email Compromise (BEC), CEO fraud attacks are very difficult for employees and legacy solutions like secure email gateways (SEGs) to spot. But there are still ways to prevent successful CEO fraud attacks. The key? Take a more holistic approach by combining training, policies, and technology. If you want to learn more about BEC before diving into CEO fraud, you can check out this article: Business Email Compromise: What it is and How it Happens. You can also get an introduction to CEO Fraud in this article: What is CEO Fraud? 1. Raise employee awareness Security is everyone’s responsibility. That means everyone – regardless of department or role – must understand what CEO fraud looks like. Using real-world examples to point out common red flags can help.
It’s important to point out the lack of spelling errors. Poor spelling and grammar can be a phishing indicator, but this is increasingly unlikely in today’s more sophisticated cybercrime environment. Also, notice the personal touches — Sam’s familiar tone, his references to Kat working from home, and his casual email sign-off. Fraudsters go to great lengths to research their subjects and their targets, whether via hacking or simply using publicly available information. These persuasive elements aside, can you spot the red flags? Let’s break them down: The sender’s email address: The domain name is “abdbank.com” (which looks strikingly similar to abcbank.com, especially on mobile). Domain impersonation is a common tactic for CEO fraudsters. The sense of urgency: The subject line, the ongoing meeting, the late invoice. Creating a sense of urgency is near-universal in social engineering attacks. Panicked people make poor decisions. The authoritative tone: “Please pay immediately”: there’s a reason cybercriminals impersonate CEOs — they’re powerful, and people tend to do what they say. Playing on the target’s trust: “I’m counting on you.” Everyone wants to be chosen to do the boss a favor. Westinghouse’s “new account details”: CEO fraud normally involves “wire transfer phishing” — this new account is controlled by the cybercriminals. Your cybersecurity staff training program should educate employees on how to recognize CEO fraud, and what to do if they detect it. Check the sender’s email address for discrepancies. This is a dead giveaway of email impersonation. But remember that corporate email addresses can also be hacked or spoofed. Feeling pressured? Take a moment. Is this really something the CEO is likely to request so urgently? New account details? Always verify the payment. Don’t pay an invoice unless you know the money’s going to the right place. Looking for a resource that you can share with your employees? 
We put together an infographic outlining how to spot a spear phishing email. While these are important lessons for your employees, there’s only so much you can achieve via staff training. Humans are often led by emotion, and they’re not good at spotting the small giveaways that might reveal a fraudulent email. Sometimes, even security experts can’t! More on this here: Pros and Cons of Phishing Awareness Training. 
2. Implement cybersecurity best practices Beyond staff training, every thriving company takes an all-round approach to cybersecurity that minimizes the risk of serious fallout from an attack. Here are some important security measures that will help protect your company’s assets and data from CEO fraud: Put a system in place so employees can verify large and non-routine wire transfers, ideally via phone Protect corporate email accounts and devices using multi-factor authentication (MFA) Ensure employees maintain strong passwords and change them regularly Buy domains that are similar to your company’s brand name to prevent domain impersonation Regularly patch all software Closely monitor financial accounts for irregularities such as missing deposits Deploy an email security solution All the above points are crucial cybersecurity controls. But let’s take a closer look at that final point — email security solutions. 3. Deploy intelligent inbound email security Because CEO fraud attacks overwhelmingly take place via email (along with 96% of all phishing attacks), installing email security software is one of the most effective steps you can take to prevent this type of cybercrime. But not just any email security solution. Legacy solutions, like SEGs and spam filters, and Microsoft and Google’s native tools generally can’t spot sophisticated attacks like CEO fraud. Why? Because they rely almost entirely on domain authentication and payload inspection. Social engineering attacks like CEO fraud easily evade these mechanisms. Tessian is different.   Tessian Defender uses machine learning (ML), anomaly detection, behavioral analysis, and natural language processing (NLP) to detect a variety of signals indicative of CEO fraud. Tessian’s machine learning algorithms analyze your company’s email data. The software learns every employee’s normal communication patterns and maps their trusted email relationships — both inside and outside your organization. 
Tessian inspects both the content and metadata of inbound emails for any signals suggestive of CEO fraud. For example: suspicious payloads, anomalous geophysical locations, out-of-the-ordinary IP addresses and email clients, keywords that suggest urgency, or unusual sending patterns.  Once it detects a threat, Tessian alerts employees that an email might be unsafe, explaining the threat in easy-to-understand language.
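To make the idea of signal-based detection concrete, here is a toy Python heuristic inspired by the kinds of signals listed above. To be clear, this is not Tessian’s method: the keyword list, weights, and email fields are invented for illustration, and real products use machine learning over historical email data rather than a fixed rule list.

```python
# Toy illustration of signal-based email scoring. The signals, keywords,
# and weights are invented for illustration only; production systems use
# machine learning over historical data, not a fixed rule list like this.

URGENCY_KEYWORDS = {"urgent", "immediately", "asap", "wire", "overdue"}

def risk_score(email):
    """Score an email dict on a few simple CEO-fraud-style signals."""
    score = 0
    body = email.get("body", "").lower()
    # Signal 1: urgency language in the body (+2 per keyword found).
    score += 2 * sum(1 for word in URGENCY_KEYWORDS if word in body)
    # Signal 2: the sender's domain has never been seen before (+3).
    if email.get("sender_domain") not in email.get("known_domains", set()):
        score += 3
    # Signal 3: display name doesn't match the address on file (+4).
    if email.get("display_name_mismatch"):
        score += 4
    return score

email = {
    "body": "Urgent: please wire the overdue invoice immediately.",
    "sender_domain": "abdbank.com",
    "known_domains": {"abcbank.com"},
    "display_name_mismatch": True,
}
print(risk_score(email))  # high score -> flag for review
```

Even this crude version shows why layering signals matters: any one signal on its own is noisy, but an email that trips several at once is far more likely to be fraudulent.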
Click here to learn more about how Tessian Defender protects your team from CEO fraud and other email-based cybersecurity attacks. You can also explore our customer stories to see how they’re using Tessian Defender to protect their people on email and prevent social engineering attacks like CEO Fraud.
Customer Stories, Spear Phishing
How Tessian Is Preventing Advanced Impersonation Attacks in Manufacturing
By Maddie Rosenthal
Tuesday, January 12th, 2021
Company: SPG Dry Cooling Industry: Manufacturing Seats: 368 Solutions: Defender About SPG Dry Cooling SPG Dry Cooling is an innovative, leading global manufacturer of air-cooled condensers that has been providing exceptional-quality equipment to coal, oil, and gas industrial plants for over a century. They employ a global workforce and have over 1,000 customer references. We talked to Thierry Clerens, Global IT Manager at SPG Dry Cooling, to learn more about the problems Tessian helps solve and why he chose Tessian Defender over other solutions.  Problem: The most advanced threats can slip past other controls  Phishing is a big problem across all industries.  But, because inbound email attacks are becoming more and more sophisticated and hackers continue using tactics like domain impersonation and email spoofing, Thierry knew he needed to implement a new solution that could stop the phishing emails that might slip past his O365 controls and trained employees. He cited one specific incident where a hacker impersonated a company in SPG Dry Cooling’s supply chain and attempted to initiate a wire transfer.  How? A tiny, difficult-to-spot change in the domain name.  “They created a fake domain with exactly the same name as the real user. But the top-level domain .tr was missing at the end. So it was just .com. No user – not even IT! – is looking at the domain name that closely. They tried to get us to deliver money to another account,” Thierry explained. While the attack wasn’t successful (SPG Dry Cooling has strong policies and procedures in place to confirm the legitimacy of requests like this), he wanted to level up his inbound email security and help users spot these advanced impersonation attacks. So, he invested in Tessian. Thierry explained why. 
Tessian Defender analyzes up to 12 months of historical email data to learn what “normal” looks like. It then uses natural language processing, behavioral analysis, and communication analysis to determine in real time whether a particular email is suspicious. To learn more, read the data sheet.  Problem: You can’t train employees to spot all phishing attacks Tessian also helps employees get better at spotting malicious emails with in-the-moment warnings (written in plain English) that reinforce training by explaining exactly why an email is being flagged. Here is an example:
This feature is especially important to Thierry, who values phishing awareness training but understands it has to be ongoing.  “We like to empower our users and we like that, with Tessian, our users learn and become better and better and better. That’s what we’re trying to do at SPG Dry Cooling. We’re trying to train and educate our users as much as possible. We’re trying to be innovative in the ways that we get our users, our company, our members, everybody, to better themselves,” he said. In evaluating solutions, he wanted something that would protect his people, while also empowering them to make smarter security decisions. He found that in Tessian, explaining that “the most interesting feature for me is the user education. You have to train your users. You have to help them get better at spotting threats by helping them understand the threats. Tessian does that.” Problem: It’s nearly impossible for IT teams to manually investigate all potential inbound threats Before Tessian, Thierry and his team had to manually investigate all emails that employees flagged as suspicious. With limited time and resources – and given the fact that “some are really good and are even hard for IT people to find” – it was nearly impossible for them to keep up. 
Thierry explained that Tessian extends the capabilities of his team. How?  It automatically detects and prevents threats Domains can be added to the denylist in a single click, before they even land in employees’ mailboxes Tessian dashboards make it easy for IT to see trends and create targeted security campaigns to help educate users.  Tessian was also easy to deploy. “As a part of our proof of concept, Tessian started ingesting historical data about employees’ IP addresses, what emails they normally send, who they normally communicate with. We saw how it was helping in just a few weeks. After that, we connected Tessian to Office 365. It took just 15 minutes,” he said.  Learn more about how Tessian prevents human error on email Powered by machine learning, Tessian’s Human Layer Security technology understands human behavior and relationships. Tessian Guardian automatically detects and prevents misdirected emails Tessian Enforcer automatically detects and prevents data exfiltration attempts Tessian Defender automatically detects and prevents spear phishing attacks Importantly, Tessian’s technology automatically updates its understanding of human behavior and evolving relationships through continuous analysis and learning of an organization’s email network. That means it gets smarter over time to keep you protected, wherever and however you work. Interested in learning more about how Tessian can help prevent email mistakes in your organization? You can read some of our customer stories here or book a demo.
Spear Phishing
What is a Malicious Payload and How is it Delivered?
Tuesday, January 12th, 2021
The term “payload” traditionally refers to the load carried by a vehicle — for example, the passengers in an aircraft or the cargo in a truck. But, in computing, “payload” refers to the content of a message.  When you send an email, you’re transmitting several pieces of data, including a header, some metadata, and the message itself. In this scenario, the message is the payload — it’s whatever content you want the recipient to receive. The term “malicious payload” comes into play when we talk about cybersecurity specifically.  In a cyberattack, a malicious payload is whatever the attacker wants to deliver to the target — it’s the content that causes harm to the victim of the attack. Oftentimes, it’s a URL that leads to a malicious website or an attachment that deploys malware. We talk more about malicious websites in this article: How to Identify a Malicious Website. How is a malicious payload delivered? Malicious payloads first need to find their way onto a target’s device. How? There are two main methods hackers use to do this: social engineering attacks and DNS hijacking. The most common way to deliver a malicious payload is via social engineering attacks like phishing, spear phishing, CEO Fraud, and other types of advanced impersonation attacks.  If you’re not sure what social engineering is – or if you want real-world examples of attacks – you can check out this article: 6 Real-World Examples of Social Engineering Attacks. Here’s how a typical phishing attack starts… Suppose your office has ordered some printer ink. You get an email from someone claiming to be “FedEx” that says: “click here to track your order.” Since you are – in fact – expecting a delivery, you click the link. The link appears to lead to FedEx’s order-tracking page, but the page causes a file to download onto your computer. This file is the malicious payload.  
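The FedEx scenario above turns on a mismatch between the domain a link displays and the domain it actually points to. As a rough sketch of how that mismatch can be checked mechanically (the HTML snippet and domains below are invented for illustration, and real phishing detection needs far more than this one signal):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def suspicious_links(html_body):
    """Return links whose visible text names a domain other than the
    one the href actually points at."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        real_domain = urlparse(href).netloc.lower()
        shown = text.lower().strip().rstrip("/")
        shown = shown.replace("https://", "").replace("http://", "").split("/")[0]
        # Only judge anchors whose visible text itself looks like a domain.
        if "." in shown and shown not in real_domain:
            flagged.append((href, text))
    return flagged


body = '<p>Your parcel: <a href="http://track-fedex.example.net/x">fedex.com</a></p>'
print(suspicious_links(body))  # the "fedex.com" link is flagged
```

A link whose text reads “fedex.com” but whose href resolves elsewhere is exactly the pattern in the example above; links with generic text like “click here” can’t be judged this way and pass through.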
While email is the most common delivery vector for malicious payloads, they can also arrive via vishing (phone or VoIP) and smishing (SMS) attacks. Another way to deliver a malicious payload is via DNS hijacking. Here, the attacker forces the target’s browser to redirect to a website where it will download the payload in the form of a malware file. Types of malicious payloads Malicious payloads can take a number of forms. The examples below are all types of “malware” (malicious software). Virus: A type of malware that can replicate itself and insert its code into other programs. Ransomware: Encrypts data on the target computer, rendering it unusable, and then demands a ransom to restore access. Spyware: A program that tracks user activity on a device — including which websites the user visits, which applications they use, and which keys they press (and, therefore, the user’s passwords). Trojan: Any file that appears to be innocent but performs malicious actions when executed. Adware: Hijacks the target computer and displays annoying pop-up ads, affecting performance. But a payload doesn’t need to come in the form of a file. “Fileless malware” uses your computer’s memory and existing system tools to carry out malicious actions — without the need for you to download any files. Fileless malware is notoriously hard to detect. Malicious payload vs. zero payload Not all phishing attacks rely on a malicious payload. Some attacks simply persuade the victim to action a request. Keep reading for examples.  Suppose someone claiming to be a regular supplier sends you an email. The email claims that there’s been a problem with your recent payment. With a malicious payload attack, the email might contain an attachment disguised as your latest invoice.  With a zero payload attack, the email may encourage you to simply initiate a wire transfer or manually update account details to divert the payment from the genuine supplier to the hacker.   
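To see why the zero payload variant described above is so hard to filter, consider what a naive scanner is left with once there is no attachment or link to inspect: little more than keyword heuristics. Here is a deliberately crude sketch (the phrase list is invented for the example; real detection needs behavioral and relationship context, not keyword matching):

```python
# Crude illustration only: a zero payload email carries no attachment
# or malicious link to scan, so a naive content filter falls back on
# keyword heuristics like these. The phrases are invented; genuine
# detection relies on behavioral context, not word lists.

PAYMENT_REDIRECT_PHRASES = [
    "updated bank details",
    "new account number",
    "wire transfer",
    "change of payment",
]


def crude_zero_payload_score(body, has_attachment):
    """Count payment-redirection phrases in a payload-free email.
    Emails with attachments are left to other scanners."""
    if has_attachment:
        return 0
    text = body.lower()
    return sum(phrase in text for phrase in PAYMENT_REDIRECT_PHRASES)


email = ("Hi, there was a problem with your last invoice. Please use our "
         "updated bank details and send the wire transfer again today.")
print(crude_zero_payload_score(email, has_attachment=False))  # 2
```

The weakness is obvious: a careful attacker simply avoids the listed phrases, which is why this class of attack evades traditional filters.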
Zero payload attacks can be just as devastating as malicious payload attacks, and traditional antivirus and anti-phishing software struggles to detect them. Case study: KONNI Malware, August 2020 Let’s look at a real-world example of a malicious payload attack. This example demonstrates how easy it can be to fall victim to a malicious payload. On August 14, 2020, the United States Cybersecurity and Infrastructure Security Agency (CISA) issued a warning that: “cyber actors using emails containing a Microsoft Word document with a malicious Visual Basic Application (VBA) macro code to deploy KONNI malware”  So, in this example, the malicious payload is a .doc file, delivered via a spear phishing email. The .doc file contains the “KONNI” malware. When the target opens the malicious payload, the KONNI malware is activated. It uses a “macro” (simple computer code used to automate tasks in Microsoft Office) to contact a server and download further files onto the target computer. The KONNI malware can perform different attacks, including: Logging the user’s keystrokes Taking screenshots Stealing credentials from web browsers Deleting files These actions would allow cybercriminals to steal crucial information — such as passwords and payment card details — and to cause critical damage to your device. How to stop malicious payloads You should take every reasonable step to ensure malicious payloads do not make their way onto your devices. Email security is a crucial means of achieving this. Why? Because email is the threat vector security and IT leaders are most concerned about. It’s also the most common medium for phishing attacks and a key entry-point for malicious payloads. If you want to learn more about preventing phishing, spear phishing, and other types of inbound attacks that carry malicious payloads, check out these resources: Must-Know Phishing Statistics: Updated 2021 How to Identify and Prevent Phishing Attacks What is Spear Phishing? 
How to Identify a Malicious Website What Does a Spear Phishing Email Look Like? And, if you want to stay up to date with cybersecurity news and trends, and get the latest insights (and invites to events!) before anyone else, subscribe to our newsletter. 
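Returning to the KONNI case study above: for modern macro-enabled Office files (.docm, .xlsm, .pptm), the VBA project is stored as vbaProject.bin inside the file’s zip container, so a coarse first-pass check is possible with Python’s standard library alone. This is a sketch, not a malware scanner, and legacy .doc files (OLE2 containers, possibly the format CISA described) need a dedicated parser such as oletools instead:

```python
import io
import zipfile


def ooxml_contains_macros(file_bytes):
    """Coarse check: does an OOXML Office file carry a VBA project?
    Works only on zip-based containers (.docm/.xlsm/.pptm and kin);
    legacy OLE2 .doc files need a dedicated parser."""
    try:
        with zipfile.ZipFile(io.BytesIO(file_bytes)) as zf:
            return any(name.endswith("vbaProject.bin") for name in zf.namelist())
    except zipfile.BadZipFile:
        return False  # not an OOXML container; inspect with other tools


# Build two tiny stand-in "documents" in memory to demonstrate.
def fake_doc(with_macro):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("word/document.xml", "<w:document/>")
        if with_macro:
            zf.writestr("word/vbaProject.bin", b"\x00stub")
    return buf.getvalue()


print(ooxml_contains_macros(fake_doc(True)))   # True
print(ooxml_contains_macros(fake_doc(False)))  # False
```

Mail gateways apply the same principle at scale: a document that carries a VBA project but arrives from an unexpected sender is a strong candidate for quarantine before the macro ever runs.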
Human Layer Security
21 Virtual Cybersecurity Events To Attend in 2021
Friday, January 8th, 2021
Our list of 21 cybersecurity events to attend in 2021 features premier cybersecurity summits, like the International Cybersecurity Forum in France and National Cyber Summit in the US, alongside intimate and industry-specific events (and webinars) you won’t want to miss. Many of these events are hosted online, but a lot of organizers are planning to host their conferences face-to-face. Watch out for last-minute changes as the COVID-19 situation continues to evolve. FloCon 2021 Date: January 12-14, 2021 Location: Online FloCon focuses on using big data to fight cybersecurity threats. The event demonstrates the latest research on how data analytics can be applied to any large dataset to improve networked system security. This event is perfect for operational analysts, tool developers, researchers, security professionals, and anyone interested in leveraging the power of big data to enhance cybersecurity. Cost to attend: Standard: $500. Government: $100. Academic: $125. Student: $50 10 Incredible Ways You Can Be Hacked Through Email, and How to Stop the Bad Guys Date: January 14, 2021 Location: Online Email remains the threat vector cybersecurity leaders are most concerned about. 2020 saw a huge spike in email-based phishing and other cyberattacks. 2021 should be the year you lock down your company’s email system against intruders. Join Roger Grimes and Kevin Mitnick of KnowBe4 at this SecureWorld webinar, as they talk participants through 10 ways cybercriminals can use email to trick users, launch malware, or hijack communications. Want to know more about protecting your business from email-based cyberattacks? Read our article on Email Security Best Practice. Cost to attend: Free. How to Hack a Human Date: January 26, 2021, 1:00pm EST Location: Online In this webinar, Tessian’s VP of Information Security, Trevor Luke, is joined by Katie Paxton-Fear, PhD student and ethical hacker, and Anne Benigsen, CISO at Bankers’ Bank of the West. 
They’ll discuss how our growing digital footprints make us more vulnerable than ever to social engineering attacks and BEC.  You’ll learn: What personal and work-related information many of us unwittingly share online How hackers use this information to socially engineer a personalized attack What you can do to reduce your company’s hackability Based in EMEA? You can register for this event instead, starting at 12:00pm GMT on January 27. Cost to attend: Free. RSA Conference 2021 Date: January 27, 2021 Location: Online The RSA Conference (RSAC) brings together expert speakers from across the global cybersecurity community, including Adam Hickey, Deputy Assistant Attorney General at the US Department of Justice, Target’s Product Security Director Jennifer Czaplewski, and Cybereason CTO Israel Barak. 2021 sees the RSAC operating 100% online for the second year running, with sessions on analytics, ransomware response, and machine learning security solutions. You can read our take on last year’s conference in this first-hand account, all about last year’s theme: The Human Element.  Cost: Early Bird: $79. Standard: $99. Showcase: Free. RegTech Live: an FStech Conference Date: March 3, 2021 Location: Online   RegTech Live is back for its third year, where – once again – industry leaders will be discussing the latest developments in tech, the biggest trends for 2021, and the emerging technologies those in the financial sector and insurance need to keep their eyes on.  While you can view a full list of speakers here, you can expect to hear from experts from BNY Mellon, UCL, NatWest, the Financial Conduct Authority, and Tessian. Spoiler Alert: We’ll be speaking about How to Hack a Human.  Cost: Free for those in the financial sector and insurance, £395 + VAT for technology providers  Human Layer Security Summit Date: March 3, 2021 Location: Online   On March 3, Tessian will be hosting its first Human Layer Security (HLS) Summit of 2021. 
Want to be the first to receive an invitation and hear about the agenda and speakers? Sign up to our newsletter! First-timer? Check out our HLS On-Demand page for a collection of last year’s best panel discussions, interviews, and presentations.  Cost to attend: Free. CyberCon London 2021 Date: March 9, 2021 Location: Kimpton Fitzroy, London  CyberCon London features high-profile speakers bringing CTOs, CISOs, and IT directors up-to-date knowledge and practical advice on dealing with cyberthreats. Agenda items include panel sessions on fraud, remote working, and the costs of cybercrime — plus lectures from world-renowned cybersecurity tsar Dr Jacqui Taylor and blockchain expert Aviya Arika. Cost to attend: Standard: £895 + tax. Super Early Bird: £595 + tax. Early Bird: £695 + tax. Fifth International Workshop on Security, Privacy and Trust in the Internet of Things (SPT-IoT) Date: March 22-26, 2021 Location: Online  The SPT-IoT workshop is part of the International Conference on Pervasive Computing and Communications (PerCom 2021), a conference organized by the IEEE Computer Society. The workshop brings together academics, researchers, and industry leaders to share ideas and advice on security within Internet of Things (IoT) devices.  IoT is a booming industry — but the security risks mean that manufacturers and developers are incurring an increasingly significant regulatory burden. Security leaders in the IoT sector should take every opportunity to learn about implementing better cybersecurity. Cost to attend: Free International Cybersecurity Forum (FIC) 2021 Date: April 6-8, 2021 Location: Grand Palais, Lille, France The International Cybersecurity Forum (Forum International de la Cybersécurité, or FIC) is one of the largest cybersecurity events in Europe, featuring over 450 speakers, 33 round tables, and 24 conferences, plus plenaries, demonstrations, and cybersecurity masterclasses. 
The 2021 program features sessions on information mapping, secure home working, and the emerging “cyberwar” between state powers. Speakers include privacy advocate Max Schrems, Jolicloud CEO Tariq Krim, and European Commission Vice President Margaritis Schinas. It’s hoped that FIC 2021 will go ahead as a face-to-face event, but remote participation is also available. Cost to attend: TBC Cybersecurity Digital Summit for Healthcare and Life Sciences 2021 Date: April 13-14, 2021 Location: Online 2020 saw some high-profile data breaches among healthcare companies, including the December cyberattack on the UK’s National Health Service and the devastating November attack on Blackbaud, which acted as a vendor to dozens of healthcare providers. Cybersecurity is absolutely crucial in this most tightly-regulated of industries, and healthcare professionals should learn as much as they can about emerging cyber threats. Cyber Security Hub’s Healthcare and Life Sciences Summit is a two-day event where industry leaders will advise healthcare professionals on how to keep patient data safe throughout 2021. Cost to attend: Free Third-Party & Supply Chain Cyber Security Summit Date: April 14-15, 2021 Location: Online Securing your own company’s end-points, devices, and networks is just part of the cybersecurity battle. You also need to ensure that your suppliers, vendors, and other third parties are secure and can take good care of your company’s data. In our article on What is Account Takeover (ATO)?, we look at the devastating attacks that can emerge from your supply chain. This two-day event from the Growth Innovation Agility (GIA) Global Group features speakers from Yandex, ENISA, GlaxoSmithKline, and Huawei. You’ll learn how much of your company’s data is really under its control, and how to manage risk when working with third parties. 
Cost to attend: Free 11th ACM Conference on Data and Application Security and Privacy (CODASPY) Date: April 26-28, 2021 Location: Online (Possible in-person enrolment available in the US, exact location TBC) This conference, organized by the Association for Computing Machinery (ACM) Special Interest Group on Security, Audit, and Control (SIGSAC), brings together academics and industry leaders to discuss security and privacy in software development. Applications, including mobile apps, are a key vulnerability of many systems. Read our article on zero-day vulnerabilities to learn about how hackers exploit software weaknesses. Software developers attending CODASPY will learn about cutting-edge research in the cybersecurity of software applications. Cost to attend: Free IAPP Global Privacy Summit Date: April 27-28, 2021 Location: Washington DC The International Association of Privacy Professionals (IAPP) is a globally-respected coalition of lawyers, developers, consultants, and other experts. The IAPP Global Privacy Summit features over 4000 attendees, at least 125 exhibitors, and more than 250 expert speakers. Privacy and cybersecurity are intertwined, and neglecting one is to the detriment of the other. Applying privacy-focused principles means collecting less personal information, deleting it when necessary, and — of course — storing it securely. The IAPP summit will feature sessions on data breach response, compliance with data protection laws such as the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA), and privacy engineering. Cost: TBC CyberUK 2021 Date: 11-12 May, 2021 Location: International Convention Centre Wales, Newport, Wales CyberUK is hosted by the UK’s National Cyber Security Centre (NCSC), a government unit that advises on cybersecurity. It’s one of the UK’s most important cybersecurity events and is a “must-attend” for industry leaders. 
The program for 2021 has not yet been set, but expect a full and varied range of talks, demos, and workshops. The last CyberUK agenda included sessions on identifying supply chain risks, building the cybersecurity profession, and using machine learning to boost defences. Cost to attend: Free for public sector employees. Private sector employees — Early bird: £849 + tax. Standard rate: £999 + tax RSA Conference San Francisco The theme for this year’s fully virtual RSAC? Resilience.  While a full list of speakers hasn’t been released, you can see what Linda Gray Martin, VP of RSA Conference, has to say about what you should expect in this video. “See” you there! Did you know 2021 marks the 30th anniversary of this event? Infosecurity Europe 2021 Date: June 8-10, 2021 Location: Olympia London, Hammersmith, London. Infosecurity Europe features an eclectic range of exhibitors and networking opportunities for cybersecurity leaders across all industries. Many key players in cybersecurity are exhibiting in 2021, including Avast, Bitdefender, and SolarWinds. Public bodies, including the UK Department for Digital, Culture, Media and Sport (DCMS) and National Cyber Security Centre (NCSC), are also represented. Cost to attend: TBC National Cyber Summit 2021 Date: June 8-10, 2021 Location: Huntsville, Alabama The National Cyber Summit focuses on education, collaboration, and innovation, bringing together experts from government, academia, and industry to deliver an innovative, diverse, and accessible event. Speakers will include Robert Powell, Senior Advisor for Cybersecurity at NASA, Katie Arrington, Chief of Information Security Acquisition at the US Department of Defense, and Merritt Baer, Principal Security Architect at Amazon Web Services. Cost to attend: Full Access: Standard — $570, Onsite — $610. Student, Teacher/Faculty: Standard — $175, Onsite — $200. Government: Free. 
Regulatory Compliance Conference Date: June 13-16 Location: Hyatt Regency, San Diego, California With nations worldwide passing ever-stricter privacy and security laws, your business should take every opportunity to learn how best to remain compliant. Join “the nation’s top risk-based thinkers” to discuss the most pressing issues in regulatory compliance. This conference from the American Bankers Association features over 50 sessions to help banking and fintech organizations comply with consumer protection and data security regulations. Want to know more about balancing your security and compliance obligations? Read Security vs. Compliance: What’s The Difference? Cost to attend: TBC British Legal Technology Forum 2021 Date: July 6, 2021 Location: Billinghurst, London The British Legal Technology Forum is Europe’s biggest legal technology conference and exhibition, featuring 2,500 square meters of exhibition space. BLTF 2021 is a crucial event for legal professionals, featuring talks from Prof. Richard Susskind, President of the Society for Computers & Law, and Bruna Pellicci, CTO at Linklaters.  Bonus: Tessian is the headline sponsor!  Want to learn more about how Tessian helps lock down email and prevent breaches for some of the world’s top law firms? Read our customer stories.  Cost to attend: Free International Conference on Cyber Security (ICCS) 2021 Date: July 19-22, 2021 Location: Fordham University, New York The International Conference on Cyber Security (ICCS), a collaboration between the FBI and Fordham University, is among the world’s premier cybersecurity events. Esteemed speakers from around the world will discuss how to address cyber threats in the private, government, academic, and law enforcement sectors. The 2021 agenda remains a work-in-progress, but previous ICCS events have featured presentations from the Director of National Intelligence (DNI), FBI, CIA, and NSA. Registration is limited to just 300 attendees. Cost to attend: $995. 
Cyber Security Tutorial (CST) and Law Enforcement Workshop (LEW): an extra $75 per session. Cybersecurity Digital Summit for EMEA 2021 Date: October 19-20, 2021 Location: Online  This Cybersecurity Digital Summit, hosted by Cyber Security Hub, is a two-day event focusing on the main threats affecting the Europe, Middle-East, and Africa (EMEA) region. The summit follows on from Cyber Security Hub’s events focusing on the Americas and Asia Pacific (APAC) regions. According to Cyber Security Hub’s publicity, the EMEA region “seems to set the course for the regulatory framework that APAC (Asia Pacific) and the Americas are adopting.” Whether you’re a cybersecurity professional working in the EMEA region — or you’re based elsewhere and hoping to understand the threats emerging from EMEA — this event is for you. Cost to attend: Free We’ll be updating this throughout 2021. For the latest updates – including industry insights, new research, and company news – subscribe to our newsletter.
Customer Stories, DLP
Why Caesars Entertainment Chose Tessian as Their Complete Outbound Email Security Solution
By Maddie Rosenthal
Thursday, January 7th, 2021
Company: Caesars Entertainment UK Industry: Entertainment Seats: 250 Solutions: Guardian and Enforcer  About Caesars Entertainment UK  In 2006, Caesars Entertainment – the world’s largest casino entertainment company, best known for properties such as Caesars Palace, Planet Hollywood, and Harrah’s – acquired London Clubs International. The current seven casinos in the UK form Caesars Entertainment UK. While the organization is passionate about delivering exceptional gaming entertainment and proud to offer customers unrivaled networks and benefits, they’re also active in the community, sponsoring and supporting a number of charities, including YGAM, GamCare, and The Gordon Moody Association. To help prevent both accidental data loss and malicious data exfiltration, Caesars has deployed Tessian Guardian and Enforcer as a complete outbound email security solution to protect 250 employees. Tessian solves three key problems for Caesars, which we explore in the Q&A interview below. Or, you can keep reading for a summary of the discussion.  1. An honest mistake on email almost caused a data breach Oftentimes, cybersecurity solutions are purchased retroactively, meaning after a breach has occurred. But, for Charles Rayer, Group IT Director at Caesars Entertainment UK, Tessian was a proactive investment, elicited by a near-miss. Here’s what happened: A customer relations advisor was sending emails to the casino’s VIPs. But, in one email, the employee accidentally attached the wrong document, which was a spreadsheet containing personal information related to some of their top 100 customers.   Luckily, they also spelled the email address incorrectly, so it was never actually sent. Nonetheless, it was a wake-up call for Charles and his team.
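The near-miss above hinged on a mistyped email address. As a rough illustration of how such typos can be caught automatically (this is not Tessian’s method, and the domain list below is invented for the example), a fuzzy match against domains an organization regularly emails goes a surprisingly long way:

```python
import difflib

# Domains this (hypothetical) organization regularly emails.
FAMILIAR_DOMAINS = ["gmail.com", "caesars.com", "example-partner.co.uk"]


def likely_typo(address, familiar=FAMILIAR_DOMAINS):
    """If the recipient's domain is unfamiliar but very close to a
    familiar one, return the probable intended domain; else None."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in familiar:
        return None  # exact match; nothing suspicious
    close = difflib.get_close_matches(domain, familiar, n=1, cutoff=0.85)
    return close[0] if close else None


print(likely_typo("vip@gmial.com"))  # probably meant gmail.com
print(likely_typo("vip@gmail.com"))  # None
```

A warning prompt built on a check like this would have surfaced the advisor’s mistyped address before the send button did its work, though in the real incident the typo happened to act as an accidental safety net.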
So, what would the consequences have been if the email had actually gone through? Charles explained, saying, “We’re covered by the GDPR and the Sarbanes-Oxley Act because we’re a public listing with US parent companies which means, had the email been sent, we would have had to report it which is a long process. And, even though we had security solutions in place, we would have most likely received a fine.  But for us, the biggest issue would have been the reputational damage. If that personal information did fall into the wrong hands, what would they do with it? Would they use it for their own personal benefit? Would they use it against us?”  With Tessian Human Layer Security Intelligence, Charles now has clear visibility of misdirected emails – what he previously considered an “iceberg threat” – and, because Tessian Guardian automatically prevents emails from being sent to the wrong person, Charles feels confident that a simple mistake won’t cost Caesars its reputation.  “It’s an issue of human error. We truly believe people are 100x more likely to accidentally mishandle data than to do it deliberately. So how do you solve it? There are thousands of solutions that categorize emails, look for strings of numbers, and identify keywords based on rules. But they don’t help in this situation. Tessian does. It knows – and continues learning – what conversations you normally have with people and can pick up when something’s off. That’s the feature that really stood out to us,” Charles said.  To learn more about how Tessian Guardian uses historical email analysis, real-time analysis, natural language processing, and employee relationship graphs to detect and prevent misdirected emails, download the data sheet.  2. Other solutions triggered 10x as many false positives as real events  While – prior to deploying Tessian – Charles didn’t have any technology in place to prevent misdirected emails, he did have a solution in place to prevent unauthorized emails. 
But, because it triggered so many false positives, he and his security team were drowning in alerts, making it impossible to investigate even a fraction of the alleged incidents in real time.  It was also disruptive for employees to interact with day-to-day. “I would say on average, we saw 10x as many false positives as real incidents of data exfiltration. Some days you’d have 100 incidents logged, and not one of them would be of merit. It was a deluge of junk, with the occasional useful bit of information,” he explained.  Charles pointed out that Tessian, on the other hand, flags just 5-6 unauthorized emails a day company-wide with a false positive rate that’s marginal now, and will only get smaller as it continues to learn from employee behavior and relationships. Yes, that means it gets smarter over time.  How? Enforcer analyzes historical email data to understand what “normal” content, context, and communication patterns look like. The technology uses this understanding alongside real-time analysis to accurately predict whether or not outbound emails are data exfiltration attempts.  That means Charles and his team can actually investigate each and every incident and, when employees do see a warning, they interact with it instead of ignoring it.
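To make the idea of learning “normal” communication patterns concrete, here is a drastically simplified sketch that flags a recipient a sender has rarely emailed before. It is illustrative only: the addresses are invented, and Tessian’s actual models are far richer than a frequency count.

```python
from collections import Counter

# Illustrative only: a minimal version of "learn normal recipients
# from history, flag outliers". Real systems weigh content, context,
# and relationship signals, not just send counts.


def build_profile(sent_log):
    """sent_log: iterable of (sender, recipient) pairs from history."""
    profile = {}
    for sender, recipient in sent_log:
        profile.setdefault(sender, Counter())[recipient] += 1
    return profile


def is_unusual_recipient(profile, sender, recipient, min_seen=2):
    """Flag a recipient this sender has emailed fewer than
    `min_seen` times before."""
    return profile.get(sender, Counter())[recipient] < min_seen


history = [
    ("alice@corp.example", "bob@supplier.example"),
    ("alice@corp.example", "bob@supplier.example"),
    ("alice@corp.example", "carol@corp.example"),
]
profile = build_profile(history)
print(is_unusual_recipient(profile, "alice@corp.example", "bob@supplier.example"))      # False
print(is_unusual_recipient(profile, "alice@corp.example", "mallory@free-mail.example")) # True
```

Notice the false-positive tension the sketch exposes: a hard threshold flags every genuinely new business contact too, which is why learning systems that refine their baseline over time produce far fewer junk alerts than static rules.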
Want to learn more about how Tessian Enforcer’s machine learning algorithms get smarter over time? You can get more information here.  3. Employees in the entertainment industry handle highly sensitive data – but not all of them As Charles pointed out, employees working in the entertainment industry – especially those who work in customer service – handle a lot of sensitive information. That means that mistakes – like sending a misdirected email or emailing a contract to a personal email address to print at home – can have big consequences. It also means employees may be motivated to exfiltrate data for a competitive advantage or financial gain.  Charles has seen all of the above.  “Not just our sector, but all sectors in the entertainment industry are based around customer service and personal contact. That means we have to know a lot about our customers. And that information is valuable. It’s information people want which means we have to make sure we protect it,” he explained.  But, not all employees have access to the same type of information. Customization, therefore, was important to Charles, who said, “We have a number of employees who don’t actually have access to sensitive information and a number of employees who don’t email anyone external. So there’s no point deploying across the entire company. We wanted to focus on people who deal with customers.  Likewise, not everyone who has been onboarded is in the same internal email group, which means we have to apply different controls and rules to different people. We can do all of this easily with Tessian.” While Tessian does offer 100% automated threat prevention, we know that for security strategies to be truly effective, technology and in-house policies have to work together. With Tessian Constructor, security leaders can create personalized rules and policies for individuals and groups.  
Learn more about how Tessian prevents human error on email Powered by machine learning, Tessian’s Human Layer Security technology understands human behavior and relationships. Tessian Guardian automatically detects and prevents misdirected emails Tessian Enforcer automatically detects and prevents data exfiltration attempts Tessian Defender automatically detects and prevents spear phishing attacks Importantly, Tessian’s technology automatically updates its understanding of human behavior and evolving relationships through continuous analysis and learning of an organization’s email network. That means it gets smarter over time to keep you protected, wherever and however you work. Interested in learning more about how Tessian can help prevent email mistakes in your organization? You can read some of our customer stories here or book a demo.
Human Layer Security, Spear Phishing
Must-Know Phishing Statistics: Updated 2021
By Maddie Rosenthal
Thursday, January 7th, 2021
Phishing attacks aren’t a new threat. In fact, these scams have been circulating since the mid-’90s. But, over time, they’ve become more and more sophisticated, have targeted larger numbers of people, and have caused more harm to both individuals and organizations. That means that this year – despite a growing number of vendors offering anti-phishing solutions – phishing is a bigger problem than ever. The problem is so big, in fact, that it’s hard to keep up with the latest facts and figures. That’s why we’ve put together this article. We’ve rounded up the latest phishing statistics, including: The frequency of phishing attacks The tactics employed by hackers The data that’s compromised by breaches The cost of a breach The most targeted industries The most impersonated brands  Facts and figures related to COVID-19 scams Looking for something more visual? Check out this infographic with key statistics.
If you’re familiar with phishing, spear phishing, and other forms of social engineering attacks, skip straight to the first category of 2020 phishing statistics. If not, we’ve pulled together some of our favorite resources that you can check out first to learn more about this hard-to-detect security threat.  How to Identify and Prevent Phishing Attacks What is Spear Phishing? Spear Phishing Demystified: The Terms You Need to Know Phishing vs. Spear Phishing: Differences and Defense Strategies How to Catch a Phish: A Closer Look at Email Impersonation CEO Fraud Email Attacks: How to Recognize & Block Emails that Impersonate Executives Business Email Compromise: What it is and How it Happens Whaling Attacks: Examples and Prevention Strategies  The frequency of phishing attacks According to Verizon’s 2020 Data Breach Investigations Report (DBIR), 22% of breaches in 2019 involved phishing. While this is down 6.6% from the previous year, it’s still the “threat action variety” most likely to cause a breach.  The frequency of attacks varies industry-by-industry (click here to jump to key statistics about the most phished industries). But 88% of organizations around the world experienced spear phishing attempts in 2019. Another 86% experienced business email compromise (BEC) attempts.  But, there’s a difference between an attempt and a successful attack. 65% of organizations in the United States experienced a successful phishing attack. This is 10% higher than the global average.  The tactics employed by hackers 96% of phishing attacks arrive by email. Another 3% are carried out through malicious websites and just 1% via phone. When it’s done over the telephone, we call it vishing and when it’s done via text message, we call it smishing. 
According to Symantec’s 2019 Internet Security Threat Report (ISTR), the top five subject lines for business email compromise (BEC) attacks were: “Urgent,” “Request,” “Important,” “Payment,” and “Attention.” Hackers are relying more and more heavily on the credentials they’ve stolen via phishing attacks to access sensitive systems and data. That’s one reason why breaches involving malware have decreased by over 40%.
According to SonicWall’s 2020 Cyber Threat Report, in 2019, PDFs and Microsoft Office files were the delivery vehicles of choice for today’s cybercriminals. Why? Because these files are universally trusted in the modern workplace.  When it comes to targeted attacks, 65% of active groups relied on spear phishing as the primary infection vector. This is followed by watering hole websites (23%), trojanized software updates (5%), web server exploits (2%), and data storage devices (1%).  The data that’s compromised by breaches The top five “types” of data that are compromised in a phishing attack are: Credentials (passwords, usernames, pin numbers) Personal data (name, address, email address) Internal data (sales projections, product roadmaps)  Medical (treatment information, insurance claims) Bank (account numbers, credit card information) While instances of financially-motivated social engineering incidents have more than doubled since 2015, this isn’t a driver for targeted attacks. Just 6% of targeted attacks are motivated by financial incentives, while 96% are motivated by intelligence gathering. A further 10% are simply trying to cause chaos and disruption (attacks can have more than one motive, which is why these figures overlap). While we’ve already discussed credential theft, malware, and financial motivations, the consequences and impact vary. According to one report: Nearly 60% of organizations lose data Nearly 50% of organizations have credentials or accounts compromised Nearly 50% of organizations are infected with ransomware Nearly 40% of organizations are infected with malware Nearly 35% of organizations experience financial losses
The cost of a breach

According to IBM’s Cost of a Data Breach Report, the average cost per compromised record has steadily increased over the last three years. In 2019, the cost was $150. For some context, 5.2 million records were stolen in Marriott’s most recent breach, which means the cost of that breach could amount to $780 million. The average breach, though, costs organizations $3.92 million; this number will generally be higher in larger organizations and lower in smaller ones.

Losses from business email compromise (BEC) have skyrocketed over the last year. The FBI’s Internet Crime Report shows that in 2019, BEC scammers made nearly $1.8 billion. That’s over half of the total losses reported by organizations, and this number is only increasing. According to the Anti-Phishing Working Group’s Phishing Activity Trends Report, the average wire-transfer loss from BEC attacks in the second quarter of 2020 was $80,183, up from $54,000 in the first quarter.

This cost can be broken down into several different categories, including:

Lost hours from employees
Remediation
Incident response
Damaged reputation
Lost intellectual property
Direct monetary losses
Compliance fines
Lost revenue
Legal fees

Costs associated with remediation generally account for the largest chunk of the total. Importantly, these costs can be mitigated by cybersecurity policies, procedures, technology, and training. Artificial intelligence platforms, for example, can save organizations $8.97 per record.

The most targeted industries

While the Manufacturing industry saw the most breaches from social attacks (followed by Healthcare and then Professional Services), employees working in Wholesale Trade are the most frequently targeted by phishing attacks, with 1 in every 22 users targeted by a phishing email last year.

According to a different data set, the most phished industries vary by company size. Nonetheless, it’s clear Manufacturing and Healthcare are among the highest-risk industries.
The industries most at risk in companies with 1-249 employees are:

Healthcare & Pharmaceuticals
Education
Manufacturing

The industries most at risk in companies with 250-999 employees are:

Construction
Healthcare & Pharmaceuticals
Business Services

The industries most at risk in companies with 1,000+ employees are:

Technology
Healthcare & Pharmaceuticals
Manufacturing

The most impersonated brands

Earlier this year, Check Point released its list of the most impersonated brands. These vary based on whether the attempt was via email or mobile, but the most impersonated brands overall for Q1 2020 were:

Apple
Netflix
Yahoo
WhatsApp
PayPal
Chase
Facebook
Microsoft
eBay
Amazon

The common factor between all of these consumer brands? They’re trusted and frequently communicate with their customers via email. Whether we’re asked to confirm credit card details, our home address, or our password, we often think nothing of it and willingly hand over this sensitive information. But after the outbreak of COVID-19 at the end of Q1, hackers changed their tactics and, by the end of Q2, Zoom was the most impersonated brand in email attacks. Read on for more COVID-related phishing statistics.
Facts and figures related to COVID-19 scams Because hackers tend to take advantage of key calendar moments (like Tax Day or the 2020 Census) and times of general uncertainty, individuals and organizations saw a spike in COVID-19 phishing attacks starting in March. But, according to one report, COVID-19 related scams reached their peak in the third and fourth weeks of April. And, it looks like hackers were laser-focused on money. Incidents involving payment and invoice fraud increased by 112% between Q1 2020 and Q2 2020. It makes sense, then, that finance employees were among the most frequently targeted employees. In fact, attacks on finance employees increased by 87% while attacks on the C-Suite decreased by 37%.
What can individuals and organizations do to prevent being targeted by phishing attacks? While you can’t stop hackers from sending phishing or spear phishing emails, you can make sure you (and your employees) are prepared if and when one is received.

You should start with training. Educate employees about the key characteristics of a phishing email and remind them to be scrupulous and inspect emails, attachments, and links before taking any further action.

Review the email addresses of senders and look out for impersonations of trusted brands or people (check out our blog CEO Fraud Email Attacks: How to Recognize & Block Emails that Impersonate Executives for more information)
Always inspect URLs in emails for legitimacy by hovering over them before clicking
Beware of URL redirects and pay attention to subtle differences in website content
Genuine brands and professionals generally won’t ask you to reply divulging sensitive personal information. If you’ve been prompted to, investigate and contact the brand or person directly, rather than hitting reply

We’ve created several resources to help employees identify phishing attacks. You can download a shareable PDF with examples of phishing emails and tips at the bottom of this blog: Coronavirus and Cybersecurity: How to Stay Safe From Phishing Attacks.

But humans shouldn’t be the last line of defense. That’s why organizations need to invest in technology and other solutions to prevent successful phishing attacks. Given the frequency of attacks year-on-year, though, it’s clear that spam filters, antivirus software, and other legacy security solutions aren’t enough.

That’s where Tessian comes in. By learning from historical email data, Tessian’s machine learning algorithms can understand specific user relationships and the context behind each email. This allows Tessian Defender to not only detect, but also prevent, a wide range of impersonations, spanning from more obvious, payload-based attacks to subtle, socially engineered ones.
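The advice above about reviewing sender addresses for brand impersonations can be partly automated. Here's a minimal sketch of one such heuristic, flagging lookalike domains by string similarity. To be clear, this is purely illustrative and is not how Tessian Defender works; the trusted-domain list and similarity threshold are assumptions for the example:

```python
import difflib
from typing import Optional

# Illustrative list of brands an organization expects mail from
# (an assumption for this sketch, not a recommended configuration).
TRUSTED_DOMAINS = {"paypal.com", "microsoft.com", "chase.com", "amazon.com"}


def flag_lookalike_domain(sender_domain: str, threshold: float = 0.85) -> Optional[str]:
    """Return the trusted domain this sender appears to imitate, or None.

    An exact match is treated as legitimate; a near-match (e.g. one swapped
    character, as in "paypa1.com") is flagged as a possible impersonation.
    """
    sender_domain = sender_domain.lower().strip()
    if sender_domain in TRUSTED_DOMAINS:
        return None  # exact match: not a lookalike
    for trusted in TRUSTED_DOMAINS:
        # ratio() is 1.0 for identical strings and falls toward 0.0 as they diverge
        similarity = difflib.SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return trusted  # very close, but not identical: suspicious
    return None


print(flag_lookalike_domain("paypa1.com"))  # flags "paypal.com"
print(flag_lookalike_domain("paypal.com"))  # None: exact match
```

Real detection systems combine many more signals than this, such as display names, reply-to mismatches, and authentication results from SPF, DKIM, and DMARC; this sketch only illustrates the single lookalike-domain heuristic.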
To learn more about how tools like Tessian Defender can prevent spear phishing attacks, speak to one of our experts and request a demo today.
Human Layer Security, Podcast
Episode 3: Security For The People, Not To The People, With Tim Fitzgerald
By Laura Brooks
Wednesday, January 6th, 2021
In this episode of the RE: Human Layer Security podcast, Tim Sadler is joined by Tim Fitzgerald, the Chief Information Security Officer at ARM and former Chief Security Officer at Symantec.

Tim believes that people are inherently good, and that thinking of employees as the weakest link in cybersecurity does them a disservice. Employees just want to do a good job. Sometimes mistakes happen, and those mistakes can compromise security. But rather than blaming people, Tim urges leaders to first ask themselves whether they’ve given their people the right tools, and armed them with the right information, to help them avoid those mistakes in the first place. In this interview, we talked about the importance of changing behaviours, how businesses can make security part of everybody’s job, and how to get boards on board.

And if you want to hear more Human Layer Security insights, all podcast episodes can be found here.

Tim Sadler: As the CISO of ARM, what are some of the biggest challenges that you face? And how does that affect the way you think about your security strategy?

Tim Fitzgerald: Our challenges are, not to be trite, sort of opportunities as well. By far the biggest single challenge we have is ARM’s ethos around information sharing. As I noted, we have a belief, one that I think has proven true over the 30+ years that ARM has been in business, that the level of information sharing has allowed ARM to be extraordinarily successful and innovative.

So there’s no backing away from that as an ethos of the company. But it represents a huge amount of challenge, because we give a tremendous amount of personal freedom for how people can access our information and our systems, as well as how they share our data, both internally with their peers and with our customers, who we’re very deeply embedded with.
We don’t sell a traditional product where they buy it, we deliver it to them, and then we’re done. The vast majority of our customers spend years with us developing their own product based on our intellectual property. And the level of information sharing that happens in a relationship like that is quite difficult to manage, to be candid.

TS: Yeah, it really sounds like you’ve been having to balance not just the effectiveness of your security strategy and your systems, but also their impact on the productivity of employees. So has Human Layer Security been part of your strategy for a long time at ARM, or even in your career before ARM?

TF: In my career before ARM, at Symantec. Symantec was a very different company, more of a traditional software sales company. It also had 25,000 people who thought they knew more about security than I did. So that presented a unique challenge in terms of how we worked with that community. But even at Symantec, I was thinking quite hard about how we influence behaviour.

And ultimately, what it comes down to, for me, is that I view my job in human security as somewhere between a sociology experiment and a marketing experiment. We’re really trying to change people’s behaviour in a moment, not universally and not in their personal ethos: will they make the right decision in this moment, to do something that won’t create security risk for us?

I label those micro-transactions: we get these small moments in time where we have an opportunity to interact with and influence behaviour. And I’ve been evolving that strategy as I’ve thought about it at ARM. It’s a very different place in many respects, but I’m trying to think about not just how we influence behaviour in that moment in time, but whether we can actually change people’s ethos. Can we make responsible security decision-making part of everybody’s job?
And I know that there’s not a single security person who will say they’re not trying to do that. But actually, that turns out to be a very, very hard problem.

The way that we think about this at ARM is that we have a centralized security team and, ultimately, security is my responsibility at ARM. But we very much rely on what we consider to be our extended security team, which is all of our employees. Essentially, our view is that they can undo all of the good that we do behind them. But I think one of the things that’s unique about how we look at this at ARM is that we very much take the view that people aren’t the weakest link. It’s not that they don’t come with good intent, or don’t want to be good at their job, or that they’re going to take shortcuts just to get that extra moment of productivity; actually, everybody wants to do a good job. And our job is to arm them with both the knowledge and the tools to keep themselves secure, rather than trying to secure around them.
And, just to finish that thought, we do both, right? I mean, we’re not going to stop doing all the other stuff we do to help protect our people in ways that they don’t even know exist. But the idea for us, here, is that we have rare opportunities to empower employees to take care of themselves.

One of the things we really like about Tessian is that this is something we’ve done for our employees, not to our employees. It’s a tool that is meant to keep them out of trouble.

TS: Yeah, I think that’s a really, really good point. A lot of what you’re talking about here is security culture, and really establishing a great security culture as a company. And I love that: for employees rather than to employees. It sounds like you have to have that at the core of the organization, and be thinking about the concept of human error in the right way when thinking about security decision-making, accepting that people are always going to make mistakes because, as you said, they are people. Maybe walk us through a bit more how you think about this, or what advice you might have for some of the other organizations on the line today about how they might talk to their boards or their other teams about rationalising this risk internally and working with the fact that our employees are only human.

TF: Yeah, for me, this has been the most productive dialogue we’ve had with our board and our executive around security. I think most of you on the phone will recognise that when you go in and start talking about the various technical layers we have available to protect our systems, the eyes glaze over pretty quickly. They really just want to know whether or not it works.

The human security problem is one that you can get a lot of passion on, in part, I think, because it’s an unrecognized risk in the boardroom.
The traditional insider threat that we think about, a person who’s really acting against our best interest, can be very, very impactful. But at least at ARM, and certainly in my prior career, the vast majority of issues that have caused us harm over the last several years have been caused by people who do not wish us harm.
They’ve been people just trying to do their job, making mistakes, doing the wrong thing, making a bad decision at a moment in time. And trying to figure out how we help them not to do that is a much more difficult problem than figuring out how to put in a firewall or DLP. So we really try to separate that conversation. There are a lot of things we do to try to catch the person who is truly acting against our best interest, but that, in many ways, is a totally different problem. At ARM, what accounts for more than 70% of our incidents, and certainly more than 90% of our loss scenarios, is people just doing the wrong thing and making the wrong decision, not people actively seeking to cause ARM harm.

If I might give a couple of examples, because it helps bring it home. One of the two most impactful events we’ve had in the last two years at ARM involved our royalties data. You know, we sell software, right? So every time somebody produces a chip, we get paid. That’s a good thing for ARM. But somebody’s royalty forecast gives you a really good sense of what markets they intend to enter and where they intend to go as a company.

And most of our customers compete with each other, because they’re all selling similar chips designed into various formats. So one customer having somebody else’s data would be hugely impactful. And in fact, that’s exactly what happened not that long ago. Somebody pulled down some pertinent information for a customer into a spreadsheet, and then fat-fingered an email and sent it to the wrong customer. They sent it to Joan at Customer X instead of Joan at Customer Y. And that turned out to be a hugely impactful event for us as a company, because this was a major relationship and we essentially disclosed a strategic roadmap from one customer to another. A completely avoidable scenario.
And it’s a situation where that employee was trying to do their best for their customer, and ultimately made a mistake.

TS: Thanks for sharing that example with us. It’s a really, really good point. For a long time in security, we talked about insider threats, and people immediately think about malicious employees and malicious insiders. And I think it’s absolutely true what you say: the reality is that most of your employees are trustworthy and want to do the right thing, but they sometimes make mistakes. And when you’re doing something as often as, say, sending an email or sharing data, the errors can be disastrous, and they can be frequent as well…

TF: …it’s the frequency that really gets us, right? So the insider threat, the really bad guy who’s acting against our best interest: we have a whole bunch of other mechanisms that, while still hard, help us try to find them. That’s infrequent, high impact. What we’re finding is that the person who makes a mistake is high frequency, medium to high impact. And so we’re just getting hammered on that kind of stuff. The reason we came to Tessian in the first place was to address that exact issue, and I really believe in where you guys are going in terms of addressing the risk associated with people making bad choices, versus acting against our interest.

TS: This concept of high frequency, I think, is super interesting, and one of the questions I was actually going to ask you was around that. Hackers and cyber attacks get all the attention because these are the scary things, and naturally it’s what boards and executives want to talk about. Accidents almost seem less scary, so they get less focus. But there’s this frequency point of how often we share data and send emails. And it has analogies in other parts of our lives as well: we don’t think twice before we get in a car.
But actually, it’s very easy for human error to creep in there, and the consequences can be really bad. Do you think we need to do more to educate our boards, our executive teams, and our employees, and open their eyes to the fact that inadvertent human error or accidents can be just as damaging as attackers or cyber attacks?

TF: Yeah, it depends on the organization, but I would suggest that generally, we do need to do more. As an industry, we’ve had a lot of amazing things to talk about to get our boards’ attention over the last 10 years. These major events and loss scenarios, often perpetrated by big hacking groups, sometimes nation-sponsored. It’s very sexy to talk about that kind of stuff and use it as justification for the reason we need to invest in security.

And actually, there’s a lot of legitimacy behind that. It’s not fake messaging; it’s just one part of the narrative. The other side of the narrative, which we now spend more time on than we do on nation-state-type threats, is that, not only by frequency but by impact, the vast majority of what we’re dealing with right now is avoidable events based on human error, and perhaps predictable human error.

I very much chafe at the idea that we think of our employees as the weakest link. I think it underserves people’s intent and how they choose to operate. So rather than that, we try to take a look in the mirror and ask: what are we not providing these people in order to help them avoid these types of scenarios?

And I think if you change your perspective on that, rather than seeing people as an intractable problem we can’t conquer, and start thinking about how we mobilise them as part of our overall cybersecurity strategy and defense mechanisms, it causes you to rethink whether or not you’re serving your populace correctly.
And I think in general, not only should we be talking to our senior executives and boards more, and more clearly, about where real risk exists, which for most companies is right in this zone. We also need to be doing more to help people combat that risk, rather than casting blame or assuming the average employee is not trustworthy or will do the wrong thing.

You know, I’m an optimist, so I genuinely believe that’s not true. I think if we give people the opportunity to make a good decision, and we make the easiest path to get their job done the secure path, they will take it. That is our job as security professionals.
TS: Yeah, there’s a huge point there, and the word that jumped out at me is this concept of empowerment. It’s strange, when you look at a lot of the security initiatives companies deploy, how rarely we factor in the impact they’ll have on an employee’s productivity.

And I guess at Tessian, we’re great believers that the greatest technology we’ve created has really empowered society; it’s made people’s lives better. And we think that security technology should not only keep people safe, but do it in a way that empowers them to do their best work. When you were thinking about how to solve this problem of inadvertent human error on email, people sending emails to the wrong people, or dealing with the issue of phishing and spear phishing, what consideration did you give to other solutions that were out there? What did Tessian address for you that you couldn’t quite address with those other platforms?

TF: Yeah, a couple of things. Coming from Symantec, as you might expect, I used all of their technology extensively, and one of the best products Symantec offers is their DLP solution. So I’m very, very familiar with that, and I would argue we had one of the more advanced installations in the world running internally at Symantec. So I’m extremely familiar with the capability of those technologies. What I learned in my time doing that is that, when used correctly in a finite environment with a finite data set, that type of solution can be very, very effective in keeping data where it’s supposed to be and understanding movement in that ecosystem. When you try to deploy it broadly, it has all the same problems as everything else: you start to run into the inability of the DLP system to understand where that data is supposed to be. Is this person supposed to have it, based on their role and their function? It’s not a smart technology like that.
So you end up trying to write these very, very complex rules that are hard to manage. What I liked about Tessian is that it gave us an opportunity to use machine learning in the background to develop context about whether or not something somebody was doing was atypical, or perhaps typical but actually part of a bad process. From the very nature of the type of information they’re sending around and the characteristics of that information, we can get a sense of whether or not what they’re doing is causing us risk. So it doesn’t require recipes that are completely prescriptive about what we’re looking for. It allows us to learn, with the technology and with the people, what normal patterns of behaviour look like, and therefore intervene when it matters, rather than having to react every time another bell goes off.

To be clear, we still use DLP in very limited circumstances. But what we found is that it was not really a viable option for us, particularly in the email stream, for accurately identifying when people were doing things that were risky, versus moving a very specific data set that we didn’t want them to.

TS: Yeah, that makes a tonne of sense. And if you’re thinking about the future, and what you hope Tessian can become, where does it go from here? What’s the opportunity for Tessian as a Human Layer Security platform?

TF: Yeah, I recall talking to you guys last spring, and one of the things I was poking at was: you have all this amazing context about what people are doing in email, and that’s where people spend most of their time. It’s where most of the risk comes from for most organizations. So how can we turn that into something beyond just making sure someone doesn’t fat-finger an email address, or send a sensitive file where it’s not supposed to go?
Or, you know, the other use cases that come along with Tessian. Can we take the context we’re gaining from how people use email, and create more of those moments in time to connect with them, to become more predictive? Where we start to see patterns of behaviour that suggest individuals are either susceptible to certain types of risk, or likely to take a particular action in the future, there’s a tremendous amount of knowledge that can be derived from that context, particularly if you start thinking about how you can put it together with what would traditionally be the behavioural analytics space. Can we start to mesh what we know about the technology and the machines with real human behaviour, and therefore build a picture that would help us not only find the actual bad guys in our environment that we know are there, but also get out in front of people’s behaviour, rather than reacting to it after it happens? For me, that’s the holy grail of what this could become: if not predictive, at least leading us towards where we think risk exists, and allowing us an opportunity to intervene before things happen.

TS: That’s great, Tim. Thanks so much for sharing that with us.

TS: It was great to understand how Tim has built his security strategy so that it aligns with, and also enhances, the overall ethos of the company: more information sharing equals a more innovative and more successful business. I particularly liked Tim’s point that businesses should make the path of least resistance the most secure one. By doing that, you can enable people to make smart security decisions and build a more robust security culture within an organization.

As Tim says, it’s security for the people, not to the people. And that’s going to be so important as ways of working change.
If you enjoyed our show, please rate and review it on Apple, Spotify, Google or wherever you get your podcasts. And remember, you can access all the RE: Human Layer Security podcast episodes here.