Compliance, Data Exfiltration, Spear Phishing
September Cybersecurity News Roundup
25 September 2020
We’re back with another monthly roundup of cybersecurity news. Cybercriminals have once again been busy, with several high-profile data breaches and ransomware attacks occurring throughout September. And – rather unsurprisingly – social media platforms Twitter and TikTok have made the cut for the third month running. Here are the top cybersecurity stories from September 2020, including links to further information. Need to catch up? Check out headlines from July and top stories from August on our blog.
Researchers Predict That CEOs Will Be Personally Liable for Cyber-Physical Attacks
Research and advisory firm Gartner (which recently named Tessian a Cool Vendor) predicted this month that 75% of CEOs could be held personally liable for “cyber-physical” attacks by 2024. Cyber-physical attacks aim to impact the “real world,” including critical infrastructure, internet of things devices, and healthcare equipment. Such attacks can result in physical injury and death. Gartner also predicts that cyber-physical attacks will cause up to $50 billion of damage by 2023. So what if Gartner is right? It would mean that if a company suffers a cyberattack resulting in physical harm — and it turns out that the company has not implemented appropriate cybersecurity measures — the company’s CEO could have to pay fines with their own money.
Gartner’s research tells us what every business leader already knows — an effective cybersecurity program is an essential requirement for every organization. If a cyberattack occurs, the buck stops with the company’s senior executives.
Argentinian Government Faces $4 Million Ransom Following Cyberattack
On September 6, Argentina temporarily stopped allowing people to cross its borders after the Netwalker ransomware hit the country. The attackers encrypted government migration data and demanded 355 Bitcoins (around $4 million) to decrypt it. This cyberattack led to chaos across border checkpoints — but the Argentinian government told domestic news website Infobae that it had no intention of negotiating with the hackers. Ransomware continues to cause havoc worldwide, and it appears the problem is only getting worse. Research by SonicWall recorded approximately 121 million ransomware attacks in the first half of 2020.
Personal Information of 46,000 US Military Veterans Breached
The US Department of Veterans Affairs (VA) announced this month that the personal information of around 46,000 military veterans had been “accessed by unauthorized users.” The cybercriminals aimed to “divert payments” intended for healthcare providers. The VA’s financial services team wrote to the affected individuals to advise on how to mitigate the effects of the breach and offer free access to credit monitoring services. The VA serves veterans all over the US. Strict new data breach laws in several jurisdictions — including New York, Washington DC, and Oregon — mean that the VA could face huge fines given the breach’s context. Want to know more about US data security laws? Read our guidance for security leaders.
Thousands of COVID-19 Patients’ Data Leaked Due to “Human Error”
A massive data breach occurred in Wales this month when the personal information of 18,105 coronavirus patients was leaked following an “individual human error.” The breach affected every Welsh resident who tested positive for COVID-19 between February 27 and August 30. Public Health Wales said that the data included the “initials, date of birth, geographical area, and sex” of the affected individuals. For nearly 11% of those people, though, the data also included the name of the nursing home or other healthcare setting in which the individual lived. The data was uploaded onto a public server, where it was accessible and searchable for around 20 hours. It was viewed 56 times throughout this period. Human error is a key cause of data breaches. Statistics show that around 88% of data breaches start with human error, and almost half of all employees believe they have made an error at work leading to security repercussions.
Chinese Company Holds Data About 2.4 Million Influential People
An academic at Fulbright University, Vietnam, has uncovered a vast Chinese database containing personal information of around 2.4 million people and their families. It looks like these individuals are “people of interest” to the Chinese Communist Party (CCP). The company responsible for maintaining this huge database “provides big data analytics as well as other functionality to support Chinese military and intelligence analysts,” according to a research paper. The research also suggests that the CCP uses the data for “intelligence, military, security, and state operations in information warfare and influence targeting.” The database is believed to provide a way for the CCP to influence people in target sectors.
It may be one of many such databases maintained by Chinese companies. Much of the information in the database has been gleaned from publicly-available sources. The Chinese database is yet another important reason you should consider limiting the amount of personal information you put online. You can learn more about how hackers are using open-source recon for deepfakes and other social engineering attacks from Elvis M. Chan, Supervisory Special Agent at the FBI, and Nina Schick, Author of “Deep Fakes and the Infocalypse: What You Urgently Need to Know”, who both joined us at Tessian Human Layer Security Summit. You can access their session, “Safeguarding the 2020 Elections, Disarming Deepfakes,” via HLS On-Demand.
Twitter Provides Enhanced Security For US Election
Following its spear phishing incident this July, Twitter has announced enhanced account security for certain “high-profile accounts” throughout the US election. Twitter said that various types of accounts, including those belonging to US politicians, campaign officials, and political journalists, would receive the security enhancements from September 17. So what’s changing? First, affected users must create “strong passwords” of at least ten characters in length. They will need to confirm password reset requests via email. The affected users will also be “strongly encouraged” to enable two-factor authentication (2FA). But that’s not all. Recall that the July spear phishing incident involved “internal support tools” — it wasn’t primarily an issue with users’ account passwords. To address this, Twitter also says that it will improve internal monitoring of the affected accounts, including by using “more sophisticated detections and alerts,” “increased login defenses,” and “expedited account recovery” processes. Want to know how to avoid the issues Twitter faced this July? Read our guidance on “vishing” attacks.
Energy Companies Advised to Create Cyberattack Response Plans
The US Federal Energy Regulatory Commission (FERC) and the North American Electric Reliability Corporation (NERC) have released a report advising energy providers on creating an Incident Response and Recovery (IRR) plan for cyberattacks. The report is based around an existing cybersecurity framework: the National Institute of Standards and Technology (NIST) Special Publication 800-61, also known as the Computer Security Incident Handling Guide. Governments appear to be increasingly concerned about the cybersecurity of critical infrastructure. This concern is well-founded — in 2019, 90% of security professionals surveyed across the utilities, energy, health, and transport sectors reported that their organizations had faced at least one successful cyberattack. Much of the advice to energy providers is good practice across all sectors. FERC and NERC recommend a four-part framework, consisting of security controls relating to preparation, detection and analysis, containment and eradication, and post-incident activity.
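For teams who want a starting point, here is a minimal sketch of how those four NIST SP 800-61 phases could be laid out as a skeleton incident response runbook. The activities listed under each phase are illustrative examples we have chosen, not FERC/NERC requirements.

```python
# Minimal sketch: a skeleton incident response plan organized around the four
# NIST SP 800-61 phases referenced above. Activities are illustrative only.
IRR_PLAN = {
    "preparation": [
        "Maintain an up-to-date asset inventory",
        "Define on-call roles and escalation paths",
    ],
    "detection_and_analysis": [
        "Centralize logs and alerting",
        "Triage and classify suspected incidents",
    ],
    "containment_and_eradication": [
        "Isolate affected hosts or network segments",
        "Remove malware and close the initial access vector",
    ],
    "post_incident_activity": [
        "Hold a lessons-learned review",
        "Update controls and the plan itself",
    ],
}

def print_runbook(plan):
    """Print the plan as a simple phase-by-phase checklist."""
    for phase, actions in plan.items():
        print(phase.replace("_", " ").title())
        for action in actions:
            print(f"  - {action}")

print_runbook(IRR_PLAN)
```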
UK Agency Warns Schools and Universities About Ransomware Attacks
As students worldwide return to schools, colleges, and universities, education providers are most concerned with defending against a COVID-19 outbreak. But the UK’s National Cyber Security Centre (NCSC) gave a stark warning about a different type of threat: ransomware. The NCSC’s alert describes “recent trends observed in ransomware attacks” targeting the education sector, which the agency says are increasingly common. The guidance follows a series of ransomware attacks against universities in the UK, US, and Canada this July. The agency warns that cybercriminals are exploiting out-of-date software and are accessing remote desktop protocol (RDP) software using credentials stolen via phishing attacks. It also warns that phishing emails are being used to deploy ransomware. So how does the NCSC recommend education providers protect themselves? The same ways all cyber-secure organizations protect themselves — including “disrupting ransomware attack vectors” by implementing phishing defenses, and “enabling effective recovery” by keeping backups of data. Implementing DMARC is also essential to prevent brand impersonation and successful spear phishing attacks. And, according to Tessian research, 40% of the top 20 US universities aren’t using DMARC records. (A quick way to check any domain’s DMARC record is sketched at the end of this roundup.)
TikTok Ban Delayed Following ByteDance Sale
On September 21, US President Trump said he had approved the sale of part of ByteDance, the parent company of video-sharing platform TikTok, to Oracle and Walmart. The deal temporarily averts harsh restrictions on TikTok set out by the US Department of Commerce three days earlier. The sale results from an executive order issued by President Trump in August, stating that the TikTok app “captures vast swaths of information from its users, including… location data and browsing and search histories.” TikTok maintains that this activity is standard industry practice. The US companies could take a collective 20% stake in ByteDance, with Oracle hosting TikTok user data in Oracle Cloud. Some analyses suggest that security-conscious nations and businesses are increasingly likely to implement these sorts of “data localization” measures. Trump had previously assured the public that TikTok would be “totally controlled” by the US firms. At a press conference, however, the president said that the companies would be using “separate clouds and very, very powerful security.” That’s all for this month. If we missed anything, please email [email protected] and stay tuned for the next roundup. Don’t forget: You can easily share this on social media via the buttons at the top right of this post.
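A practical footnote on the DMARC statistic above: you can see whether any domain publishes a DMARC policy with a single DNS lookup. Below is a minimal sketch, assuming the third-party dnspython package (2.x) is installed; the domain name is just a placeholder.

```python
# Minimal sketch: check whether a domain publishes a DMARC record.
# Assumes the third-party "dnspython" 2.x package (pip install dnspython).
import dns.resolver

def get_dmarc_record(domain):
    """Return the DMARC TXT record for a domain, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

if __name__ == "__main__":
    record = get_dmarc_record("example.com")  # placeholder domain
    if record:
        print("DMARC record found:", record)
    else:
        print("No DMARC record published — spoofed mail is harder to reject.")
```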
Human Layer Security, Spear Phishing
Tim Sadler on Hacking Humans Podcast: Ep 117 “It’s Human Nature”
24 September 2020
Tessian’s CEO and co-founder Tim Sadler joined Dave Bittner from the CyberWire and Joe Carrigan from the Johns Hopkins University Information Security Institute to talk about why people make mistakes and the importance of developing a strong security culture. You can listen to the episode here, or read the full transcript below. And, for more insights, read our report, The Psychology of Human Error.
Dave Bittner: Joe, I recently had the pleasure of speaking with Tim Sadler. He’s been on our show before. He’s from an organization called Tessian, and they recently published a report called “The Psychology of Human Error.” Here’s my conversation with Tim Sadler. Tim Sadler: We commissioned this report because we believe that it’s human nature to make mistakes. People control more sensitive data than ever before in the enterprise. So there’s customer data, financial information, employee information. And what this means is that even the smallest mistakes – like accidentally sending an email to the wrong person, clicking on a link in a phishing email – can cause significant damage to a company’s reputation and also cause major security issues for them. So we felt that businesses first need to understand why people make mistakes so that, in the future, they can prevent them from happening before these errors turn into things like data breaches. Dave Bittner: Well, let’s go through some of the findings together. I mean, it’s interesting to me that, you know, right out of the gate, the first thing that you emphasize here is that people do make mistakes. Tim Sadler: Absolutely, they do make mistakes, and I think that is human nature. We think about our daily lives and the things that we do; we factor in human error, and we factor in that we will make mistakes. And something I always come back to is if we think about something we do, you know, many of us do on a daily basis, which is, you know, driving a car, and we think about all of the assistive technology that we have in that car to protect us in the event that we do make a mistake because, of course, mistakes are expected. It’s kind of in our human nature. Dave Bittner: Well, let’s dig into some of the details here because there are some fascinating things that you all have presented. One of the things you dig into is the age factor. Now, this was interesting to me because I think we probably have some biases about who we think would be more likely to make mistakes, but you all uncovered some interesting numbers here. Tim Sadler: Yeah, completely. And, you know, just sharing some of those statistics that we found from this report, 65% of 18- to 30-year-olds admit to sending a misdirected email compared to 34% who are over the age of 51. And we also found that younger workers were five times more likely to admit to errors that compromised their company’s cybersecurity than older generations, with 60% of 18- to 30-year-olds saying they’ve made such mistakes versus 10% of workers who are over 51. Dave Bittner: Now, what do you suppose is the disparity there? Do you have any insights as to what’s causing the spread? Tim Sadler: It is just speculation, but I think there’s something interesting in just maybe thinking about the comfort level that younger workers might have with actually admitting mistakes or sharing that with others in the enterprise. You know, I think there’s something encouraging here, which is actually we’re seeing that if you were running a security team, you want your employees to come forward and tell you something has gone wrong, whether that’s a mistake that’s led to a bad thing or it’s a near miss. And I think that you also might find that, generally, younger people may tend to be less senior in the organization and, you know, may not have the same sense of stigma that maybe the older generations, who are more senior, may think there is.
So if I tell my boss that, you know, I’ve just done something and there was a potentially bad outcome, they might feel like they may be in danger of compromising their position in the organization. Dave Bittner: Yeah, it’s a really interesting insight. I mean, that whole notion of the benefits of having a company culture that encourages the reporting of these sorts of things.
Tim Sadler: I think it’s so important. You know, I think – somebody, you know, correctly advised me, you almost need an everything’s-OK alarm in your business when you’re thinking about security. You know, if you have a risk register or if you are responsible for taking care of these incident reports, if you don’t see people reporting anything, it’s usually a more concerning sign than you have people coming forward who are openly admitting to the errors they’ve made that could lead to these security issues. It’s highly unlikely that you’ve got nothing on your risk register. That you’ve completely eliminated risk from your business. It’s more likely that actually you haven’t created the right culture that feels like it’s suitable or acceptable to actually come forward and admit mistakes. Tim Sadler: And I think this is really, really important. I think now more than ever, during this time where, you know, we have a global pandemic, a lot of people are working from home, and they’re kind of juggling the demands of their jobs with their personal lives – maybe they’re having to figure out childcare – there are lots of other things weighing in to an employee’s life right now. It’s really important to actually, I think, extend empathy and create an environment where your employees do feel comfortable actually sharing things, mistakes they’ve made or things that could pose security incidents. I think that’s how you make a stronger company, through that security culture. Dave Bittner: But let’s move on and talk about phishing, which your report digs into here. And then this was surprising to me as well. You found that 1 in 4 employees say that they’ve clicked on phishing emails. But interesting to me, there was a gap between men and women and, again, older folks and younger folks.  Tim Sadler: Yes, so we found in the report that men are twice as likely as women to click on links in a phishing email, which again I think is – I think we were as surprised as you are that that was something that came from the research that we conducted. Dave Bittner: And a much lower percentage of folks over 51 say that they’d clicked on phishing links. Tim Sadler: Yes. And, again, you know, because of the research, of course, we’re relying on people’s honesty about these kinds of things. Dave Bittner: Right. Tim Sadler: But it does seem that there are clear kind of demographic splits in terms of things like age and also gender in terms of, actually, the security outcomes that took place. Dave Bittner: I mean, that in particular seems counterintuitive to me, but when I read your report, I suppose it makes sense that, you know, people who have more life experience, they may be more wary than some of the folks who are just out of the gate. Tim Sadler: I think that does play into things. I think that younger generations who are coming into the workplace, who are maybe even used to – you know, they’ve had an email account maybe for most of their lives. In fact, I would say that they’re probably less used to using email because they’ve advanced to other communication platforms before they enter the workplace. But I do think that, you know, if you think about people who have had email accounts, you know, at school or at college, they’re going to be used to being faced with potential scams, potential phishing. They’ve maybe already been through many kind of forms of education training awareness, those kinds of things, before they’ve actually entered the world of work. 
Dave Bittner: Yeah, another thing that caught my eye here was that you found that tech companies were most fallible. And it seemed to be that the pace at which those companies run had something to do with it. Tim Sadler: Yeah, I think there’s something interesting here. And, again, just would say that this is speculation because we don’t have the specific data to dig further into this. But I think there’s something interesting with the concept that technology companies, as you say, if they’re, you know, high-growth startups, they tend to be maybe moving faster, where these kinds of things can slip off the radar in terms of the security focus or the security awareness culture they create. Tim Sadler: But the other thing – and I think something to be aware of – is sometimes technology companies have that kind of false sense of security that it’s all in check, right? ‘Cause they – you know, this is kind of their domain. They feel that it’s within their comfort zone, and then maybe they neglect, actually, how serious something like this could be, where they feel that, OK, we’ve actually – even if we’ve got an email system in place, in the instance of phishing – we’ve got an email system in place. We feel like it has the appropriate security controls. But then we miss out the elements of actually making sure that the person is aware or is trained, is provided with the assistive technology around them and then also feels that they’re part of a security culture where they can report these things. So I think that’s also an important factor, too. Dave Bittner: So one of the interesting results that came through your research here is the impact that stress and fatigue have on workers’ ability to kind of detect these things. Tim Sadler: Yeah, and this is a really, really important point. So 47% of employees cited distraction as the top reason for falling for a phishing scam. And 41% said that they sent an email to the wrong person because they were distracted. The interesting thing, I think, there is that – another stat that came out from this – 57% of people admitted that they were more distracted when working from home, which is, of course, a huge part of the population now. So this point about distraction seems to play a really important factor in actually the fallibility of people with regard to phishing. Tim Sadler: And then a further 93% of employees said that they were either tired or stressed at some point during the week. And 1 in 10 actually said that they feel tired every day. And then the sort of partner stat to that, which is important, is that 52% of employees said that they make more mistakes when they’re stressed. And of course, tiredness and being stressed play hand-in-hand. So these are really, really important things for companies to take note of, which is, you have to also think about the well-being of your employees with regard to how that impacts your security posture and your ability to actually prevent these kinds of human errors and mistakes from taking place. Dave Bittner: Right. Giving the employees the time they need to recharge and making sure that they’re properly tasked with things where they can meet those requirements that you have for them – I mean, that’s an investment in security as well. Tim Sadler: Completely. And I think what’s really difficult is that security is serious business. No one would doubt or question its importance. It is literally mission critical for companies to get right. 
Some companies take a draconian approach when it comes to security, and they penalize or they’re very heavy-handed with employees when they get things wrong. I think, again, it is really important to consider the security culture of an organization. And actually, creating a safe space for people to share their vulnerability from a security perspective – things that they may have done wrong – and actually then having a security team or security culture that helps that person with the error or the issue that may arise versus just creating an environment where if you do the wrong thing, then, you know, your job, your role might be in jeopardy. Tim Sadler: And again, it is a balance because you need to make sure that people are never being careless, and there is a responsibility that we all have in terms of the security posture of our organization. But what this report shows is that those elements are really important. You know, we don’t want to contribute to the distraction. We don’t want to contribute to the stress and tiredness of our employees. And even outside the security domain, if you do have an environment that doesn’t create a balance for your employees, you are at a higher risk of suffering from a security breach because of the likelihood of human error with your employees. Dave Bittner: All right, Joe, what do you think?
Joe Carrigan: I really liked that interview. Tim makes some really great points. The first thing he says is at Tessian, they believe that people are prone to mistakes, right? Of course we are, right? But why, in the real world, do we act like we’re not? That is what stuck out to me immediately – the fact that Tim even needs to say this or that somebody needs to say this, that people are prone to mistakes. We act as if we’re not prone to mistakes. And then the driving analogy is a great analogy, right? If everybody did everything right in a car, nobody would ever have an accident. But as we all know, that is not the case. Dave Bittner: Accidents happen (laughter). Yeah. I think in public health, too – you know, I often use the example of, you can do everything right. You can wash your hands. You can, you know, be careful when you sneeze and clean surfaces and all that stuff. But still, no matter what, every now and then, you’re still going to get a cold. Joe Carrigan: Younger people are more likely to say that they’ve made mistakes than older people, and I agree with Tim’s speculation on the disparity of responses across age groups. Younger people have less to lose than an older person who might be more senior in the organization. I also think that an older person might be more experienced with what happens when you admit your mistakes. Joe Carrigan: And that comes to my next point, which is culture. And that is probably the single most important thing in a company. And this is my opinion, of course – but this is so much more important when we get to security. It needs to be open and honest, and people need to absolutely not fear coming forward about their mistakes in security. This is something that I’ve dealt with throughout my career, even before I was doing security, with people making mistakes. If somebody tries to cover up a mistake, that makes the cleanup effort a lot more difficult. And it’s totally natural to try to do that. You’re like, oh, I made the mistake. I better correct it. If you don’t have the technical expertise to correct it, you’re actually making more work for the people who have to actually correct it. Dave Bittner: Yeah. I also – I think there’s that impulse to sort of try to ignore it and hope it goes away. Joe Carrigan: Right (laughter). That happens, too. I find this interesting. Men are twice as likely to click on a link as women. Older users are less likely to click on a link. I think that comes from nothing but experience. You and I are older. We’ve had email addresses for years and years and years. I’ve been on the Internet longer than a lot of people have been alive. I know how this works. And younger people may not have that level of experience. Plus, I think younger people are just more trusting of other people. And as we get older, we, of course, become more jaded. Joe Carrigan: Tech companies have a false sense of security because this is their domain. That’s one of the things Tim said. I think that’s right. You know, that’s not going to happen to us; we’re a tech company. Things are still going to happen to you because, like Tim says very early in the interview, people make mistakes. Dave Bittner: All right. Well, again, our thanks to Tim Sadler from Tessian for joining us this week. We appreciate him taking the time. Again, the report is titled “The Psychology of Human Error.” And that is our show. Of course, we want to thank all of you for listening.
Dave Bittner: We want to thank the Johns Hopkins University Information Security Institute for their participation. You can learn more at isi.jhu.edu. The “Hacking Humans” podcast is proudly produced in Maryland at the startup studios of DataTribe, where they’re co-building the next generation of cybersecurity teams and technologies. Our coordinating producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I’m Dave Bittner. Joe Carrigan: And I’m Joe Carrigan. Dave Bittner: Thanks for listening.
Spear Phishing
6 Real-World Examples of Social Engineering Attacks
22 September 2020
Over the last several months, “social engineering” has been making headlines more and more frequently. But, before we dive into real-world examples of social engineering attacks, let’s define exactly what social engineering is. Social engineering attacks are a type of cybercrime wherein the attacker fools the target through impersonation. They might pretend to be your boss, your supplier, someone from your IT team, or your delivery company. Regardless of who they’re impersonating, their motivation is always the same — extracting money or data. So, what’s the biggest threat vector for social engineering attacks? Email. Why do hackers do it? According to Verizon’s 2020 data breach report, money. In fact, the rates of financially-motivated social engineering attacks doubled between 2018 and 2019 and continued to increase after the outbreak of COVID-19. In this article, we’ll look at six real-world examples of social engineering attacks — some big and some recent — all using different techniques. We’ll also tell you how to avoid falling victim to these sorts of attacks.
1. $100 Million Google and Facebook Spear Phishing Scam
The biggest social engineering attack of all time (as far as we know) was perpetrated by Lithuanian national Evaldas Rimasauskas against two of the world’s biggest companies: Google and Facebook. Rimasauskas and his team set up a fake company, pretending to be a computer manufacturer that worked with Google and Facebook. Rimasauskas also set up bank accounts in the company’s name. The scammers then sent phishing emails to specific Google and Facebook employees, invoicing them for goods and services that the manufacturer had genuinely provided — but directing them to deposit money into their fraudulent accounts. Between 2013 and 2015, Rimasauskas and his associates cheated the two tech giants out of over $100 million.
How to Prevent Spear Phishing
The Rimasauskas case is a classic example of a spear phishing scam. The attacker hacks or impersonates a trusted person and then “spears” specific individuals. Spear phishing is more convincing than regular, “spray and pray” phishing because it’s highly targeted. An attacker might also be impersonating someone with whom the target communicates regularly. They may have a near-identical email address, with a very subtle change in the domain name — a single character swapped, added, or removed, for example. You can read more about email impersonation on our blog. Unfortunately, humans — even those working at the world’s most powerful tech firms — sometimes don’t spot small changes. It could be because they’re distracted or over-worked, or it could simply be because the email was a convincing fake. Whatever the reason, it’s important people aren’t left as the last line of defense. The best thing you can do to prevent spear phishing scams, then, is to implement technology that protects against advanced impersonation attacks like spear phishing. Tessian Defender’s stateful machine learning technology understands each employee’s inbox inside-out and can detect anomalies in email addresses, body copy, and more. That’s how it distinguishes between safe emails and suspicious ones, alerting the target when a phishing attack occurs. Looking for more resources? These might help.
What is Spear Phishing? Defending Against Targeted Email Attacks
What Does a Spear Phishing Email Look Like?
Phishing vs. Spear Phishing: Differences and Defense Strategies
2. Deepfake Attack on UK Energy Company
In March 2019, the CEO of a UK energy provider received a phone call from someone who sounded exactly like his boss. The call was so convincing that the CEO ended up transferring $243,000 to a “Hungarian supplier” — a bank account that actually belonged to a scammer. This “cyber-assisted” attack might sound like something from a sci-fi movie, but, according to Nina Schick, Author of “Deep Fakes and the Infocalypse: What You Urgently Need to Know”, “This is not an emerging threat. This threat is here. Now.” To learn more about how hackers use AI to mimic speech patterns, listen to Nina’s discussion about deepfakes with Elvis Chan, Supervisory Special Agent at the FBI, at Tessian Human Layer Security Summit.
How to Prevent Deepfake Attacks
Deepfakes are an emerging threat that could soon become a widespread problem. 74% of IT leaders think deepfakes threaten their organizations’ and their employees’ security. But there are some steps you can take to protect your business from this new type of fraud. Make a habit of verifying telephone requests via another medium, e.g., email or SMS. This is a type of 2-Factor Authentication (2FA) — a security step that you should implement across all channels. If a caller insists that the request is urgent, try to verify their identity in another way — such as by asking them some specific detail about the office or an event you both attended. Work closely with your IT department to log all suspicious activity and security incidents. For more information about deepfakes, read this article: Deepfakes: What are They and Why are They a Threat?
3. $60 Million CEO Fraud Lands CEO In Court
Austrian aerospace parts manufacturer FACC lost nearly $60 million in a so-called “CEO fraud scam” where scammers impersonated high-level executives and tricked employees into transferring funds. After the incident, FACC spent more money trying to sue its CEO and finance chief, alleging that they had failed to implement adequate internal security controls. While the case failed, it’s an important reminder: cybersecurity is business-critical and everyone’s responsibility. In fact, Gartner predicts that by 2024, CEOs could be personally liable for breaches.
How to Prevent CEO Fraud
It’s easy to see why CEO fraud is a successful type of social engineering attack. Imagine working late at the office one day. You get an email from the CEO herself, asking you to make some last-minute amendments to an invoice. The tone is urgent, the email looks genuine, and you have a chance to impress the top boss — why wouldn’t you go ahead and do it? CEO fraud is a common form of Business Email Compromise (BEC). Using impersonation techniques, scammers can send emails using your CEO’s display name, or email addresses that are nearly indistinguishable. Alternatively, hackers can hijack your CEO’s email account. Tessian’s machine learning technology knows what your CEO’s emails should look like and can alert employees to tiny differences in email addresses and even subtle deviations from their “normal” tone. Learn more about how Tessian prevents CEO Fraud at some of the world’s leading businesses. Read customer stories here.
4. $75 Million Belgian Bank Whaling Attack
Perhaps the most successful social engineering attack of all time was conducted against Belgian bank Crelan. While Crelan discovered its CEO had been “whaled” after conducting a routine internal audit, the perpetrators got away with $75 million and have never been brought to justice.
Crelan fell victim to “whaling” — a type of spear-phishing where the scammers target high-level executives. Cybercriminals frequently try to harpoon these big targets because they have easy access to funds. You can read more about whaling here: Whaling Email Attacks: Examples & Prevention Strategies.
How to Prevent Whaling
In defending against whaling attacks, the same principles apply as when defending against spear phishing and CEO Fraud. In addition to making sure employees – including senior executives – are trained on how to spot impersonation attacks, you need to implement email security solutions to detect and prevent successful inbound attacks. To learn more about how Tessian bolsters training, reinforces policies and procedures, and stops threats – all without disrupting employees’ workflow – book a demo.
5. High-Profile Twitter Users’ Accounts Compromised After Vishing Scam
In July 2020, Twitter lost control of 130 Twitter accounts, including those of some of the world’s most famous people — Barack Obama, Joe Biden, and Kanye West. The hackers downloaded some users’ Twitter data, accessed DMs, and made Tweets requesting donations to a Bitcoin wallet. Within minutes — before Twitter could remove the tweets — the perpetrator had earned around $110,000 in Bitcoin across more than 320 transactions. Twitter has described the incident as a “phone spear phishing” attack (also known as a “vishing” attack). The calls’ details remain unclear, but somehow Twitter employees were tricked into revealing account credentials that allowed access to the compromised accounts. Following the hack, the FBI launched an investigation into Twitter’s security procedures. The scandal saw Twitter’s share price plummet by 7% in pre-market trading the following day.
How to Prevent Vishing
Vishing attacks typically utilize “Voice over Internet Protocol” (VoIP) technology in order to fake their caller ID. Attackers can also use “war diallers” to contact many people in a short period. The attack may start with a recorded message directing the target to call back. The key to protecting your business from vishing attacks is staff training. Ensure your employees understand what a vishing attack might sound like (the caller has an urgent tone or offers unexpected benefits), and make it clear that they should never respond to such a call. You can read more about vishing on our blog.
6. Texas Attorney General Warns of Delivery Company Smishing Scam
Nearly everyone gets the occasional text message that looks like it could be a potential scam. But in September 2020, one smishing (SMS phishing) attack became so widespread that the Texas Attorney General put out a press release warning residents about it. Victims of this scam received a fraudulent text message purporting to be from a delivery company such as DHL, UPS, or FedEx. The SMS invited the target to click a link and “claim ownership” of an undelivered package. After following the link, the target was asked to provide personal information and credit card details. The Texas Attorney General warned all Texans not to follow the link. He stated that delivery companies do not communicate with customers in this way, and urged anyone receiving the text message to report it to the Office of the Attorney General or the Federal Trade Commission.
How to Prevent Smishing
While 96% of phishing occurs via email, smishing is an increasingly serious threat to individuals and businesses.
Consumer Reports claims that the Federal Trade Commission (FTC) received 93,331 complaints about fraudulent text messages in 2018 — a 30% increase from 2017. Smishing scams follow the same patterns as other social engineering attacks. Smishing text messages are typically urgent in tone, claiming that the target is in danger of a fine or has been the victim of credit card fraud. Or they may claim that the target has won a prize, or is owed a tax refund. So, how do you avoid falling victim to a scam? In the workplace, security teams should ensure employees exercise the same caution when responding to text messages as they do with emails. Top tip: Never respond to any suspicious message, click links within SMS messages, or reveal personal or company information via SMS.
Prevent social engineering attacks in your organization
While we’ve included three tips to help you detect social engineering attacks in this blog post — What is Social Engineering? 4 Types of Attacks — it’s important to remember that these scams, whether delivered by email, text, or voicemail, are really, really hard to spot. That’s why technology is essential and where Tessian comes in. Powered by machine learning, Tessian Defender analyzes and learns from an organization’s current and historical email data and protects employees against inbound email security threats, including whaling, CEO Fraud, BEC, spear phishing, and other targeted social engineering attacks. Best of all, it does all of this silently in the background in real-time, and in-the-moment warnings help bolster training and reinforce policies. That means employee productivity isn’t affected and security reflexes improve over time. To learn more about how Tessian can protect your people and data against social engineering attacks on email, book a demo today.
Spear Phishing
What Does a Spear Phishing Email Look Like?
By Maddie Rosenthal
17 September 2020
88% of organizations around the world experienced spear phishing attempts in 2019.  And, while security leaders are working hard to train their employees to spot these advanced impersonation attacks, every email looks different. A hacker could be impersonating your CEO or a client. They could be asking for a wire transfer or a spreadsheet. And malware can be distributed via a link or an attachment. But it’s not all bad news. While – yes – each email is different, there are four commonalities in virtually all spear phishing emails. 
Download the infographic now and help your employees spot spear phishing attacks. Before we go into more detail about these four red flags, let’s get into the mind of a hacker.
What do hackers consider when creating a spear phishing attack?
Hackers prey on their target’s psychological vulnerabilities. For example, immediately after the outbreak of COVID-19, we saw a spike in spear phishing attacks impersonating health organizations, insurance companies, government agencies, and remote-access tools. Why? Because people were stressed, anxious, and distracted and therefore more likely to trust emails containing “helpful” information and take the bait. We explore this in detail in our report, The Psychology of Human Error. While people cite distraction as the top reason for falling for phishing attacks, the perceived legitimacy of the email was a close second. Looking at real-world examples can help. Below are five articles that outline recent scams, including images of the emails.
COVID-19: Real-Life Examples of Opportunistic Phishing Emails
Everything You Need to Know About Tax Day Scams 2020
Spotting the Stimulus Check Scams
How to Spot and Avoid 2020 Census Scams
Look Out For Back to School Scams
Now that you know broadly what to look for and what makes you more vulnerable, let’s take a deeper dive into the four things you should carefully inspect before replying to an email.
4 Things to Inspect Before Replying to An Email
The Display Name and Email Address
The first thing you should do is look at the Display Name and the email address. Do they match? Do you recognize the person and/or organization? Have you corresponded with them before? It’s important to note that some impersonations are easier to spot than others. In the example below, the Display Name is vastly different from the email address.
But, hackers can make slight changes to the domain that can be indiscernible unless the target is really looking for it. To make it easier to understand, we’ll use FedEx as an example. In the chart below you’ll see five different types of impersonations. For more information about domain impersonations, read this article:  Inside Email Impersonation: Why Domain Name Spoofs Could Be Your Biggest Risk
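To make that advice concrete, here is a minimal sketch of the kind of check an automated filter (or a careful reader) could apply: compare the sender’s domain against domains you trust and flag near-misses. The trusted domain list and edit-distance threshold are illustrative assumptions, not a description of how any particular product works.

```python
# Minimal sketch: flag sender domains that are near-misses of trusted domains.
# The trusted list and threshold are illustrative assumptions.
from email.utils import parseaddr

TRUSTED_DOMAINS = {"fedex.com", "your-company.com"}  # hypothetical examples

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def classify_sender(from_header):
    """Return 'trusted', 'lookalike', or 'unknown' for an email From: header."""
    _display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    if any(edit_distance(domain, t) <= 2 for t in TRUSTED_DOMAINS):
        return "lookalike"  # e.g. a domain one or two characters away from fedex.com
    return "unknown"

print(classify_sender("FedEx Support <billing@fedx.com>"))  # -> lookalike
```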
The bottom line: Take the time to look closely at the sender’s information.
The Subject Line
As we’ve mentioned, hackers exploit the psychological vulnerabilities of their targets. It makes sense, then, that they’ll try to create a sense of urgency in the subject line. Here is a list of the Top 5 subject lines used in spear phishing attacks: “Urgent,” “Follow up,” “Important,” “Are you available?” and “Payment Status.” And, when it comes to Business Email Compromise attacks, the Top 5 subject lines are: “Urgent,” “Request,” “Important,” “Payment,” and “Attention.” While – yes – these subject lines can certainly appear in legitimate emails, you should exercise caution when responding. Better safe than sorry!
Attachments and Links
Hackers will often direct their targets to follow a link or download an attachment. Links will direct users to a malicious website, and attachments, once downloaded, will install malware on the user’s computer. These are called payloads. How can you spot one? While links may often look inconspicuous (especially when they’re hyperlinked to text), if you hover over them, you’ll be able to see the full URL. Look out for strange characters, unfamiliar domains, and redirects.
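The “hover over the link” habit can also be automated. Below is a minimal sketch, using only the Python standard library, that pulls every link out of an HTML email body and flags cases where the visible link text names one domain but the underlying href points somewhere else — a common trick in phishing payloads. The sample HTML and the flagging heuristic are illustrative assumptions.

```python
# Minimal sketch: find links whose visible text and real destination disagree.
# The sample HTML body and the crude heuristic are for illustration only.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible_text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._current_href = None
        self._text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._text_parts = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._text_parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append((self._current_href, "".join(self._text_parts).strip()))
            self._current_href = None

def suspicious_links(html_body):
    """Flag links whose visible text looks like a URL but doesn't match the href."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        real_domain = urlparse(href).netloc.lower()
        if "." in text and real_domain and text.lower().strip("/ ") not in (real_domain, href.lower()):
            flagged.append((text, href))
    return flagged

sample = '<p>Track your parcel at <a href="http://phish.example.net/track">fedex.com/track</a></p>'
print(suspicious_links(sample))  # -> [('fedex.com/track', 'http://phish.example.net/track')]
```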
Unfortunately, you can’t spot a malicious attachment as easily. Your best bet, then, is to avoid downloading any attachments unless you trust the source. Note: Not all spear phishing emails contain a payload. Hackers can also request a wire transfer or simply build rapport with their target before making a request down the line.
The Body Copy
Just like the subject line will create a sense of urgency, the body copy of the email will generally motivate the target to act. Look out for language that suggests there will be a consequence if you don’t act quickly. For example, a hacker may say that if a payment isn’t made within 2 hours, you’ll lose a customer. Or, if you don’t confirm your email address within 24 hours, your account will be deactivated. While spear phishing emails are generally carefully crafted, spelling errors and typos can also be giveaways. Likewise, you may notice language you wouldn’t expect from the alleged sender. For example, if an email appears to be sent from your CEO, but the copy doesn’t match previous emails from him or her, this could suggest that the email is a spear phishing attack.
What to do if you think an email is suspicious
Now that you know what to look out for, what do you do if you think you’ve caught a phish? If anything seems unusual, do not follow or click links or download attachments. If the email appears to be from a government organization or another trusted institution, visit their website via Google or your preferred search engine, find a support number, and ask them to confirm whether the communication is valid. If the email appears to come from someone you know and trust, like a colleague, reach out to the individual directly by phone, Slack, or a separate email thread. Rest assured, it’s better to confirm and proceed confidently than the alternative. Contact your line manager and/or IT team immediately and report the email. But it’s not fair to leave people as the last line of defense. Even the most tech-savvy people can fall for spear phishing attacks. Case in point: Last month, The SANS Institute – a global cybersecurity training and certifications organization – revealed that nearly 30,000 records of PII were compromised in a phishing attack that convinced an end-user to install a self-hiding and malicious Office 365 add-on. That means organizations should invest in technology that can detect and prevent these threats.
Tessian can help detect and prevent spear phishing attacks
Unlike spam filters and Secure Email Gateways (SEGs), which can stop bulk phishing attacks, Tessian Defender can detect and prevent a wide range of impersonations, spanning from more obvious, payload-based attacks to subtle, social-engineered ones. How? Tessian’s machine learning algorithms learn from historical email data to understand specific user relationships and the context behind each email. When an email lands in your inbox, Tessian Defender automatically analyzes millions of data points, including the email address, Display Name, subject line and body copy. If anything seems “off”, it’ll be flagged.
To learn more about how tools like Tessian Defender can prevent spear phishing attacks, speak to one of our experts and request a demo today. 
Compliance, Data Exfiltration, DLP, Spear Phishing
Compliance in the Legal Sector: Laws & How to Comply
16 September 2020
Thanks to the digital transformation and increasingly strict data security obligations, law firms’ business priorities are changing. Today, data protection, transparency, and privacy are top-of-mind.  It makes sense.  Keep reading to find out… Why the legal sector is bound to such strict compliance standards Which regulations govern law firms How cybersecurity can help ensure compliance Interested in learning more about regional compliance standards or those that impact other industries? Check out our Compliance Hub to find articles, tips, guides, and more.
Why is the legal sector bound to strict compliance standards? Lawyers’ hard drives, email accounts, and smartphones can contain anything from sensitive intellectual property and trade secrets to the Personally Identifiable Information (PII) of clients.  Unfortunately, hackers and cybercriminals are all too aware of this. It’s no surprise, then, that the legal sector is amongst the most targeted by social engineering attacks like spear phishing. Ransomware is a big problem, too. In fact, just a few months ago, Grubman Shire Meiselas & Sacks, a prominent media law firm, had its client information compromised.  Those behind the attack later threatened to auction some of these files concerning major celebrities for as much as $1.5 million unless the firm paid a $42 million ransom.  But, it’s not just inbound attacks that law firms have to worry about. Because the legal sector is highly competitive, incidents involving Insider Threats are a concern, too.  96% of IT leaders working in the legal sector say they’re worried that someone within the organization will cause a breach, either accidentally (via a misdirected email, for example) or maliciously.  The regulations governing law firms When it comes to data protection and privacy, the legal sector is subject to a relatively strict regulatory framework both under the law and rules imposed by professional bodies. Depending on where a firm is based and what its practice areas are, it can be subject to several stringent laws and regulations. This is especially true for firms operating in major markets like the United States, the United Kingdom, and the European Union. In this article, we’ll focus on some of the more general regulations and standards that all firms operating in these markets are expected to abide by. General Data Protection Regulation (GDPR) When the GDPR was introduced in 2018, it represented the largest change to data protection legislation in almost two decades. It also contains some of the most thorough compliance obligations for law firms and indeed any other entity that collects, stores, and processes data. The GDPR has been designed to help and guide organizations with a legitimate business interest as to how personal data should be handled and gives regulators the power to impose large fines on firms that aren’t compliant.  You can read more about the largest GDPR fines (so far) in 2020 on our blog. What is the GDPR’s purpose? The GDPR was introduced amid growing concerns surrounding the safety of personal data and the need to protect it from hackers, cybercrime, Insider Threats, unethical use, and the growing attack surface.  Essentially, it gives citizens full and complete control of their data, subject to some restrictions (for example, where data must be held by firms by law).  What is the scope of the GDPR? The legislation regulates the use of ‘personal data’ and applies to all organizations located within the EU, as well as organizations outside the EU who offer their goods or services to EU citizens. It also applies to organizations that hold data pertaining to EU citizens, regardless of their location.  What should law firms know about the GDPR? The main part of the GDPR that law firms should be paying attention to is Article 5.  This sets out the principles relating to the collection and processing of personal data. 
The six key principles are that personal data: Should be processed lawfully, fairly and in a transparent manner; Should only be collected for legitimate purposes; Should be limited to what’s necessary in relation to the purpose(s) for which it’s processed; Must be accurate and kept up to date, with any inaccurate data erased or rectified; Should not be held for longer than is necessary for its purposes*; and Should be held with adequate security against theft, loss, and/or damage. The GDPR also gives your clients the right to ask for their data to be removed (‘right to erasure’) without the need for any outside authorization. Note: Data can only be kept contrary to a client’s wishes to ensure compliance with other regulations.
What should a firm do in the event of a breach?
Before GDPR, law firms could follow their own protocols when dealing with a data breach. But now, the GDPR forces firms to report any data breaches, no matter how big or small they are, to the relevant regulatory authority within 72 hours. In the UK, for example, the regulatory authority is the Information Commissioner’s Office (ICO). The notification must contain: relevant details regarding the nature of the breach; the approximate number of people impacted; and the contact details of the firm’s Data Protection Officer (DPO). Clients who have had their personal data compromised must also be notified of the breach, the potential outcome, and any remediation “without undue delays”. It’s important to note that breaches aren’t always the result of malicious activity by an Insider Threat or hacker outside the organization. Even accidents can result in breaches. In fact, misdirected emails (emails sent to the wrong person) have consistently been one of the most frequently reported incidents to the ICO. That’s why it’s essential law firms (and other organizations) have safeguards in place to prevent mistakes like these from happening. Looking for a solution? Tessian Guardian prevents misdirected emails in some of the world’s most prestigious law firms, including Dentons, Hill Dickinson, and Travers Smith.
What are the penalties for non-compliance?
Financial penalties imposed for GDPR violations can be harsh, and they often are; regulatory authorities are keen to highlight just how important the GDPR is and how seriously it should be taken. Fines for non-compliance can be as high as 4% of annual global turnover or €20 million—whichever is higher.
American Bar Association Rule 1.6
Rule 1.6 governs the confidentiality of client information. It states, “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Simply put, lawyers must make efforts to protect the data of their clients. Two years ago, the American Bar Association issued new guidance in the form of Formal Opinion 483. This covers the importance of data protection and how firms should act when, not if, a security breach happens. This wording demonstrates that the ABA recognizes that breaches are part and parcel of firms operating in the modern world, and the statistics confirm this.
In essence, Formal Opinion 483 states:  Lawyers have a duty of competence in implementing adequate security measures regarding technology. Lawyers must reasonably and continuously assess their systems, operating procedures, and plans for mitigating a breach. In the event of a suspected or confirmed breach, lawyers must take steps to stop the attack and prevent any further loss of data. When a breach is detected and confirmed, lawyers must inform their clients in a timely manner and with enough information for clients to make informed decisions.  The bottom line: law firms must protect data with cybersecurity. Solicitors’ Regulation Authority Code of Conduct In the UK, solicitors are obliged under the Solicitors’ Regulation Authority (SRA) Code of Conduct to maintain effective systems and mitigate risks to client confidentiality and client money. Solicitors are also obliged to ensure systems comply more broadly with the SRA’s other regulatory arrangements.  The SRA says that, although being hacked or falling victim to a data breach is not necessarily a failure to meet these requirements, firms should take proportionate steps to protect themselves and their clients while retaining the advantages of advanced IT.  Where a report of cybercrime (note: crime, not a loss that takes place due to negligence) is received, the SRA takes a constructive approach in dealing with the firm, especially if the firm:  Is proactive and immediately notifies the SRA. Has taken steps to inform the client and as a minimum make good any loss. Shows they are taking steps to improve their systems and processes to reduce the risk of a similar incident happening again.  That means that, under the SRA’s Code of Conduct, law firms should take steps to prevent inbound attacks like spear phishing and set-up policies and processes that ensure swift reporting.  The good news is, Tessian can help with both inbound attacks and Insider Threats and has a history of successfully protecting law firms around the world from both. 
How Tessian helps law firms stay compliant Across all three of the regulations listed here, there’s one commonality: law firms are responsible for ensuring that their IT systems and processes are robust and secure enough to keep data safe and mitigate the chance of a breach taking place.  But, that’s easier said than done, especially in our dynamic and digitally connected world where threats are ever-evolving. So, where should law firms start? Email. 90% of all data breaches start on email and it’s the threat vector IT leaders are most concerned about protecting. That’s why Tessian is focused on protecting this channel. Across three solutions, Tessian detects and prevents threats using machine learning, which means it’s constantly adapting, without requiring maintenance from thinly-stretched security teams. Tessian Defender detects and prevents spear phishing Tessian Guardian detects and prevents accidental data loss via misdirected email Tessian Enforcer detects and prevents data exfiltration attempts from Insider Threats Importantly, Tessian is non-disruptive. That way, partners, lawyers, and administrators can do their jobs without security getting in the way. Tessian stops threats, not business.  To learn more about how Tessian helps law firms like Dentons, Hill Dickinson, and Travers Smith protect data, maintain client trust, and satisfy compliance standards, talk to one of our experts. 
Data Exfiltration DLP Human Layer Security Spear Phishing
Worst Email Mistakes at Work and How to Fix Them
By Maddie Rosenthal
10 September 2020
Everyone makes mistakes at work. It could be double-booking a meeting, attaching the wrong document to an email, or misinterpreting directions from your boss. While these snafus may cause red-faced embarrassment, they generally won't have any long-term consequences. But, what about mistakes that compromise cybersecurity? This happens more often than you might think. In fact, nearly half of employees say they've done it, and employees under 40 are among the most likely. In this article, we'll focus on email mistakes. You'll learn: The top five email mistakes that compromise cybersecurity How frequently these incidents happen What to do if you make a mistake on email
I sent an email to the wrong person At Tessian, we call this a misdirected email. If you've sent one, you're not alone. 58% of people say they've done it and, according to Tessian platform data, at least 800 are fired off every year in organizations with over 1,000 people. It's also the number one security incident reported to the Information Commissioner's Office (ICO) under the GDPR. (More on the consequences related to data privacy below.) Why does it happen so often? Well, because it's incredibly easy to do. It could be a simple typo in the recipient's email address, or it could be an incorrect suggestion from autocomplete.  What are the consequences of sending a misdirected email? While we've written about the consequences of sending an email to the wrong person in this article, here's a high-level overview:  Embarrassment  Fines under compliance standards like GDPR and CCPA Lost customer trust and increased churn Job loss Revenue loss Damaged reputation
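To make the failure mode concrete, here's a minimal sketch (in Python) of the kind of check that could flag a misdirected email before it leaves the outbox. It is not Tessian Guardian's actual algorithm; the addresses, the similarity threshold, and the "known contacts" list are all invented for illustration.

```python
# Minimal illustration only -- NOT Tessian Guardian's actual algorithm.
# Flags recipients the sender has never emailed before, and recipients whose
# domain is suspiciously close to a domain the sender uses all the time.

from difflib import SequenceMatcher

def check_recipients(recipients, known_contacts):
    """Return a list of human-readable warnings for an outgoing email."""
    known_domains = {addr.split("@")[1] for addr in known_contacts}
    warnings = []
    for addr in recipients:
        if addr in known_contacts:
            continue  # previously emailed -- nothing unusual
        domain = addr.split("@")[1]
        # A near-miss of a familiar domain often means a typo or bad autocomplete.
        for known in known_domains:
            if domain != known and SequenceMatcher(None, domain, known).ratio() > 0.85:
                warnings.append(f"{addr}: did you mean someone at {known}?")
                break
        else:
            warnings.append(f"{addr}: you've never emailed this address before.")
    return warnings

# Example: the sender normally emails colleagues at example-firm.com
print(check_recipients(
    ["jane@example-frim.com"],                              # transposed letters in the domain
    {"jane@example-firm.com", "sam@example-firm.com"},
))
```

Even this naive version catches the two slips described above: a brand-new recipient and a near-miss of a familiar domain.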
Real-world example of a misdirected email In 2019, the names of 47 claimants who were the victims of sexual abuse were leaked in an email from the program administrator after her email client auto-populated the wrong email address.  While the program administrator maintains that this doesn't qualify as a data leak or breach, the recipient of the email – who worked in healthcare and understands data privacy requirements under HIPAA – continues to insist that the 47 individuals must be notified.  As of September 2020, they still haven't been. I accidentally hit "reply all" or cc'ed someone instead of bcc'ing them Like sending a misdirected email, accidentally hitting "reply all" or cc instead of bcc are both easy mistakes to make.  What are the consequences of hitting "reply all" or cc instead of bcc? As you may have guessed, the consequences are the same as the consequences of sending a misdirected email. And, importantly, the consequences depend entirely on what information was contained in, or attached to, the email. For example, if you drafted a snarky response to a company-wide email and intended to send it to a single co-worker but ended up firing it off to everyone, you'll be embarrassed and may worry about your professional credibility.  But, if you replace that snarky response with a spreadsheet containing medical information about employees, you'll have to report the data loss incident, which could have long-term consequences. Real-world example of hitting "reply all" In 2018, an employee at the Utah Department of Corrections accidentally sent out a calendar invite for her division's annual potluck. Harmless, right? Wrong. Instead of sending the invite to 80 people, it went to 22,000 – nearly every employee in Utah's state government. While there were no long-term consequences (i.e., it wasn't considered a data loss incident or breach), it does go to show how easily data can travel and land in the wrong hands.  Real-world example of cc'ing someone instead of bcc'ing them On January 21, 2020, 450 customer email addresses were inadvertently exposed after they were copied, rather than blind copied, into an email. The email was sent by an employee at speaker-maker Sonos and, while it was an accident, under GDPR, the mistake is considered a potential breach.  I fell for a phishing scam According to Tessian research, 1 in 4 employees has clicked on a phishing email. And the odds aren't exactly in our favor. In 2019, 22% of breaches involved phishing… and 96% of phishing attacks start on email. (You can find more Phishing Statistics here.) Like sending an email to the wrong person, it's easy to do, especially when we're distracted, stressed, or tired. But, it doesn't just come down to psychology. Phishing scams are getting harder and harder to detect as hackers use increasingly sophisticated techniques to dupe us.  What are the consequences of falling for a phishing scam? Given the top five "types" of data that are compromised in phishing attacks (see below), the consequences of a phishing attack are virtually limitless. Identity theft. Revenue loss. Customer churn. 
A wiped hard drive. But, the top five "types" of data that are compromised in a phishing attack are: credentials (passwords, usernames, PIN numbers), personal data (name, address, email address), internal data (sales projections, product roadmaps), medical data (treatment information, insurance claims), and bank data (account numbers, credit card information). Real-world example of a successful phishing attack In August 2020, the SANS Institute – a global cybersecurity training and certifications organization – revealed that nearly 30,000 records of PII were compromised in a phishing attack that convinced an end-user to install a self-hiding and malicious Office 365 add-on. While no passwords or financial information were compromised and all the affected individuals have been notified, the breach goes to show that anyone – even cybersecurity experts – can fall for phishing scams. But, most phishing attacks have serious consequences. According to one report, 60% of organizations lose data. 50% have credentials or accounts compromised. Another 50% are infected with ransomware. 35% experience financial losses. I sent an unauthorized email As a part of a larger cybersecurity strategy, most organizations will have policies in place that outline what data can be moved outside the network and how it can be moved outside the network. Generally speaking, sending data to personal email accounts or third parties is a big no-no. At Tessian, we call these emails "unauthorized" and they're sent 38x more than IT leaders estimate. Tessian platform data shows that nearly 28,000 unauthorized emails are sent in organizations with 1,000 employees every year.  So, why do people send them? It could be well-intentioned. For example, sending a spreadsheet to your personal email address to work over the weekend. Or, it could be malicious. For example, sending trade secrets to a third party in exchange for a job opportunity.  What are the consequences of sending an unauthorized email? Whether well-intentioned or malicious, the consequences are the same: if the email contains data, it could be considered a data loss incident or even a breach. In that case, the consequences include: Lost data Lost intellectual property Revenue loss Losing customers and/or their trust Regulatory fines Damaged reputation No sensitive data involved? The consequences will depend on the organization and existing policies. But, you should (at the very least) expect a warning.  Real-world example of an unauthorized email In 2017, an employee at Boeing shared a spreadsheet with his wife in hopes that she could help solve formatting issues. While this sounds harmless, it wasn't. The personal information of 36,000 employees was exposed, including employee ID data, places of birth, and accounting department codes. You can find more real-world examples of "Insider Threats" in this article: Insider Threats: Types And Real-World Examples How can I avoid making mistakes on email? The easiest answer is: be vigilant. Double-check who you're sending emails to and what you're sending. Make sure you understand your company's policies when it comes to data. Be cautious when responding to requests for information or money.  But vigilance alone isn't enough. To err is human and, as we said at the beginning of this article, everyone makes mistakes.  That's why, to prevent email mistakes, data loss, and successful targeted attacks, organizations need to implement email security solutions that prevent human error. That's exactly what Tessian does. 
Powered by machine learning, our Human Layer Security technology understands human behavior and relationships. Tessian Guardian automatically detects and prevents misdirected emails Tessian Enforcer automatically detects and prevents data exfiltration attempts Tessian Defender automatically detects and prevents spear phishing attacks Importantly, Tessian’s technology automatically updates its understanding of human behavior and evolving relationships through continuous analysis and learning of the organization’s email network. That means it gets smarter over time to keep you protected, always.  Interested in learning more about how Tessian can help prevent email mistakes in your organization? You can read some of our customer stories here or book a demo.
Compliance Customer Stories Data Exfiltration DLP Human Layer Security Spear Phishing
18 Actionable Insights From Tessian Human Layer Security Summit
By Maddie Rosenthal
09 September 2020
In case you missed it, Tessian hosted its third (and final) Human Layer Security Summit of 2020 on September 9. This time, we welcomed over a dozen security and business leaders from the world's top institutions to our virtual stage, including: Jeff Hancock from Stanford University David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec Merritt Baer, Principal Security Architect at AWS Rachel Beard, Principal Security Technical Architect at Salesforce  Tim Fitzgerald, CISO at Arm  Sandeep Amar, CPO at MSCI  Martyn Booth, CISO at Euromoney  Kevin Storli, Global CTO and UK CISO at PwC Elvis M. Chan, Supervisory Special Agent at the FBI  Nina Schick, Author of "Deep Fakes and the Infocalypse: What You Urgently Need to Know" Joseph Blankenship, VP Research, Security & Risk at Forrester Howard Schultz, Former CEO at Starbucks  While you can watch the full event on YouTube below, we've identified 18 valuable insights that security, IT, compliance, and business leaders should apply to their strategies as they round out this year and look forward to the next.
Here’s what we learned at Tessian’s most recent Human Layer Security Summit. Not sure what Human Layer Security is? Check out this guide which covers everything you need to know about this new category of protection.  1. Cybersecurity is mission-critical Security incidents – whether it’s a ransomware attack, brute force attack, or data leakage from an insider threat – have serious consequences. Not only can people lose their jobs, but businesses can lose customer trust, revenue, and momentum. While this may seem obvious to security leaders, it may not be so obvious to individual departments, teams, and stakeholders. But it’s essential that this is communicated (and re-communicated).  Why? Because a company that’s breached cannot fulfill its mission. Keep reading for insights and advice around keeping your company secure, all directly from your peers in the security community. 2. Most breaches start with people People control our most sensitive systems and data. It makes sense, then, that most data breaches start with people. But, that doesn’t mean employees are the weakest link. They’re a business’ strongest asset! So, it’s all about empowering them to make better security decisions. That’s why organizations have to adopt people-centric security solutions and strategies.
The good news is, security leaders don't face an uphill battle when it comes to helping employees understand their responsibility for cybersecurity… 3. Yes, employees are aware of their duty to protect data Whether it's because of compliance standards, cybersecurity headlines in mainstream media, or a larger focus on privacy and protection at work, Martyn Booth, CISO at Euromoney, reminded us that most employees are actually well aware of the responsibility they bear when it comes to safeguarding data.  This is great news for security leaders. It means the average employee will be more likely to abide by policies and procedures, will pay closer attention during awareness training, and will therefore contribute to a more positive security culture company-wide. Win-win. 4. But, employees are more vulnerable to phishing scams outside of their normal office environment  While – yes – employees are more conscious of cybersecurity, the shift to remote working has also left them more vulnerable to attacks like phishing scams.  "We have three 'places': home, work, and where we have fun. When we combine two places into one, it's difficult psychologically. When we're at home sitting at our coffee table, we don't have the same cues that remind us to think about security that we do in the office. This is a huge disruption," Jeff Hancock, Professor at Stanford University, explained.  Unfortunately, hackers are taking advantage of these psychological vulnerabilities. And, as David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec, pointed out, this isn't anything new. Cybercriminals have always been opportunistic in their attacks and therefore take advantage of chaos and emotional distress.  To prevent successful opportunistic attacks, he recommends that you: reassess what the new baseline is for attacks; educate employees on what threats look like today, given recent events; and identify which brands, organizations, people, and departments may be impersonated (and targeted) in relation to the pandemic. But, it's not just inbound email attacks we need to be worried about.  5. They're more likely to make other mistakes that compromise cybersecurity, too This change to our normal environment doesn't just affect our ability to spot phishing attacks. It also makes us more likely to make other mistakes that compromise cybersecurity. Across nearly every session, our guest speakers said they've seen more incidents involving human error and that security leaders should expect this trend to continue. That's why training, policies, and technology are all essential components of any security strategy. More on this below. 6. Security awareness training has to be ongoing and ever-evolving At our first Human Layer Security Summit back in March, Mark Logsdon, Head of Cyber Assurance and Oversight at Prudential, highlighted three key flaws in security awareness training: it's boring, it's often irrelevant, and it's expensive. What he said is still relevant six months on and it's a bigger problem than ever, especially now that the perimeter has disappeared, security teams are short-handed, and individual employees are working at home and on their own devices. So, what can security leaders do?  Kevin Storli, Global CTO and UK CISO at PwC, highlighted the importance of tailoring training to ensure it's always relevant. 
That means that instead of just reminding employees about compliance standards and the importance of a strong password, we should also be focusing on educating employees about remote access, endpoints, and BYOD policies. But one training session isn’t enough to make security best practice really stick. These lessons have to be constantly reinforced through gamification, campaigns, and technology.  Tim Fitzgerald, CISO at Arm highlighted how Tessian’s in-the-moment warnings have helped his employees make the right decisions at the right time.  “Warnings help create that trigger in their brain. It makes them pause and gives them that extra breath before taking the next potentially unsafe step. This is especially important when they’re dealing with data or money. Tessian ensures they question what they’re doing,” he said.
7. You have to combine human policies with technical controls to ensure security  It’s clear that technology and training are both valuable. That means your best bet is to combine the two. In discussion with Ed Bishop, Tessian Co-Founder and CTO, Merritt Baer, Principal Security Architect at AWS and Rachel Beard, Principal Security Technical Architect at Salesforce, both highlighted how important it is for organizations to combine policies with technical controls. But security teams don’t have to shoulder the burden alone. When using tools like Salesforce, for example, organizations can really lean on the vendor to understand how to use the platform securely. Whether it’s 2FA, customized policies, or data encryption, many security features will be built-in.  8. But…Zero Trust security models aren’t always the answer While – yes – it’s up to security teams to ensure policies and controls are in place to safeguard data and systems, too many policies and controls could backfire. That means that “Zero Trust” security models aren’t necessarily the best way to prevent breaches.
9. Security shouldn’t distract people from their jobs  Security teams implement policies and procedures, introduce new software, and make training mandatory for good reason. But, if security becomes a distraction for employees, they won’t exercise best practice.  The truth is, they just want to do the job they were hired to do!  Top tip from the event: Whenever possible, make training and policies customized, succinct, and relevant to individual people or departments.  10. It also shouldn’t prevent them from doing their jobs  This insight goes back to the idea that “Zero Trust” security models may not be the best way forward. Why? Because, like Rachel, Merrit, Sandeep, and Martyn all pointed out: if access controls or policies prevent an employee from doing their job, they’ll find a workaround or a shortcut. But, security should stop threats, not flow. That’s why the most secure path should also be the path of least resistance. Security strategies should find a balance between the right controls and the right environment.  This, of course, is a challenge, especially when it comes to rule-based solutions. “If-then” controls are blunt instruments. Solutions powered by machine learning, on the other hand, detect and prevent threats without getting in the way. You can learn more about the limitations of traditional data loss prevention solutions in our report The State of Data Loss Prevention 2020.  11. Showing downtrending risks helps demonstrate the ROI of security solutions  Throughout the event, several speakers mentioned that preemptive controls are just as important as remediation. And it makes sense. Better to detect risky behavior before a security incident happens, especially given the time and resources required in the event of a data breach.  But tracking risky behavior is also important. That way, security leaders can clearly demonstrate the ROI of security solutions. Martyn Booth, CISO at Euromoney, explained how he uses Tessian Human Layer Security Intelligence to monitor user behavior, influence safer behavior, and track risk over time. “We record how many alerts are sent out and how employees interact with those alerts. Do they follow the acceptable use policy or not? Then, through our escalation workflows that ingest Tessian data, we can escalate or reinforce. From that, we’ve seen incidents involving data exfiltration trend downwards over time. This shows a really clear risk reduction,” he said. 12. Targeted attacks are becoming more difficult to spot and hackers are using more sophisticated techniques As we mentioned earlier, hackers take advantage of psychological vulnerabilities. But, social media has turbo-charged cybercrime, enabling cybercriminals to create more sophisticated attacks that can be directed at larger organizations. Yes, even those with strong cybersecurity. Our speakers mentioned several examples, including Garmin and Twitter. So, how do they do it? Research! LinkedIn, company websites, out-of-office messages, press releases, and news articles all provide valuable information that a hacker could use to craft a believable email. But, there are ways to limit open-source recon. See tips from David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec, below. 
13. Deepfakes are a serious concern Speaking of social media, Elvis M. Chan, Supervisory Special Agent at the FBI, and Nina Schick, Author of "Deep Fakes and the Infocalypse: What You Urgently Need to Know", took a deep dive into deepfakes. And, according to Nina, "This is not an emerging threat. This threat is here. Now." While we tend to associate deepfakes with election security, it's important to note that this is a threat that affects businesses, too.  In fact, Tim Fitzgerald, CISO at Arm, cited an incident in which his CEO was impersonated in a deepfake over WhatsApp. The ask? A request to move money. According to Tim, it was quite compelling.  Unfortunately, deepfakes are surprisingly easy to make and generation is outpacing detection. But, clear policies and procedures around authenticating and approving requests can ensure these scams aren't successful. Not sure what a deepfake is? We cover everything you need to know in this article: Deepfakes: What Are They and Why Are They a Threat? 14. Supply chain attacks are, too  In conversation with Henry Trevelyan Thomas, Head of Customer Success at Tessian, Kevin Storli, Global CTO and UK CISO at PwC, discussed how organizations with large supply chains are especially vulnerable to advanced impersonation attacks like spear phishing. "It's one thing to ensure your own organization is secure. But, what about your supply chain? That's a big focus for us: ensuring our supply chain has adequate security controls," he said. Why is this so important? Because hackers know large organizations like PwC will have robust security strategies. So, they'll look for vulnerabilities elsewhere to gain a foothold. That's why strong cybersecurity can actually be a competitive differentiator and help businesses attract (and keep) more customers and clients.  15. People will generally make the right decisions if they're given the right information 88% of data breaches start with people. But, that doesn't mean people are careless or malicious. They're just not security experts. That's why it's so important security leaders provide their employees with the right information at the right time. Both Sandeep Amar, CPO at MSCI, and Tim Fitzgerald, CISO at Arm, talked about this in detail.  It could be a guide on how to spot spear phishing attacks or – as we mentioned in point #6 – in-the-moment warnings that reinforce training.  Check out their sessions for more insights.  16. Success comes down to people While we've talked a lot about human error and psychological vulnerabilities, one thing was made clear throughout the Human Layer Security Summit. A business's success is completely reliant on its people. And, we don't just mean in terms of security. Howard Schultz, Former CEO at Starbucks, offered some incredible advice around leadership, which we can all heed, regardless of our role. In particular, he recommended: Creating company values that really guide your organization Ensuring every single person understands how their role is tied to the goals of the organization Leading with truth, transparency, and humility
17. But people are dealing with a lot of anxiety right now Whether you're a CEO or a CISO, you have to be empathetic towards your employees. And, the fact is, people are dealing with a lot of anxiety right now. Nearly every speaker mentioned this. We're not just talking about the global pandemic.  We're talking about racial and social inequality. Political unrest. New working environments. Bigger workloads. Mass lay-offs.  Joseph Blankenship, VP Research, Security & Risk at Forrester, summed it up perfectly, saying, "We have an anxiety-ridden user base and an anxiety-ridden security base trying to work out how to secure these new environments. We call them users, but they're actually human beings and they're bringing all of that anxiety and stress to their work lives." That means we all have to be human first. And, with all of this in mind, it's clear that… 18. The role of the CISO has changed  Sure, CISOs are – as the name suggests – responsible for security. But, to maintain security company-wide, initiatives have to be perfectly aligned with business objectives, and every individual department, team, and person has to understand the role they play. Kevin Storli, Global CTO and UK CISO at PwC, touched on this in his session. "To be successful in implementing security change, you have to bring the larger organization along on the journey. How do you get them to believe in the mission? How do you communicate the criticality? How do you win the hearts and minds of the people? CISOs no longer live in the back office and address just tech aspects. It's about being a leader and using security to drive value." That's a tall order and means that CISOs have to wear many hats. They need to be technology experts while also being laser-focused on the larger business. And, to build a strong security culture, they have to borrow tactics from HR and marketing.  The bottom line: The role of the CISO is more essential now than ever. It makes sense. Security is mission-critical, remember? If you're looking for even more insights, make sure you watch the full event, which is available on-demand. You can also check out previous Human Layer Security Summits on YouTube.
Human Layer Security Spear Phishing
Why We Click: The Psychology Behind Phishing Scams and How to Avoid Being Hacked
07 September 2020
We all know the feeling: that awful sinking in your stomach when you realize you've clicked a link that you shouldn't have. Maybe it was late at night, or you were in a hurry. Maybe you received an alarming email about a problem with your paycheck or your taxes. Whatever the reason, you reacted quickly and clicked a suspicious link or gave away personal information, only to realize you made a dangerous mistake.  You're not alone. In a recent survey conducted by my company, Tessian, two-fifths (43%) of people admitted to making a mistake at work that had security repercussions, while nearly half (47%) of people working in the tech industry said they've clicked on a phishing email at work. In fact, most data breaches occur because of human error. Hackers are well aware of this and know exactly how to manipulate people into slipping up. That's why email scams — also known as phishing — are so successful.  Phishing has been a persistent problem during the COVID-19 pandemic. In April, Google alone saw more than 18 million daily email scams related to COVID-19 in a single week. Hackers are taking advantage of psychological factors like stress, social relationships and uncertainty that affect people's decision-making. Here's a look at some of the psychological factors that make people vulnerable and what to look out for in a scam. 
Stress and Anxiety Take A Toll Hackers thrive during times of uncertainty and unrest, and 2020 has been a heyday for them. In the last few months they’ve posed as government officials, urging recipients to return stimulus checks or unemployment benefits that were “overpaid” and threatening jail time. They’ve also impersonated health officials, prompting the World Health Organization to issue an alert warning people not to fall for scams implying association with the organization. Other COVID scams have lured users by offering antibody tests, PPE and medical equipment. Where chaos leads, hackers follow. The stressful events of this year mean that cybersecurity is not top-of-mind for many of us. But foundational principles of human psychology also suggest that these same events can easily lead to poor or impulsive decisions online. More than half (52%) of those in our survey said that stress causes them to make more mistakes. The reason for this has to do with how stress impacts our brains, specifically our ability to weigh risk and reward. Studies have shown that anxiety can disrupt neurons in the brain’s prefrontal cortex that help us make smart decisions, while stress can cause people to weigh the potential reward of a decision over possible risks, to the point where they even ignore negative information. When confronted with a potential scam, it’s important to stop, take a breath, and weigh the potential risks and negative information like suspicious language or misspelled words. Urgency can also add stress to an otherwise normal situation — and hackers know to take advantage of this. Look out for emails, texts or phone calls that demand money or personal information within a very short window. Hacking Your Network Some of the most common phishing scams impersonate someone in your “known” network, but your “unknown” network can also be manipulated. Your known network consists of your friends, family and colleagues — people you know and trust. Hackers exploit these relationships, betting they can sway someone to click on a link if they think it’s coming from someone they know. These impersonation scams can be quite effective because they introduce emotion to the decision-making progress. If a phone call or email claims your family member needs money for a lawyer or a medical procedure, fear or worry replace logic. Online scams promising money add greed into the equation, while phishing emails impersonating someone in authority or someone you admire, like a boss or colleague, cloud deductive reasoning with our desire to be liked. The difference between clicking a dangerous link or deleting the email can involve simply recognizing the emotions being triggered and taking a second look with logic in mind.  Meanwhile, the rise of social media and the abundance of personal information online has allowed hackers to impersonate your “unknown” network as well — people you might know. Hackers can easily find out where you work or where you went to school and use that information to send an email posing as a college alumnus to seek money or personal information. An easy way to check a suspicious email is by looking beyond the display name to examine the full email address of the sender by clicking the name. Scammers will often change, delete or add on a letter to an email address. 
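To see how small those changes can be, here's a rough Python sketch that measures how far a sender's domain is from a domain you already trust. The trusted list, the example addresses, and the two-edit threshold are assumptions for illustration only; real email security tools rely on far richer signals than edit distance.

```python
# Illustrative sketch only: flag sender domains that are one or two edits away
# from a domain you already trust (e.g. "paypa1.com" impersonating "paypal.com").
# The trusted list and threshold here are assumptions, not a product feature.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def looks_like_impersonation(sender: str, trusted_domains: list[str]) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    for trusted in trusted_domains:
        distance = edit_distance(domain, trusted)
        if 0 < distance <= 2:   # close to a trusted domain, but not identical
            return True
    return False

print(looks_like_impersonation("support@paypa1.com", ["paypal.com"]))  # True
print(looks_like_impersonation("support@paypal.com", ["paypal.com"]))  # False
```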
The Impact of Distraction and New Surroundings The rise of remote work brought on by COVID-19 can also impact people’s psychological states and make them vulnerable to scams. Remote work can bring an overwhelming combination of video call fatigue, an “always on” mentality and household responsibilities like childcare. In fact, 57% of those surveyed in our report said they feel more distracted when working from home. Why is this a problem from a cybersecurity standpoint? Distraction can impair our decision-making abilities. Forty-seven percent of employees cited distraction as the top reason for falling for a phishing scam. While many people tend to have their guard up in a physical office, we tend to relax at home and may let our guard down, even if we’re working. With an estimated 70% of employees working from home part or full-time due to COVID-19, this creates an opportunity for hackers.  It’s also more difficult to verify a legitimate request from an impersonation when you’re not in the same office as a colleague. One common scam impersonates an HR staff member to request personal information from employees at home. When in doubt, don’t click any links, download attachments or provide sensitive data like passwords, financial information or a social security number until you can confirm a request with a colleague directly. Self-Care and Awareness  These scams will always be out there, but that doesn’t mean people should constantly worry and keep their guard up — that would be exhausting. A simple combination of awareness and self-care when online can make a big difference.  Once you know the tactics a hacker might use and the psychological factors like stress, emotions and distraction to look out for, it will be easier to spot an email scam without the anxiety. It’s also important to take breaks and prioritize self-care when you’re feeling stressed or tired. Step away from the computer when you can and have a conversation with your manager about why the pressure to be “always-on” when working remotely can have a negative impact psychologically and create cybersecurity risks. By understanding why people fall for these scams, we can start to find ways to easily identify and avoid them.  This article was originally published in Fast Company and was co-authored by Tim Sadler, CEO of Tessian and Jeff Hancock, Harry and Norman Chandler Professor of Communication at Stanford University 
Spear Phishing
How to Avoid Falling Victim to Voting Scams in the 2020 U.S. Election
By Laura Brooks
28 August 2020
Scammers thrive in times of crisis and confusion. This is perhaps why the controversy surrounding mail-in voting could prove to be another golden opportunity for cybercriminals.  Throughout 2020, we've seen a surge of cybercriminals capitalizing on key and newsworthy moments in the COVID-19 crisis, creating scams that take advantage of the stimulus checks, the Paycheck Protection Program and students heading back to school.  Knowing that people are seeking answers during uncertain times, hackers craft scams – usually in the form of phishing emails – that appear to provide the information people are looking for. Instead, victims are lured to fake websites that are designed to steal their valuable personal or financial information.  Hackers are creating websites related to mail-in voting Given the uncertainties surrounding election security and voters' safety during the pandemic, fueled further by President Trump's recent attacks against the US Postal Service, it's highly likely that scammers will set their sights on creating scams associated with mail-in voting.  In fact, our researchers discovered that around 75 domains spoofing websites related to mail-in voting were registered between July 2 and August 6.  Some of these websites tout information about voting-by-mail, such as mymailinballot.com and mailinyourvote.com. Others encourage voters to request or track their ballot, such as requestmailinballot.com and myballotracking.com.  Anyone accessing these websites should be wary, though. Keep reading to find out why. What risks do these spoofed domains pose?  To understand the risks these spoofed domains pose, consider why hackers create them. They're after sensitive information like your name, address, and phone number as well as financial information like your credit card details. For example, if a malicious website claims to offer visitors a way to register to vote or cast their vote – which several of these newly created domains did – there will be a form that collects personally identifiable information (PII). Likewise, if a malicious website is asking for donations, visitors will be asked to enter credit card details.  If any of this information falls into the wrong hands, it could be sold on the dark web, resulting in identity theft or payment card fraud.  Of course, not every domain that our researchers discovered can be deemed malicious. But, it's important you stay vigilant and never provide personal information unless you trust the domain.
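One practical way to apply that advice is to look at which domain a link actually belongs to, rather than what its long hostname appears to say. The snippet below is a rough Python illustration; the URLs are invented (using the reserved .example suffix), and the naive "last two labels" rule ignores multi-part suffixes like .co.uk, so treat it as a sketch rather than a production parser.

```python
# Rough illustration: which domain does a URL actually belong to?
# The URLs below are made up for the example; the naive "last two labels" rule
# ignores multi-part suffixes like .co.uk, so this is a sketch only.

from urllib.parse import urlparse

def registered_domain(url: str) -> str:
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# Looks official at a glance, but the site really belongs to ballot-request.example:
print(registered_domain("https://www.usa.gov.ballot-request.example/track"))
# -> "ballot-request.example"

print(registered_domain("https://www.usa.gov/how-to-vote"))
# -> "usa.gov"
```

The point of the sketch: everything to the left of the registered domain is freely chosen by whoever owns it, so an official-sounding prefix proves nothing.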
So, how can voters avoid falling for mail-in voting scams?  Here are some tips to help you avoid falling victim to voting scams in the upcoming election:  1. Find answers online, but don’t trust everything you read It’s perfectly reasonable to look online for answers about how to vote. There’s a lot of useful information about ordering absentee ballots and locating local secure ballot boxes. However, be aware that there is a lot of misinformation online, particularly around this year’s election. Source information from trusted websites like https://www.usa.gov/how-to-vote.  2. Think twice before sharing personal details Before entering any personal or financial details, always check the URL of the domain and verify the legitimacy of the service by calling them directly. Question domains or pop-ups that request personal information from you, especially as it relates to your voting preference or other personal information. 3. Never share direct deposit details, credit card information, or your Social Security number on an unfamiliar website This information should be kept private and confidential. If a website asks you to share details like this, walk away.  Keep up with our blog for more insights, analysis, and tips for staying safe online. 
Compliance Data Exfiltration DLP Spear Phishing
August Cybersecurity News Roundup
By Maddie Rosenthal
28 August 2020
The end of the month means another roundup of the top cybersecurity headlines. Keep reading for a summary of the top 12 stories from August. Bonus: We've included links to extra resources in case anything piques your interest and you want to take a deeper dive. Did we miss anything? Email [email protected] Russian charged with trying to recruit Tesla employee to plant malware  Earlier this week, news broke that the FBI had arrested Egor Igorevich Kriuchkov – a 27-year-old Russian citizen – for trying to recruit a Tesla employee to plant malware inside the Gigafactory Nevada. The plan? Insert malware into the electric car maker's system, causing a distributed denial of service (DDoS) attack to occur. This would essentially give hackers free rein over the system.  But, instead of breaching the network, the Russian-speaking employee turned down Kriuchkov's million-dollar offer (to be paid in cash or bitcoin) and instead worked closely with the FBI to thwart the attack. Feds warn election officials of potentially malicious 'typosquatting' websites Stories of election fraud have dominated headlines over the last several months. The latest story involves suspicious "typosquatting" websites that may be used for credential harvesting, phishing, and influence operations.
While the FBI hasn’t yet identified any malicious incidents, they have found dozens of illegitimate websites that could be used to interfere with the 2020 vote.   To stay safe, make sure you double-check any URLs you’ve typed in and never input any personal information unless you trust the domain.  Former Google engineer sent to prison for stealing robocar secrets An Insider Threat at Google who exfiltrated 14,000 files five years ago has been sentenced to 18 months in prison. The sentencing came four months after Anthony Levandowski plead guilty to stealing trade secrets, including diagrams and drawings related to simulations, radar technology, source code snippets, PDFs marked as confidential, and videos of test drives.  He’s also been ordered to pay more than $850,000. Looking for more information about the original incident? Check out this article: Insider Threats: Types and Real-World Examples. All the information you need is under Example #4. For six months, security researchers have secretly distributed an Emotet vaccine across the world Emotet – one of today’s most skilled malware groups – has caused security and IT leaders headaches since 2014.  But, earlier this year, James Quinn, a malware analyst working for Binary Defense, discovered a bug in Emotet’s code and was able to put together a PowerShell script that exploited the registry key mechanism to crash the malware. According to ZDNet, he essentially created “both an Emotet vaccine and killswitch at the same time.” Working with Team CYMRU, Binary Defense handed over the “vaccine” to national Computer Emergency Response Teams (CERTs), which then spread it around the world to companies in their respective jurisdictions. Online business fraud down, consumer fraud up New research from TransUnion shows that between March and July, hackers have started to change their tactics. Instead of targeting businesses, they’re now shifting their focus to consumers. Key findings include: Consumer fraud has increased 10%, while business fraud has declined 9% since the beginning of the pandemic Nearly one-third of consumers have been targeted by COVID-19 related fraud Phishing is the most common method used in fraud schemes You can read the full report here. FBI and CISA issue warning over increase in vishing attacks A joint warning from the Federal Bureau of Investigations (FBI) and Cybersecurity Infrastructure Security Agency (CISA) was released in mid-August, cautioning the public that they’ve seen a spike in voice phishing attacks (known as vishing).  They’ve attributed the increase in attacks to the shift to remote working. Why? Because people are no longer able to verify requests in-person. Not sure what vishing is? Check out this article, which outlines how hackers are able to pull off these attacks, how you can spot them, and what to do if you’re targeted.  TikTok sues U.S. government over Trump ban In last month’s cybersecurity roundup, we outlined why India had banned TikTok and why America might be next. 30 days later, we have a few updates. On August 3, President Trump said TikTok would be banned in the U.S. unless it was bought by Microsoft (or another company) before September 15. Three days later, Trump signed an executive order barring US businesses from making transactions with TikTok’s parent company, ByteDance. The order will go into effect 45 days after it was signed. A few weeks later, ByteDance filed a lawsuit against the U.S. 
government, arguing the company was denied due process to argue that it isn't actually a national security threat. In the meantime, TikTok is continuing its sales conversations with Microsoft and Oracle. Stay tuned next month for an update on what happens in the next 30 days. A Stanford deception expert and cybersecurity CEO explain why people fall for online scams According to a new research report – The Psychology of Human Error – nearly half of employees have made a mistake at work that had security repercussions. But why? Employees say stress, distraction, and fatigue are part of the problem and drive them to make more mistakes at work, including sending emails to the wrong people and clicking on phishing emails.  And, as you might expect, the sudden transition to remote work has only added fuel to the fire. 57% of employees say they're even more distracted when working from home.  To avoid making costly mistakes, Jeff Hancock, a professor at Stanford, recommends taking breaks and prioritizing self-care. Of course, cybersecurity solutions will help prevent employees from causing a breach, too. University of Utah pays $457,000 to ransomware gang On August 21, the University of Utah posted a statement on its website saying that they were the victim of a ransomware attack and, to avoid hackers leaking sensitive student information, they paid $457,000. But, according to the statement, the hackers only managed to encrypt 0.02% of the data stored on their servers. While the University hasn't revealed which ransomware gang was behind the attack, they have confirmed that the attack took place on July 19, that it was the College of Social and Behavioral Sciences that was hacked, and that the university's cyber insurance policy paid for part of the ransom. Verizon analyzed the COVID-19 data breach landscape This month, Verizon updated its annual Data Breach Landscape Report to include new facts and figures related to COVID-19. Here are some of the trends to look out for based on their findings: Breaches caused by human error will increase. Why? Many organizations are operating with fewer staff than before due to either illness or layoffs. Some staff may also have limitations because of new remote working set-ups. When you combine that with larger workloads and more distractions, we're bound to see more mistakes. Organizations should be especially wary of stolen-credential related hacking, especially as many IT and security teams are working to lock down and maintain remote access.  Ransomware attacks will increase in the coming months. SANS Institute Phishing Attack Leads to Theft of 28,000 Records  The SANS Institute – a global cybersecurity training and certifications organization – revealed that nearly 30,000 records of PII were compromised in a phishing attack that convinced an end-user to install a self-hiding and malicious Office 365 add-on. While no passwords or financial information were compromised and all the affected individuals have been notified, the breach goes to show that anyone – even cybersecurity experts – can fall for phishing scams. The cybersecurity skills shortage is getting worse In March, Tessian released its Opportunity in Cybersecurity Report, which set out to answer one (not-so-simple) question: Why are there over 4 million unfilled positions in cybersecurity, and why is the workforce twice as likely to be male as female? 
The answer is multi-faceted and has a lot to do with a lack of knowledge of the industry and inaccurate perceptions of what it means to work in cybersecurity.  The bad news is, it looks like the problem is getting worse. A recent report, The Life and Times of Cybersecurity Professionals 2020, shows that only 7% of cybersecurity professionals say their organization has improved its position relative to the cybersecurity skills shortage in the last several years. Another 58% say their organizations should be doing more to bridge the gap. What do you think will help encourage more people to join the industry?  That’s all for this month! Keep up with us on social media and check our blog for more updates.
Human Layer Security Spear Phishing
Must-Know Phishing Statistics: Updated 2020
By Maddie Rosenthal
25 August 2020
Phishing attacks aren’t a new threat. In fact, these scams have been circulating since the mid-’90s. But, over time, they’ve become more and more sophisticated, have targeted larger numbers of people, and have caused more harm to both individuals and organizations. That means that this year – despite a growing number of vendors offering anti-phishing solutions – phishing is a bigger problem than ever. The problem is so big, in fact, that it’s hard to keep up with the latest facts and figures. That’s why we’ve put together this article. We’ve rounded up the latest phishing statistics, including: The frequency of phishing attacks The tactics employed by hackers The data that’s compromised by breaches The cost of a breach The most targeted industries The most impersonated brands  Facts and figures related to COVID-19 scams Looking for something more visual? Check out this infographic with key statistics.
If you’re familiar with phishing, spear phishing, and other forms of social engineering attacks, skip straight to the first category of 2020 phishing statistics. If not, we’ve pulled together some of our favorite resources that you can check out first to learn more about this hard-to-detect security threat.  How to Identify and Prevent Phishing Attacks What is Spear Phishing? Spear Phishing Demystified: The Terms You Need to Know Phishing vs. Spear Phishing: Differences and Defense Strategies How to Catch a Phish: A Closer Look at Email Impersonation CEO Fraud Email Attacks: How to Recognize & Block Emails that Impersonate Executives Business Email Compromise: What it is and How it Happens Whaling Attacks: Examples and Prevention Strategies  The frequency of phishing attacks According to Verizon’s 2020 Data Breach Investigations Report (DBIR), 22% of breaches in 2019 involved phishing. While this is down 6.6% from the previous year, it’s still the “threat action variety” most likely to cause a breach.  The frequency of attacks varies industry-by-industry (click here to jump to key statistics about the most phished). But 88% of organizations around the world experienced spear phishing attempts in 2019. Another 86% experienced business email compromise (BEC) attempts.  But, there’s a difference between an attempt and a successful attack. 65% of organizations in the United States experienced a successful phishing attack. This is 10% higher than the global average.  The tactics employed by hackers 96% of phishing attacks arrive by email. Another 3% are carried out through malicious websites and just 1% via phone. When it’s done over the telephone, we call it vishing and when it’s done via text message, we call it smishing. According to Symantec’s 2019 Internet Security Threat Report (ISTR), the top five subject lines for business email compromise (BEC) attacks: Urgent Request Important Payment Attention Hackers are relying more and more heavily on the credentials they’ve stolen via phishing attacks to access sensitive systems and data. That’s one reason why breaches involving malware have decreased by over 40%.
According to Sonic Wall’s 2020 Cyber Threat report, in 2019, PDFs and Microsoft Office files were the delivery vehicles of choice for today’s cybercriminals. Why? Because these files are universally trusted in the modern workplace.  When it comes to targeted attacks, 65% of active groups relied on spear phishing as the primary infection vector. This is followed by watering hole websites (23%), trojanized software updates (5%), web server exploits (2%), and data storage devices (1%).  The data that’s compromised by breaches The top five “types” of data that are compromised in a phishing attack are: Credentials (passwords, usernames, pin numbers) Personal data (name, address, email address) Internal data (sales projections, product roadmaps)  Medical (treatment information, insurance claims) Bank (account numbers, credit card information) While instances of financially-motivated social engineering incidents have more than doubled since 2015, this isn’t a driver for targeted attacks. Just 6% of targeted attacks are motivated by financial incentives, while 96% are motivated by intelligence gathering. The other 10% are simply trying to cause chaos and disruption. While we’ve already discussed credential theft, malware, and financial motivations, the consequences and impact vary. According to one report: Nearly 60% of organizations lose data Nearly 50% of organizations  have credentials or accounts compromised Nearly 50% of organizations are infected with ransomware Nearly 40% of organizations are infected with malware Nearly 35% of organizations experience financial losses
The cost of a breach According to IBM's Cost of a Data Breach Report, the average cost per compromised record has steadily increased over the last three years. In 2019, the cost was $150. For some context, 5.2 million records were stolen in Marriott's most recent breach. That means the cost of the breach could amount to $780 million. But, the average breach costs organizations $3.92 million. This number will generally be higher in larger organizations and lower in smaller organizations.  Losses from business email compromise (BEC) have skyrocketed over the last year. The FBI's Internet Crime Report shows that in 2019, BEC scammers made nearly $1.8 billion. That's over half of the total losses reported by organizations. And, this number is only increasing. According to the Anti-Phishing Working Group's Phishing Activity Trends Report, the average wire-transfer loss from BEC attacks in the second quarter of 2020 was $80,183. This is up from $54,000 in the first quarter. This cost can be broken down into several different categories, including: lost hours from employees, remediation, incident response, damaged reputation, lost intellectual property, direct monetary losses, compliance fines, lost revenue, and legal fees. Costs associated with remediation generally account for the largest chunk of the total.  Importantly, these costs can be mitigated by cybersecurity policies, procedures, technology, and training. Artificial Intelligence platforms can save organizations $8.97 per record.  The most targeted industries While the Manufacturing industry saw the most breaches from social attacks (followed by Healthcare and then Professional Services), employees working in Wholesale Trade are the most frequently targeted by phishing attacks, with 1 in every 22 users being targeted by a phishing email last year.  According to a different data set, the most phished industries vary by company size. Nonetheless, it's clear Manufacturing and Healthcare are among the highest risk industries. The industries most at risk in companies with 1-249 employees are: Healthcare & Pharmaceuticals, Education, and Manufacturing. The industries most at risk in companies with 250-999 employees are: Construction, Healthcare & Pharmaceuticals, and Business Services. The industries most at risk in companies with 1,000+ employees are: Technology, Healthcare & Pharmaceuticals, and Manufacturing. The most impersonated brands Earlier this year, Check Point released its list of the most impersonated brands. These vary based on whether the attempt was via email or mobile, but the most impersonated brands overall for Q1 2020 were: Apple, Netflix, Yahoo, WhatsApp, PayPal, Chase, Facebook, Microsoft, eBay, and Amazon. The common factor between all of these consumer brands? They're trusted and frequently communicate with their customers via email. Whether we're asked to confirm credit card details, our home address, or our password, we often think nothing of it and willingly hand over this sensitive information. But, after the outbreak of COVID-19 at the end of Q1, hackers changed their tactics and, by the end of Q2, Zoom was the most impersonated brand in email attacks. Read on for more COVID-related phishing statistics.
Facts and figures related to COVID-19 scams Because hackers tend to take advantage of key calendar moments (like Tax Day or the 2020 Census) and times of general uncertainty, individuals and organizations saw a spike in COVID-19 phishing attacks starting in March. But, according to one report, COVID-19 related scams reached their peak in the third and fourth weeks of April. And, it looks like hackers were laser-focused on money. Incidents involving payment and invoice fraud increased by 112% between Q1 2020 and Q2 2020. It makes sense, then, that finance employees were among the most frequently targeted employees. In fact, attacks on finance employees increased by 87% while attacks on the C-Suite decreased by 37%.
What can individuals and organizations do to prevent being targeted by phishing attacks? While you can't stop hackers from sending phishing or spear phishing emails, you can make sure you (and your employees) are prepared if and when one is received. You should start with training. Educate employees about the key characteristics of a phishing email and remind them to be scrupulous and inspect emails, attachments, and links before taking any further action. Review the email address of senders and look out for impersonations of trusted brands or people. (Check out our blog CEO Fraud Email Attacks: How to Recognize & Block Emails that Impersonate Executives for more information.) Always inspect URLs in emails for legitimacy by hovering over them before clicking. Beware of URL redirects and pay attention to subtle differences in website content. Genuine brands and professionals generally won't ask you to reply divulging sensitive personal information. If you've been prompted to, investigate and contact the brand or person directly, rather than hitting reply. We've created several resources to help employees identify phishing attacks. You can download a shareable PDF with examples of phishing emails and tips at the bottom of this blog: Coronavirus and Cybersecurity: How to Stay Safe From Phishing Attacks. But, humans shouldn't be the last line of defense. That's why organizations need to invest in technology and other solutions to prevent successful phishing attacks. But, given the frequency of attacks year-on-year, it's clear that spam filters, antivirus software, and other legacy security solutions aren't enough. That's where Tessian comes in. By learning from historical email data, Tessian's machine learning algorithms can understand specific user relationships and the context behind each email. This allows Tessian Defender to not only detect, but also prevent a wide range of impersonations, ranging from more obvious, payload-based attacks to subtle, socially engineered ones. To learn more about how tools like Tessian Defender can prevent spear phishing attacks, speak to one of our experts and request a demo today.
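To illustrate the "hover before you click" advice above, here's a small Python sketch that pulls the links out of an HTML email body and flags any whose visible text points somewhere different from the real target. The HTML snippet and domain names are made up for the example; this is a teaching sketch, not a description of how Tessian Defender works.

```python
# Hedged sketch: a crude way to spot links whose visible text doesn't match
# where they actually point -- the same mismatch you'd catch by hovering.
# The HTML snippet and domains are invented for illustration.

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = ""
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = self._text.strip()
            # If the visible text looks like a URL, its host should match the real target.
            if shown.startswith("http") and urlparse(shown).hostname != urlparse(self._href).hostname:
                self.mismatches.append((shown, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<p>Reset your password at <a href="http://login.example-phish.test">https://login.microsoft.com</a></p>')
print(auditor.mismatches)
# -> [('https://login.microsoft.com', 'http://login.example-phish.test')]
```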
Spear Phishing
Deepfakes: What are They and Why are They a Threat?
By Ed Bishop
21 August 2020
According to a recent Tessian survey, 74% of IT leaders think deepfakes are a threat to their organizations’ and their employees’ security. Are they right to be worried? We take a look.
What is a deepfake?
A deepfake is a piece of synthetic media, typically a video or audio recording, that has been generated or manipulated using artificial intelligence so that a person appears to say or do something they never actually said or did.
How could deepfakes compromise security?
“Hacking humans” is a tried and tested method of attack used by cybercriminals to breach companies’ security, access valuable information and systems, and steal huge sums of money. In the world of cybersecurity, attempts to “hack humans” are known as social engineering attacks. In layman’s terms, social engineering is simply an attempt to trick people. These tactics and techniques have been around for years, and they are constantly evolving. For example, cybercriminals have realized that the “spray-and-pray” phishing campaigns they previously used were losing their efficacy. Why? Because companies have strengthened their defenses against these bulk attacks and people have begun to recognize the cues that signal a scam, such as poor grammar or typos. As a result, hackers have moved to crafting more sophisticated and targeted spear phishing attacks, impersonating senior executives, third party suppliers, or other trusted authorities in emails to deceive employees. Some even play the long game, building rapport with their targets over time before asking them to wire money or share credentials. Attackers will also directly spoof the sender’s domain and add company logos to their messages to make them look more legitimate; a simple way to check whether a domain is protected against this kind of direct spoofing is sketched at the end of this section. It’s working. Last year alone, scammers made nearly $1.8 billion through Business Email Compromise attacks. While spear phishing attacks take more time and effort to create, they are more effective and the ROI for an attacker is much higher. So, what does this have to do with deepfakes? Deepfakes – either as videos or audio recordings – are the next iteration of advanced impersonation techniques malicious actors can use to abuse trust and manipulate people into complying with their requests. These attacks can prove even more effective than targeted email attacks. As the saying goes, seeing – or hearing – is believing. If an employee believes that the person on the video call in front of them is the real deal – or that the person calling them is their CEO – then it’s unlikely that they would ignore the request. Why would they question it?
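One practical counter to direct domain spoofing is for domain owners to publish a DMARC policy, and for security teams to check whether their own domain and their suppliers’ domains do so. The snippet below is a minimal sketch of such a check; it assumes the third-party dnspython package is installed, and the domains shown are placeholders rather than real suppliers:

```python
# Illustrative only: check whether a domain publishes a DMARC policy, which
# makes direct spoofing of its email domain harder for attackers.
# Requires the third-party dnspython package (pip install dnspython).
from typing import Optional
import dns.resolver

def dmarc_record(domain: str) -> Optional[str]:
    """Return the domain's DMARC TXT record, or None if it publishes none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            return record
    return None

# Placeholder domains: swap in your own domain and key suppliers' domains.
for domain in ("example.com", "supplier.example.org"):
    record = dmarc_record(domain)
    if record is None:
        print(f"{domain}: no DMARC record found; direct spoofing is easier")
    else:
        print(f"{domain}: {record}")
```

A missing or permissive DMARC record doesn’t mean a domain is being spoofed, only that receiving mail servers have less to go on when deciding whether to reject mail that claims to come from it.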
Examples of deepfakes
In 2019, cybercriminals mimicked the voice of a CEO at a large energy firm to demand a fraudulent transfer of €220,000. And just last month, Twitter experienced a major security breach after employees were targeted by a “phone spear phishing” or “vishing” attack. Targeted employees received phone calls from hackers posing as IT staff, tricking them into sharing passwords for internal tools and systems. That attack relied on human callers rather than synthetic audio, but it shows how effective voice-based impersonation can be. While it’s still early days and, in some cases, the deepfake isn’t that convincing, there’s no denying that deepfake technology will continue to get better, faster, and cheaper in the near future. You just have to look at advanced algorithms like GPT-3 to see how quickly convincing, machine-generated impersonation could become a reality. Earlier this year, OpenAI released GPT-3, an advanced natural language processing (NLP) algorithm that uses deep learning to produce human-like text. It’s so convincing, in fact, that a student used the tool to produce a fake blog post that landed in the top spot on Hacker News, proving that AI-written content can pass as human-authored.
It’s easy to see why the security community is scared about the potential impact of deepfakes. Gone are the days of hackers drafting poorly written emails, full of typos and grammatical errors. Using AI, they can craft highly convincing messages that actually look like they’ve been written by the people they’re impersonating. This is something we will explore further at the Tessian HLS Summit on September 9th. Register here.
Who is most likely to be targeted by deepfake scams?
The truth is, anyone could be a target. No one group of people is more likely than another to be targeted by deepfakes. Within your organization, though, it is important to identify who might be most vulnerable to these types of advanced impersonation scams and make them aware of how – and on what channels – they could be targeted. For example, a less senior employee may have no idea what their CEO sounds like or even looks like. That makes them a prime target. It’s a similar story for new joiners. Hackers will do their homework, trawl through LinkedIn, and prey on new members of staff, knowing that it’s unlikely they would have met senior members of the organization. New joiners, therefore, would not recognize their voices if they receive a call from them. Attackers may also pretend to be someone from the IT team who’s carrying out a routine set-up exercise; this would be an opportune time to ask their targets to share account credentials. Because new joiners have no reference points to verify whether the person calling them is real – or whether the request they’re being asked to carry out is even legitimate – it’s likely that they’ll fall for the scam.
How easy are deepfakes to make?
Researchers have shown that you only need about one minute of audio to create an audio deepfake, while “talking head” style fake videos require around 40 minutes of input data. If your CEO has spoken at an industry conference, and there’s a recording of it online, hackers have the input data they need to train their algorithms and create a convincing deepfake. But crafting a deepfake can take hours or days, depending on the hacker’s skill level. For reference, Timothy Lee, a senior tech reporter at Ars Technica, was able to create his own deepfake in two weeks and spent just $552 doing it. Deepfakes, then, are a relatively simple but effective way to hack an organization. Or even an election.
How could deepfakes compromise election security?
There’s been a lot of talk about how deepfakes could be used to compromise the security of the 2020 U.S. presidential election. In fact, an overwhelming 76% of IT leaders believe deepfakes will be used as part of disinformation campaigns in the election. Fake messages about polling site disruptions, opening hours, and voting methods could affect turnout or prevent groups of people from voting. Worse still, disinformation and deepfake campaigns – whereby criminals swap out the messages delivered by trusted voices like government officials or journalists – threaten to cause even more chaos and confusion among voters. Elvis Chan, a Supervisory Special Agent assigned to the FBI who will be speaking at the Tessian HLS Summit in September, believes that people are right to be concerned. “Deepfakes may be able to elicit a range of responses which can compromise election security,” he said. “On one end of the spectrum, deepfakes may erode the American public’s confidence in election integrity. On the other end of the spectrum, deepfakes may promote violence or suppress turnout at polling locations.” So, how can you spot a deepfake, and how can you protect your people from them?
How to protect yourself and your organization from deepfakes
Poorly made video deepfakes are easy to spot – the lips are out of sync, the speaker isn’t blinking, or there may be a flicker on the screen. But as the technology improves over time and NLP algorithms become more advanced, it’s going to be more difficult for people to spot deepfakes and other advanced impersonation scams. Ironically, AI is one of the most powerful tools we have to combat AI-generated attacks. AI can automatically detect unusual patterns and anomalies – like impersonations – faster and more accurately than a human can. But we can’t just rely on technology. Education and awareness among people is also incredibly important. It’s therefore encouraging to see that 61% of IT leaders are already educating their employees on the threat of deepfakes and another 27% have plans to do so. To help you out, we’ve put together some of our top tips which you and your employees can follow if you are targeted by a deepfake or vishing attack. Pause and question whether it seems right for a colleague – senior or otherwise – to ask you to carry out the request. Verify the request with the person directly via another channel of communication, such as email or instant messaging; people will not mind if you ask. Ask the person requesting an action something only you and they would know, to verify their identity – for example, ask them what their partner’s name is or what the office dog is called. Report incidents to the IT team; with this knowledge, they will be able to put in place measures to prevent similar attacks in the future. Looking for more advice? At the Tessian HLS Summit on September 9th, the FBI’s Elvis Chan will discuss tactics such as reporting, content verification, and critical thinking training to help employees avoid deepfakes and other advanced impersonation. You can register for the event here.