Spear Phishing
What Does a Spear Phishing Email Look Like?
By Maddie Rosenthal
17 September 2020
88% of organizations around the world experienced spear phishing attempts in 2019.  And, while security leaders are working hard to train their employees to spot these advanced impersonation attacks, every email looks different. A hacker could be impersonating your CEO or a client. They could be asking for a wire transfer or a spreadsheet. And malware can be distributed via a link or an attachment. But it’s not all bad news. While – yes – each email is different, there are four commonalities in virtually all spear phishing emails. 
Download the infographic now and help your employees spot spear phishing attacks.
Before we go into more detail about these four red flags, let's get into the mind of a hacker.
What do hackers consider when creating a spear phishing attack?
Hackers prey on their target's psychological vulnerabilities. For example, immediately after the outbreak of COVID-19, we saw a spike in spear phishing attacks impersonating health organizations, insurance companies, government agencies, and remote-access tools. Why? Because people were stressed, anxious, and distracted, and therefore more likely to trust emails containing "helpful" information and take the bait. We explore this in detail in our report, The Psychology of Human Error. While people cite distraction as the top reason for falling for phishing attacks, the perceived legitimacy of the email was a close second.
Looking at real-world examples can help. Below are five articles that outline recent scams, including images of the emails.
COVID-19: Real-Life Examples of Opportunistic Phishing Emails
Everything You Need to Know About Tax Day Scams 2020
Spotting the Stimulus Check Scams
How to Spot and Avoid 2020 Census Scams
Look Out For Back to School Scams
Now that you know broadly what to look for and what makes you more vulnerable, let's take a deeper dive into the four things you should carefully inspect before replying to an email.
4 Things to Inspect Before Replying to an Email
The Display Name and Email Address
The first thing you should do is look at the Display Name and the email address. Do they match? Do you recognize the person and/or organization? Have you corresponded with them before? It's important to note that some impersonations are easier to spot than others. In the example below, the Display Name is vastly different from the sender's actual email address.
But hackers can also make slight changes to the domain that are nearly indiscernible unless the target is really looking for them. To make this easier to understand, we'll use FedEx as an example. In the chart below, you'll see five different types of impersonations. For more information about domain impersonations, read this article: Inside Email Impersonation: Why Domain Name Spoofs Could Be Your Biggest Risk
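For readers who want to see how this kind of check could be automated, here is a minimal sketch that compares a sender's domain against a short list of trusted domains and flags close-but-not-exact matches. The trusted list, the similarity threshold, and the function names are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch: flag sender domains that closely resemble, but do not exactly
# match, domains you trust. Trusted list and threshold are illustrative only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"fedex.com", "yourcompany.com"}  # hypothetical examples

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()

def check_sender_domain(address: str, threshold: float = 0.85) -> str:
    """Classify the domain of a sender address as trusted, lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    closest = max(TRUSTED_DOMAINS, key=lambda trusted: similarity(domain, trusted))
    if similarity(domain, closest) >= threshold:
        return f"lookalike of {closest} - treat with suspicion"
    return "unknown domain - verify before replying"

if __name__ == "__main__":
    for sender in ["billing@fedex.com", "billing@fed-ex.com", "tracking@fedx.com"]:
        print(sender, "->", check_sender_domain(sender))
```

A real deployment would also need to handle subdomains, internationalized (punycode) domains, and display-name spoofing, so treat this as a training aid rather than a substitute for dedicated tooling.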
The bottom line: take the time to look closely at the sender's information.
The Subject Line
As we've mentioned, hackers exploit the psychological vulnerabilities of their targets. It makes sense, then, that they'll try to create a sense of urgency in the subject line. The Top 5 subject lines used in spear phishing attacks are "Urgent," "Follow up," "Important," "Are you available?" and "Payment Status." And, when it comes to Business Email Compromise attacks, the Top 5 subject lines are "Urgent," "Request," "Important," "Payment," and "Attention." While – yes – these subject lines can certainly appear in legitimate emails, you should exercise caution when responding. Better safe than sorry!
Attachments and Links
Hackers will often direct their targets to follow a link or download an attachment. Links direct users to malicious websites, and attachments, once downloaded, install malware on the user's computer. These are called payloads. How can you spot one? While links often look inconspicuous (especially when they're hyperlinked to text), hovering over them reveals the full URL. Look out for strange characters, unfamiliar domains, and redirects.
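The same red flags you look for when hovering over a link can also be checked programmatically. Below is a rough sketch that pulls URLs out of an email body and reports traits worth a second look; the specific heuristics, thresholds, and example URL are assumptions for illustration, not a definitive detection rule.

```python
# Rough sketch of the manual "hover and inspect" checks described above:
# extract URLs from an email body and flag traits that warrant extra scrutiny.
import re
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def suspicious_traits(url: str) -> list[str]:
    """Return human-readable reasons a URL deserves a closer look."""
    traits = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if re.fullmatch(r"[\d.]+", host):
        traits.append("host is a raw IP address")
    if host.startswith("xn--") or ".xn--" in host:
        traits.append("punycode (possible lookalike) domain")
    if host.count(".") >= 4:
        traits.append("unusually deep subdomain chain")
    authority = url.split("://", 1)[-1].split("/", 1)[0]
    if "@" in authority:
        traits.append("'@' in the address - the real host is what follows it")
    if parsed.scheme != "https":
        traits.append("not served over HTTPS")
    return traits

def scan_email_body(body: str) -> None:
    """Print each URL found in the body along with any suspicious traits."""
    for url in URL_PATTERN.findall(body):
        reasons = suspicious_traits(url)
        print(url, "->", "; ".join(reasons) if reasons else "no obvious red flags")

if __name__ == "__main__":
    scan_email_body("Track your parcel: http://fedex.com.track-status.example@198.51.100.7/login")
```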
Unfortunately, you can't spot a malicious attachment as easily. Your best bet, then, is to avoid downloading any attachments unless you trust the source.
Note: Not all spear phishing emails contain a payload. Hackers can also request a wire transfer or simply build rapport with their target before making a request down the line.
The Body Copy
Just like the subject line will create a sense of urgency, the body copy of the email will generally motivate the target to act. Look out for language that suggests there will be a consequence if you don't act quickly. For example, a hacker may say that if a payment isn't made within 2 hours, you'll lose a customer. Or, if you don't confirm your email address within 24 hours, your account will be deactivated. While spear phishing emails are generally carefully crafted, spelling errors and typos can also be giveaways. Likewise, you may notice language you wouldn't expect from the alleged sender. For example, if an email appears to be sent from your CEO, but the copy doesn't match previous emails from him or her, this could suggest that the email is a spear phishing attack.
What to do if you think an email is suspicious
Now that you know what to look out for, what do you do if you think you've caught a phish?
If anything seems unusual, do not follow or click links or download attachments.
If the email appears to be from a government organization or another trusted institution, visit their website via Google or your preferred search engine, find a support number, and ask them to confirm whether the communication is valid.
If the email appears to come from someone you know and trust, like a colleague, reach out to the individual directly by phone, Slack, or a separate email thread. Rest assured, it's better to confirm and proceed confidently than the alternative.
Contact your line manager and/or IT team immediately and report the email.
But it's not fair to leave people as the last line of defense. Even the most tech-savvy people can fall for spear phishing attacks. Case in point: Last month, the SANS Institute – a global cybersecurity training and certifications organization – revealed that nearly 30,000 records of PII were compromised in a phishing attack that convinced an end-user to install a self-hiding, malicious Office 365 add-on. That means organizations should invest in technology that can detect and prevent these threats.
Tessian can help detect and prevent spear phishing attacks
Unlike spam filters and Secure Email Gateways (SEGs), which can stop bulk phishing attacks, Tessian Defender can detect and prevent a wide range of impersonations, from more obvious, payload-based attacks to subtle, socially engineered ones. How? Tessian's machine learning algorithms learn from historical email data to understand specific user relationships and the context behind each email. When an email lands in your inbox, Tessian Defender automatically analyzes millions of data points, including the email address, Display Name, subject line, and body copy. If anything seems "off", it'll be flagged.
To learn more about how tools like Tessian Defender can prevent spear phishing attacks, speak to one of our experts and request a demo today. 
Compliance Data Exfiltration DLP Spear Phishing
Compliance in the Legal Sector: Laws & How to Comply
16 September 2020
Thanks to digital transformation and increasingly strict data security obligations, law firms' business priorities are changing. Today, data protection, transparency, and privacy are top-of-mind. It makes sense.
Keep reading to find out:
Why the legal sector is bound to such strict compliance standards
Which regulations govern law firms
How cybersecurity can help ensure compliance
Interested in learning more about regional compliance standards or those that impact other industries? Check out our Compliance Hub to find articles, tips, guides, and more, or download our CEO's Guide to Data Protection and Compliance to learn more about how cybersecurity enables business and drives revenue.
Why is the legal sector bound to strict compliance standards?
Lawyers' hard drives, email accounts, and smartphones can contain anything from sensitive intellectual property and trade secrets to the Personally Identifiable Information (PII) of clients. Unfortunately, hackers and cybercriminals are all too aware of this. It's no surprise, then, that the legal sector is among the most targeted by social engineering attacks like spear phishing. Ransomware is a big problem, too. In fact, just a few months ago, Grubman Shire Meiselas & Sacks, a prominent media law firm, had its client information compromised. Those behind the attack later threatened to auction some of these files concerning major celebrities for as much as $1.5 million unless the firm paid a $42 million ransom.
But it's not just inbound attacks that law firms have to worry about. Because the legal sector is highly competitive, incidents involving Insider Threats are a concern, too. 96% of IT leaders working in the legal sector say they're worried that someone within the organization will cause a breach, either accidentally (via a misdirected email, for example) or maliciously.
The regulations governing law firms
When it comes to data protection and privacy, the legal sector is subject to a relatively strict regulatory framework, both under the law and under rules imposed by professional bodies. Depending on where a firm is based and what its practice areas are, it can be subject to several stringent laws and regulations. This is especially true for firms operating in major markets like the United States, the United Kingdom, and the European Union. In this article, we'll focus on some of the more general regulations and standards that all firms operating in these markets are expected to abide by.
General Data Protection Regulation (GDPR)
When the GDPR was introduced in 2018, it represented the largest change to data protection legislation in almost two decades. It also contains some of the most thorough compliance obligations for law firms and, indeed, any other entity that collects, stores, and processes data. The GDPR has been designed to guide organizations with a legitimate business interest on how personal data should be handled, and it gives regulators the power to impose large fines on firms that aren't compliant. You can read more about the largest GDPR fines (so far) in 2020 on our blog.
What is the GDPR's purpose?
The GDPR was introduced amid growing concerns surrounding the safety of personal data and the need to protect it from hackers, cybercrime, Insider Threats, unethical use, and a growing attack surface. Essentially, it gives citizens full control of their data, subject to some restrictions (for example, where data must be held by firms by law).
What is the scope of the GDPR?
The legislation regulates the use of 'personal data' and applies to all organizations located within the EU, as well as organizations outside the EU that offer goods or services to EU citizens. It also applies to organizations that hold data pertaining to EU citizens, regardless of their location.
What should law firms know about the GDPR?
The main part of the GDPR that law firms should be paying attention to is Article 5. This sets out the principles relating to the collection and processing of personal data.
The six key principles are that personal data:
Should be processed lawfully, fairly, and in a transparent manner;
Should only be collected for legitimate purposes;
Should be limited to what's necessary in relation to the purpose(s) for which it's processed;
Must be accurate and kept up to date, with any inaccurate data erased or rectified;
Should not be kept for longer than is necessary for its purposes; and
Should be held with adequate security against theft, loss, and/or damage.
The GDPR also gives your clients the right to ask for their data to be removed (the 'right to erasure') without the need for any outside authorization. Note: Data can only be kept contrary to a client's wishes to ensure compliance with other regulations.
What should a firm do in the event of a breach?
Before the GDPR, law firms could follow their own protocols when dealing with a data breach. Now, the GDPR requires firms to report data breaches, no matter how big or small, to the relevant regulatory authority within 72 hours. In the UK, for example, the regulatory authority is the Information Commissioner's Office (ICO).
The notification must:
Contain relevant details regarding the nature of the breach;
State the approximate number of people impacted; and
Include contact details for the firm's Data Protection Officer (DPO).
Clients who have had their personal data compromised must also be notified of the breach, the potential outcome, and any remediation "without undue delay".
It's important to note that breaches aren't always the result of malicious activity by an Insider Threat or a hacker outside the organization. Even accidents can result in breaches. In fact, misdirected emails (emails sent to the wrong person) have consistently been one of the most frequently reported incidents to the ICO. That's why it's essential that law firms (and other organizations) have safeguards in place to prevent mistakes like these from happening. Looking for a solution? Tessian Guardian prevents misdirected emails in some of the world's most prestigious law firms, including Dentons, Hill Dickinson, and Travers Smith.
What are the penalties for non-compliance?
Financial penalties imposed for GDPR violations can be harsh, and they often are; regulatory authorities are keen to highlight just how important the GDPR is and how seriously it should be taken. Fines for non-compliance can be as high as 4% of annual global turnover or €20 million, whichever is higher.
American Bar Association Rule 1.6
Rule 1.6 governs the confidentiality of client information. It states: "A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." Simply put, lawyers must make efforts to protect the data of their clients.
Two years ago, the American Bar Association issued new guidance in the form of Formal Opinion 483. This covers the importance of data protection and how firms should act when, not if, a security breach happens. This wording demonstrates that the ABA recognizes that breaches are part and parcel of operating in the modern world, and the statistics confirm this.
In essence, Formal Opinion 483 states that:
Lawyers have a duty of competence in implementing adequate security measures regarding technology.
Lawyers must reasonably and continuously assess their systems, operating procedures, and plans for mitigating a breach.
In the event of a suspected or confirmed breach, lawyers must take steps to stop the attack and prevent any further loss of data.
When a breach is detected and confirmed, lawyers must inform their clients in a timely manner and with enough information for clients to make informed decisions.
The bottom line: law firms must protect data with cybersecurity.
Solicitors Regulation Authority Code of Conduct
In the UK, solicitors are obliged under the Solicitors Regulation Authority (SRA) Code of Conduct to maintain effective systems and mitigate risks to client confidentiality and client money. Solicitors are also obliged to ensure systems comply more broadly with the SRA's other regulatory arrangements. The SRA says that, although being hacked or falling victim to a data breach is not necessarily a failure to meet these requirements, firms should take proportionate steps to protect themselves and their clients while retaining the advantages of advanced IT.
Where a report of cybercrime (note: crime, not a loss that takes place due to negligence) is received, the SRA takes a constructive approach in dealing with the firm, especially if the firm:
Is proactive and immediately notifies the SRA;
Has taken steps to inform the client and, as a minimum, make good any loss; and
Shows it is taking steps to improve its systems and processes to reduce the risk of a similar incident happening again.
That means that, under the SRA's Code of Conduct, law firms should take steps to prevent inbound attacks like spear phishing and set up policies and processes that ensure swift reporting. The good news: Tessian can help with both inbound attacks and Insider Threats, and has a history of successfully protecting law firms around the world from both.
How Tessian helps law firms stay compliant
Across all three of the regulations listed here, there's one commonality: law firms are responsible for ensuring that their IT systems and processes are robust and secure enough to keep data safe and mitigate the chance of a breach taking place. But that's easier said than done, especially in our dynamic and digitally connected world where threats are ever-evolving.
So, where should law firms start? Email. 90% of all data breaches start on email, and it's the threat vector IT leaders are most concerned about protecting. That's why Tessian is focused on protecting this channel. Across three solutions, Tessian detects and prevents threats using machine learning, which means it's constantly adapting without requiring maintenance from thinly stretched security teams.
Tessian Defender detects and prevents spear phishing
Tessian Guardian detects and prevents accidental data loss via misdirected email
Tessian Enforcer detects and prevents data exfiltration attempts from Insider Threats
Importantly, Tessian is non-disruptive. That way, partners, lawyers, and administrators can do their jobs without security getting in the way. Tessian stops threats, not business.
To learn more about how Tessian helps law firms like Dentons, Hill Dickinson, and Travers Smith protect data, maintain client trust, and satisfy compliance standards, talk to one of our experts.
Compliance Customer Stories Data Exfiltration DLP Human Layer Security Spear Phishing
18 Actionable Insights From Tessian Human Layer Security Summit
By Maddie Rosenthal
09 September 2020
In case you missed it, Tessian hosted its third (and final) Human Layer Security Summit of 2020 on September 9. This time, we welcomed over a dozen security and business leaders from the world's top institutions to our virtual stage, including:
Jeff Hancock from Stanford University
David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec
Merritt Baer, Principal Security Architect at AWS
Rachel Beard, Principal Security Technical Architect at Salesforce
Tim Fitzgerald, CISO at Arm
Sandeep Amar, CPO at MSCI
Martyn Booth, CISO at Euromoney
Kevin Storli, Global CTO and UK CISO at PwC
Elvis M. Chan, Supervisory Special Agent at the FBI
Nina Schick, Author of "Deep Fakes and the Infocalypse: What You Urgently Need to Know"
Joseph Blankenship, VP Research, Security & Risk at Forrester
Howard Schultz, Former CEO at Starbucks
While you can watch the full event on YouTube below, we've identified 18 valuable insights that security, IT, compliance, and business leaders should apply to their strategies as they round out this year and look forward to the next.
Here's what we learned at Tessian's most recent Human Layer Security Summit. Not sure what Human Layer Security is? Check out this guide, which covers everything you need to know about this new category of protection.
1. Cybersecurity is mission-critical
Security incidents – whether it's a ransomware attack, a brute force attack, or data leakage from an insider threat – have serious consequences. Not only can people lose their jobs, but businesses can lose customer trust, revenue, and momentum. While this may seem obvious to security leaders, it may not be so obvious to individual departments, teams, and stakeholders. But it's essential that this is communicated (and re-communicated). Why? Because a company that's breached cannot fulfill its mission. Keep reading for insights and advice around keeping your company secure, all directly from your peers in the security community.
2. Most breaches start with people
People control our most sensitive systems and data. It makes sense, then, that most data breaches start with people. But that doesn't mean employees are the weakest link. They're a business's strongest asset! So, it's all about empowering them to make better security decisions. That's why organizations have to adopt people-centric security solutions and strategies.
The good news is, security leaders don't face an uphill battle when it comes to helping employees understand their responsibility for cybersecurity…
3. Yes, employees are aware of their duty to protect data
Whether it's because of compliance standards, cybersecurity headlines in mainstream media, or a larger focus on privacy and protection at work, Martyn Booth, CISO at Euromoney, reminded us that most employees are actually well aware of the responsibility they bear when it comes to safeguarding data. This is great news for security leaders. It means the average employee will be more likely to abide by policies and procedures, will pay closer attention during awareness training, and will therefore contribute to a more positive security culture company-wide. Win-win.
4. But employees are more vulnerable to phishing scams outside of their normal office environment
While – yes – employees are more conscious of cybersecurity, the shift to remote working has also left them more vulnerable to attacks like phishing scams. "We have three 'places': home, work, and where we have fun. When we combine two places into one, it's difficult psychologically. When we're at home sitting at our coffee table, we don't have the same cues that remind us to think about security that we do in the office. This is a huge disruption," Jeff Hancock, Professor at Stanford University, explained.
Unfortunately, hackers are taking advantage of these psychological vulnerabilities. And, as David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec, pointed out, this isn't anything new. Cybercriminals have always been opportunistic in their attacks and therefore take advantage of chaos and emotional distress. To prevent successful opportunistic attacks, he recommends that you:
Reassess what the new baseline is for attacks
Educate employees on what threats look like today, given recent events
Identify which brands, organizations, people, and departments may be impersonated (and targeted) in relation to the pandemic
But it's not just inbound email attacks we need to be worried about.
5. They're more likely to make other mistakes that compromise cybersecurity, too
This change to our normal environment doesn't just affect our ability to spot phishing attacks. It also makes us more likely to make other mistakes that compromise cybersecurity. Across nearly every session, our guest speakers said they've seen more incidents involving human error, and that security leaders should expect this trend to continue. That's why training, policies, and technology are all essential components of any security strategy. More on this below.
6. Security awareness training has to be ongoing and ever-evolving
At our first Human Layer Security Summit back in March, Mark Logsdon, Head of Cyber Assurance and Oversight at Prudential, highlighted three key flaws in security awareness training: it's boring, it's often irrelevant, and it's expensive. What he said is still relevant six months on, and it's a bigger problem than ever, especially now that the perimeter has disappeared, security teams are short-handed, and individual employees are working at home and on their own devices. So, what can security leaders do? Kevin Storli, Global CTO and UK CISO at PwC, highlighted the importance of tailoring training to ensure it's always relevant.
That means that instead of just reminding employees about compliance standards and the importance of a strong password, we should also be focusing on educating employees about remote access, endpoints, and BYOD policies. But one training session isn’t enough to make security best practice really stick. These lessons have to be constantly reinforced through gamification, campaigns, and technology.  Tim Fitzgerald, CISO at Arm highlighted how Tessian’s in-the-moment warnings have helped his employees make the right decisions at the right time.  “Warnings help create that trigger in their brain. It makes them pause and gives them that extra breath before taking the next potentially unsafe step. This is especially important when they’re dealing with data or money. Tessian ensures they question what they’re doing,” he said.
7. You have to combine human policies with technical controls to ensure security
It's clear that technology and training are both valuable. That means your best bet is to combine the two. In discussion with Ed Bishop, Tessian Co-Founder and CTO, Merritt Baer, Principal Security Architect at AWS, and Rachel Beard, Principal Security Technical Architect at Salesforce, both highlighted how important it is for organizations to combine policies with technical controls. But security teams don't have to shoulder the burden alone. When using tools like Salesforce, for example, organizations can really lean on the vendor to understand how to use the platform securely. Whether it's 2FA, customized policies, or data encryption, many security features will be built in.
8. But… Zero Trust security models aren't always the answer
While – yes – it's up to security teams to ensure policies and controls are in place to safeguard data and systems, too many policies and controls could backfire. That means that "Zero Trust" security models aren't necessarily the best way to prevent breaches.
9. Security shouldn't distract people from their jobs
Security teams implement policies and procedures, introduce new software, and make training mandatory for good reason. But if security becomes a distraction for employees, they won't exercise best practice. The truth is, they just want to do the job they were hired to do! Top tip from the event: Whenever possible, make training and policies customized, succinct, and relevant to individual people or departments.
10. It also shouldn't prevent them from doing their jobs
This insight goes back to the idea that "Zero Trust" security models may not be the best way forward. Why? Because, as Rachel, Merritt, Sandeep, and Martyn all pointed out: if access controls or policies prevent an employee from doing their job, they'll find a workaround or a shortcut. But security should stop threats, not flow. That's why the most secure path should also be the path of least resistance. Security strategies should find a balance between the right controls and the right environment. This, of course, is a challenge, especially when it comes to rule-based solutions. "If-then" controls are blunt instruments. Solutions powered by machine learning, on the other hand, detect and prevent threats without getting in the way. You can learn more about the limitations of traditional data loss prevention solutions in our report, The State of Data Loss Prevention 2020.
11. Showing downtrending risks helps demonstrate the ROI of security solutions
Throughout the event, several speakers mentioned that preemptive controls are just as important as remediation. And it makes sense. Better to detect risky behavior before a security incident happens, especially given the time and resources required in the event of a data breach. But tracking risky behavior is also important. That way, security leaders can clearly demonstrate the ROI of security solutions. Martyn Booth, CISO at Euromoney, explained how he uses Tessian Human Layer Security Intelligence to monitor user behavior, influence safer behavior, and track risk over time. "We record how many alerts are sent out and how employees interact with those alerts. Do they follow the acceptable use policy or not? Then, through our escalation workflows that ingest Tessian data, we can escalate or reinforce. From that, we've seen incidents involving data exfiltration trend downwards over time. This shows a really clear risk reduction," he said.
12. Targeted attacks are becoming more difficult to spot and hackers are using more sophisticated techniques
As we mentioned earlier, hackers take advantage of psychological vulnerabilities. But social media has turbo-charged cybercrime, enabling cybercriminals to create more sophisticated attacks that can be directed at larger organizations. Yes, even those with strong cybersecurity. Our speakers mentioned several examples, including Garmin and Twitter. So, how do they do it? Research! LinkedIn, company websites, out-of-office messages, press releases, and news articles all provide valuable information that a hacker could use to craft a believable email. But there are ways to limit open-source recon. See tips from David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec, below.
13. Deepfakes are a serious concern
Speaking of social media, Elvis M. Chan, Supervisory Special Agent at the FBI, and Nina Schick, Author of "Deep Fakes and the Infocalypse: What You Urgently Need to Know", took a deep dive into deepfakes. And, according to Nina, "This is not an emerging threat. This threat is here. Now." While we tend to associate deepfakes with election security, it's important to note that this is a threat that affects businesses, too. In fact, Tim Fitzgerald, CISO at Arm, cited an incident in which his CEO was impersonated in a deepfake over WhatsApp. The ask? A request to move money. According to Tim, it was quite compelling. Unfortunately, deepfakes are surprisingly easy to make, and generation is outpacing detection. But clear policies and procedures around authenticating and approving requests can ensure these scams aren't successful. Not sure what a deepfake is? We cover everything you need to know in this article: Deepfakes: What Are They and Why Are They a Threat?
14. Supply chain attacks are, too
In conversation with Henry Trevelyan Thomas, Head of Customer Success at Tessian, Kevin Storli, Global CTO and UK CISO at PwC, discussed how organizations with large supply chains are especially vulnerable to advanced impersonation attacks like spear phishing. "It's one thing to ensure your own organization is secure. But, what about your supply chain? That's a big focus for us: ensuring our supply chain has adequate security controls," he said. Why is this so important? Because hackers know large organizations like PwC will have robust security strategies. So, they'll look for vulnerabilities elsewhere to gain a foothold. That's why strong cybersecurity can actually be a competitive differentiator and help businesses attract (and keep) more customers and clients.
15. People will generally make the right decisions if they're given the right information
88% of data breaches start with people. But that doesn't mean people are careless or malicious. They're just not security experts. That's why it's so important that security leaders provide their employees with the right information at the right time. Both Sandeep Amar, CPO at MSCI, and Tim Fitzgerald, CISO at Arm, talked about this in detail. It could be a guide on how to spot spear phishing attacks or – as we mentioned in point #6 – in-the-moment warnings that reinforce training. Check out their sessions for more insights.
16. Success comes down to people
While we've talked a lot about human error and psychological vulnerabilities, one thing was made clear throughout the Human Layer Security Summit: a business's success is completely reliant on its people. And we don't just mean in terms of security. Howard Schultz, Former CEO at Starbucks, offered some incredible advice around leadership which we can all heed, regardless of our role. In particular, he recommended:
Creating company values that really guide your organization
Ensuring every single person understands how their role is tied to the goals of the organization
Leading with truth, transparency, and humility
17. But people are dealing with a lot of anxiety right now
Whether you're a CEO or a CISO, you have to be empathetic towards your employees. And the fact is, people are dealing with a lot of anxiety right now. Nearly every speaker mentioned this. We're not just talking about the global pandemic. We're talking about racial and social inequality. Political unrest. New working environments. Bigger workloads. Mass lay-offs. Joseph Blankenship, VP Research, Security & Risk at Forrester, summed it up perfectly, saying, "We have an anxiety-ridden user base and an anxiety-ridden security base trying to work out how to secure these new environments. We call them users, but they're actually human beings and they're bringing all of that anxiety and stress to their work lives." That means we all have to be human first. And, with all of this in mind, it's clear that…
18. The role of the CISO has changed
Sure, CISOs are – as the name suggests – responsible for security. But, to maintain security company-wide, initiatives have to be perfectly aligned with business objectives, and every individual department, team, and person has to understand the role they play. Kevin Storli, Global CTO and UK CISO at PwC, touched on this in his session. "To be successful in implementing security change, you have to bring the larger organization along on the journey. How do you get them to believe in the mission? How do you communicate the criticality? How do you win the hearts and minds of the people? CISOs no longer live in the back office and address just tech aspects. It's about being a leader and using security to drive value." That's a tall order, and it means that CISOs have to wear many hats. They need to be technology experts while also being laser-focused on the larger business. And, to build a strong security culture, they have to borrow tactics from HR and marketing. The bottom line: the role of the CISO is more essential now than ever. It makes sense. Security is mission-critical, remember?
If you're looking for even more insights, make sure you watch the full event, which is available on-demand. You can also check out previous Human Layer Security Summits on YouTube.
Human Layer Security Spear Phishing
Why We Click: The Psychology Behind Phishing Scams and How to Avoid Being Hacked
07 September 2020
We all know the feeling: that awful sinking in your stomach when you realize you've clicked a link that you shouldn't have. Maybe it was late at night, or you were in a hurry. Maybe you received an alarming email about a problem with your paycheck or your taxes. Whatever the reason, you reacted quickly and clicked a suspicious link or gave away personal information, only to realize you made a dangerous mistake.
You're not alone. In a recent survey conducted by my company, Tessian, two-fifths (43%) of people admitted to making a mistake at work that had security repercussions, while nearly half (47%) of people working in the tech industry said they've clicked on a phishing email at work. In fact, most data breaches occur because of human error. Hackers are well aware of this and know exactly how to manipulate people into slipping up. That's why email scams — also known as phishing — are so successful.
Phishing has been a persistent problem during the COVID-19 pandemic. In April, Google alone saw more than 18 million daily email scams related to COVID-19 during a single week. Hackers are taking advantage of psychological factors like stress, social relationships and uncertainty that affect people's decision-making. Here's a look at some of the psychological factors that make people vulnerable and what to look out for in a scam.
Stress and Anxiety Take A Toll
Hackers thrive during times of uncertainty and unrest, and 2020 has been a heyday for them. In the last few months they've posed as government officials, urging recipients to return stimulus checks or unemployment benefits that were "overpaid" and threatening jail time. They've also impersonated health officials, prompting the World Health Organization to issue an alert warning people not to fall for scams implying association with the organization. Other COVID scams have lured users by offering antibody tests, PPE and medical equipment. Where chaos leads, hackers follow.
The stressful events of this year mean that cybersecurity is not top-of-mind for many of us. But foundational principles of human psychology also suggest that these same events can easily lead to poor or impulsive decisions online. More than half (52%) of those in our survey said that stress causes them to make more mistakes. The reason for this has to do with how stress impacts our brains, specifically our ability to weigh risk and reward. Studies have shown that anxiety can disrupt neurons in the brain's prefrontal cortex that help us make smart decisions, while stress can cause people to weigh the potential reward of a decision over possible risks, to the point where they even ignore negative information. When confronted with a potential scam, it's important to stop, take a breath, and weigh the potential risks and negative information, like suspicious language or misspelled words. Urgency can also add stress to an otherwise normal situation — and hackers know to take advantage of this. Look out for emails, texts or phone calls that demand money or personal information within a very short window.
Hacking Your Network
Some of the most common phishing scams impersonate someone in your "known" network, but your "unknown" network can also be manipulated. Your known network consists of your friends, family and colleagues — people you know and trust. Hackers exploit these relationships, betting they can sway someone to click on a link if they think it's coming from someone they know. These impersonation scams can be quite effective because they introduce emotion to the decision-making process. If a phone call or email claims your family member needs money for a lawyer or a medical procedure, fear or worry replace logic. Online scams promising money add greed into the equation, while phishing emails impersonating someone in authority or someone you admire, like a boss or colleague, cloud deductive reasoning with our desire to be liked. The difference between clicking a dangerous link or deleting the email can involve simply recognizing the emotions being triggered and taking a second look with logic in mind.
Meanwhile, the rise of social media and the abundance of personal information online has allowed hackers to impersonate your "unknown" network as well — people you might know. Hackers can easily find out where you work or where you went to school and use that information to send an email posing as a college alumnus to seek money or personal information. An easy way to check a suspicious email is by looking beyond the display name to examine the full email address of the sender by clicking the name. Scammers will often change, delete or add on a letter to an email address.
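For the technically inclined, that display-name check can be expressed in a few lines of code. This is only a sketch: the contact directory, names, and addresses are hypothetical examples, and a real mail client or security tool would do far more.

```python
# Minimal sketch of "look beyond the display name": parse the From header and
# compare the display name against the address it normally maps to.
# KNOWN_CONTACTS is a hypothetical directory, not a real one.
from email.utils import parseaddr

KNOWN_CONTACTS = {"Jane Smith": "jane.smith@yourcompany.com"}  # illustrative

def inspect_from_header(from_header: str) -> str:
    """Flag a mismatch between a familiar display name and the sending address."""
    display_name, address = parseaddr(from_header)
    expected = KNOWN_CONTACTS.get(display_name)
    if expected and address.lower() != expected:
        return (f"display name '{display_name}' usually maps to {expected}, "
                f"but this email came from {address} - verify before acting")
    return f"'{display_name}' <{address}> - no mismatch found by this simple check"

if __name__ == "__main__":
    print(inspect_from_header("Jane Smith <jane.smith@yourc0mpany-mail.net>"))
```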
The Impact of Distraction and New Surroundings
The rise of remote work brought on by COVID-19 can also impact people's psychological states and make them vulnerable to scams. Remote work can bring an overwhelming combination of video call fatigue, an "always on" mentality and household responsibilities like childcare. In fact, 57% of those surveyed in our report said they feel more distracted when working from home. Why is this a problem from a cybersecurity standpoint? Distraction can impair our decision-making abilities. Forty-seven percent of employees cited distraction as the top reason for falling for a phishing scam. While many people tend to have their guard up in a physical office, we tend to relax at home and may let our guard down, even if we're working. With an estimated 70% of employees working from home part- or full-time due to COVID-19, this creates an opportunity for hackers.
It's also more difficult to verify a legitimate request from an impersonation when you're not in the same office as a colleague. One common scam impersonates an HR staff member to request personal information from employees at home. When in doubt, don't click any links, download attachments or provide sensitive data like passwords, financial information or a Social Security number until you can confirm the request with a colleague directly.
Self-Care and Awareness
These scams will always be out there, but that doesn't mean people should constantly worry and keep their guard up — that would be exhausting. A simple combination of awareness and self-care when online can make a big difference. Once you know the tactics a hacker might use and the psychological factors like stress, emotions and distraction to look out for, it will be easier to spot an email scam without the anxiety. It's also important to take breaks and prioritize self-care when you're feeling stressed or tired. Step away from the computer when you can, and have a conversation with your manager about why the pressure to be "always on" when working remotely can have a negative impact psychologically and create cybersecurity risks. By understanding why people fall for these scams, we can start to find ways to easily identify and avoid them.
This article was originally published in Fast Company and was co-authored by Tim Sadler, CEO of Tessian, and Jeff Hancock, Harry and Norman Chandler Professor of Communication at Stanford University.
Spear Phishing
How to Avoid Falling Victim to Voting Scams in the 2020 U.S. Election
By Laura Brooks
28 August 2020
Scammers thrive in times of crisis and confusion. This is perhaps why the controversy surrounding mail-in voting could prove to be another golden opportunity for cybercriminals. Throughout 2020, we've seen a surge of cybercriminals capitalizing on key and newsworthy moments in the COVID-19 crisis, creating scams that take advantage of stimulus checks, the Paycheck Protection Program, and students heading back to school. Knowing that people are seeking answers during uncertain times, hackers craft scams – usually in the form of phishing emails – that appear to provide the information people are looking for. Instead, victims are lured to fake websites that are designed to steal their valuable personal or financial information.
Hackers are creating websites related to mail-in voting
Given the uncertainties surrounding election security and voters' safety during the pandemic, fueled further by President Trump's recent attacks against the US Postal Service, it's highly likely that scammers could set their sights on creating scams associated with mail-in voting. In fact, our researchers discovered that around 75 domains spoofing websites related to mail-in voting were registered between July 2 and August 6. Some of these websites tout information about voting by mail, such as mymailinballot.com and mailinyourvote.com. Others encourage voters to request or track their ballot, such as requestmailinballot.com and myballotracking.com. Anyone accessing these websites should be wary, though. Keep reading to find out why.
What risks do these spoofed domains pose?
To understand the risks these spoofed domains pose, consider why hackers create them. They're after sensitive information like your name, address, and phone number, as well as financial information like your credit card details. For example, if a malicious website claims to offer visitors a way to register to vote or cast their vote – which several of these newly created domains did – there will be a form that collects personally identifiable information (PII). Likewise, if a malicious website is asking for donations, visitors will be asked to enter credit card details. If any of this information falls into the wrong hands, it could be sold on the dark web, resulting in identity theft or payment card fraud. Of course, not every domain that our researchers discovered can be deemed malicious. But it's important you stay vigilant and never provide personal information unless you trust the domain.
So, how can voters avoid falling for mail-in voting scams? Here are some tips to help you avoid falling victim to voting scams in the upcoming election:
1. Find answers online, but don't trust everything you read
It's perfectly reasonable to look online for answers about how to vote. There's a lot of useful information about ordering absentee ballots and locating local secure ballot boxes. However, be aware that there is also a lot of misinformation online, particularly around this year's election. Source information from trusted websites like https://www.usa.gov/how-to-vote.
2. Think twice before sharing personal details
Before entering any personal or financial details, always check the URL of the domain and verify the legitimacy of the service by calling them directly. Question domains or pop-ups that request personal information from you, especially as it relates to your voting preferences.
3. Never share direct deposit details, credit card information, or your Social Security number on an unfamiliar website
This information should be kept private and confidential. If a website asks you to share details like this, walk away.
Keep up with our blog for more insights, analysis, and tips for staying safe online.
Compliance Data Exfiltration DLP Spear Phishing
August Cybersecurity News Roundup
By Maddie Rosenthal
28 August 2020
The end of the month means another roundup of the top cybersecurity headlines. Keep reading for a summary of the top 12 stories from August. Bonus: We've included links to extra resources in case anything piques your interest and you want to take a deeper dive. Did we miss anything? Email [email protected]
Russian charged with trying to recruit Tesla employee to plant malware
Earlier this week, news broke that the FBI had arrested Egor Igorevich Kriuchkov – a 27-year-old Russian citizen – for trying to recruit a Tesla employee to plant malware inside the Gigafactory Nevada. The plan? Insert malware into the electric car maker's system and trigger a distributed denial of service (DDoS) attack, essentially giving the hackers free rein over the system. But instead of breaching the network, the Russian-speaking employee turned down Egor's million-dollar offer (to be paid in cash or bitcoin) and worked closely with the FBI to thwart the attack.
Feds warn election officials of potentially malicious 'typosquatting' websites
Stories of election fraud have dominated headlines over the last several months. The latest story involves suspicious "typosquatting" websites that may be used for credential harvesting, phishing, and influence operations.
While the FBI hasn't yet identified any malicious incidents, they have found dozens of illegitimate websites that could be used to interfere with the 2020 vote. To stay safe, make sure you double-check any URLs you've typed in, and never input any personal information unless you trust the domain.
Former Google engineer sent to prison for stealing robocar secrets
An Insider Threat at Google who exfiltrated 14,000 files five years ago has been sentenced to 18 months in prison. The sentencing came four months after Anthony Levandowski pleaded guilty to stealing trade secrets, including diagrams and drawings related to simulations, radar technology, source code snippets, PDFs marked as confidential, and videos of test drives. He's also been ordered to pay more than $850,000. Looking for more information about the original incident? Check out this article: Insider Threats: Types and Real-World Examples. All the information you need is under Example #4.
For six months, security researchers have secretly distributed an Emotet vaccine across the world
Emotet – one of today's most skilled malware groups – has caused security and IT leaders headaches since 2014. But earlier this year, James Quinn, a malware analyst working for Binary Defense, discovered a bug in Emotet's code and was able to put together a PowerShell script that exploited the registry key mechanism to crash the malware. According to ZDNet, he essentially created "both an Emotet vaccine and killswitch at the same time." Working with Team CYMRU, Binary Defense handed over the "vaccine" to national Computer Emergency Response Teams (CERTs), which then spread it around the world to companies in their respective jurisdictions.
Online business fraud down, consumer fraud up
New research from TransUnion shows that between March and July, hackers started to change their tactics. Instead of targeting businesses, they're now shifting their focus to consumers. Key findings include:
Consumer fraud has increased 10%, while business fraud has declined 9% since the beginning of the pandemic
Nearly one-third of consumers have been targeted by COVID-19-related fraud
Phishing is the most common method used in fraud schemes
You can read the full report here.
FBI and CISA issue warning over increase in vishing attacks
A joint warning from the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) was released in mid-August, cautioning the public that they've seen a spike in voice phishing attacks (known as vishing). They've attributed the increase in attacks to the shift to remote working. Why? Because people are no longer able to verify requests in person. Not sure what vishing is? Check out this article, which outlines how hackers are able to pull off these attacks, how you can spot them, and what to do if you're targeted.
TikTok sues U.S. government over Trump ban
In last month's cybersecurity roundup, we outlined why India had banned TikTok and why America might be next. 30 days later, we have a few updates. On August 3, President Trump said TikTok would be banned in the U.S. unless it was bought by Microsoft (or another company) before September 15. Three days later, Trump signed an executive order barring US businesses from making transactions with TikTok's parent company, ByteDance. The order will go into effect 45 days after it was signed. A few weeks later, ByteDance filed a lawsuit against the U.S. government,
arguing the company was denied due process and that it isn't actually a national security threat. In the meantime, TikTok is continuing its sales conversations with Microsoft and Oracle. Stay tuned next month for an update on what happens in the next 30 days.
A Stanford deception expert and cybersecurity CEO explain why people fall for online scams
According to a new research report, The Psychology of Human Error, nearly half of employees have made a mistake at work that had security repercussions. But why? Employees say stress, distraction, and fatigue are part of the problem and drive them to make more mistakes at work, including sending emails to the wrong people and clicking on phishing emails. And, as you might expect, the sudden transition to remote work has only added fuel to the fire. 57% of employees say they're even more distracted when working from home. To avoid making costly mistakes, Jeff Hancock, a professor at Stanford, recommends taking breaks and prioritizing self-care. Of course, cybersecurity solutions will help prevent employees from causing a breach, too.
University of Utah pays $457,000 to ransomware gang
On August 21, the University of Utah posted a statement on its website saying that it was the victim of a ransomware attack and, to avoid hackers leaking sensitive student information, it paid $457,000. According to the statement, the hackers only managed to encrypt 0.02% of the data stored on the university's servers. While the university hasn't revealed which ransomware gang was behind the attack, it has confirmed that the attack took place on July 19, that the College of Social and Behavioral Sciences was the target, and that the university's cyber insurance policy paid for part of the ransom.
Verizon analyzed the COVID-19 data breach landscape
This month, Verizon updated its annual Data Breach Investigations Report to include new facts and figures related to COVID-19. Here are some of the trends to look out for, based on their findings:
Breaches caused by human error will increase. Why? Many organizations are operating with fewer staff than before due to either illness or layoffs. Some staff may also have limitations because of new remote working set-ups. When you combine that with larger workloads and more distractions, we're bound to see more mistakes.
Organizations should be especially wary of stolen-credential-related hacking, especially as many IT and security teams are working to lock down and maintain remote access.
Ransomware attacks will increase in the coming months.
SANS Institute Phishing Attack Leads to Theft of 28,000 Records
The SANS Institute – a global cybersecurity training and certifications organization – revealed that nearly 30,000 records of PII were compromised in a phishing attack that convinced an end-user to install a self-hiding, malicious Office 365 add-on. While no passwords or financial information were compromised, and all the affected individuals have been notified, the breach goes to show that anyone – even cybersecurity experts – can fall for phishing scams.
The cybersecurity skills shortage is getting worse
In March, Tessian released its Opportunity in Cybersecurity Report, which set out to answer one (not-so-simple) question: Why are there over 4 million unfilled positions in cybersecurity, and why is the workforce twice as likely to be male as female?
The answer is multi-faceted and has a lot to do with a lack of knowledge of the industry and inaccurate perceptions of what it means to work in cybersecurity.  The bad news is, it looks like the problem is getting worse. A recent report, The Life and Times of Cybersecurity Professionals 2020, shows that only 7% of cybersecurity professionals say their organization has improved its position relative to the cybersecurity skills shortage in the last several years. Another 58% say their organizations should be doing more to bridge the gap. What do you think will help encourage more people to join the industry?  That’s all for this month! Keep up with us on social media and check our blog for more updates.
Spear Phishing
Deepfakes: What are They and Why are They a Threat?
By Ed Bishop
21 August 2020
According to a recent Tessian survey, 74% of IT leaders think deepfakes are a threat to their organizations' and their employees' security. Are they right to be worried? We take a look.
What is a deepfake?
A deepfake is a piece of synthetic media (a video, audio clip, or image) generated with deep learning so that a real person appears to say or do something they never actually said or did. With enough sample footage or audio of the person being impersonated, the results can be convincing enough to fool colleagues, customers, and even family members.
How could deepfakes compromise security?
"Hacking humans" is a tried and tested method of attack used by cybercriminals to breach companies' security, access valuable information and systems, and steal huge sums of money. In the world of cybersecurity, attempts to "hack humans" are known as social engineering attacks. In layman's terms, social engineering is simply an attempt to trick people. These tactics and techniques have been around for years, and they are constantly evolving.
For example, cybercriminals have realized that the "spray-and-pray" phishing campaigns they previously used were losing their efficacy. Why? Because companies have strengthened their defenses against these bulk attacks, and people have begun to recognize the cues that signaled a scam, such as poor grammar or typos. As a result, hackers have moved to crafting more sophisticated and targeted spear phishing attacks, impersonating senior executives, third-party suppliers, or other trusted authorities in emails to deceive employees. Some even play the long game, building rapport with their targets over time before asking them to wire money or share credentials. Attackers will also directly spoof the sender's domain and add company logos to their messages to make them look more legitimate. It's working. Last year alone, scammers made nearly $1.8 billion through Business Email Compromise attacks. While spear phishing attacks take more time and effort to create, they are more effective, and the ROI for an attacker is much higher.
So, what does this have to do with deepfakes? Deepfakes – either as videos or audio recordings – are the next iteration of advanced impersonation techniques malicious actors can use to abuse trust and manipulate people into complying with their requests. These attacks have proven even more effective than targeted email attacks. As the saying goes, seeing – or hearing – is believing. If an employee believes that the person on the video call in front of them is the real deal – or that the person calling them is their CEO – then it's unlikely that they would ignore the request. Why would they question it?
Examples of deepfakes In 2019, cybercriminals mimicked the voice of a CEO at a large energy firm, demanding a fraudulent transfer of €220,000. And, just last month, Twitter experienced a major security breach after employees were targeted by a “phone spear phishing” or “vishing” attack. Targeted employees received phone calls from hackers posing as IT staff, tricking them into sharing passwords for internal tools and systems. While it’s still early days and, in some cases, the deepfake isn’t that convincing, there’s no denying that deepfake technology will continue to get better, faster, and cheaper in the near future. You just have to look at advanced algorithms like GPT-3 to see how quickly it could become a reality. Earlier this year, OpenAI released GPT-3 – an advanced natural language processing (NLP) algorithm that uses deep learning to produce human-like text. It’s so convincing, in fact, that a student used the tool to produce a fake blog post that landed in the top spot on Hacker News – proving that AI-written content can pass as human-authored.
It’s easy to see why the security community is concerned about the potential impact of deepfakes. Gone are the days of hackers drafting poorly written emails, full of typos and grammatical errors. Using AI, they can craft highly convincing messages that actually look like they’ve been written by the people they’re impersonating. This is something we will explore further at the Tessian HLS Summit on September 9th. Register here. Who is most likely to be targeted by deepfake scams? The truth is, anyone could be a target. There is no one group of people more likely than another to be targeted by deepfakes. Within your organization, though, it is important to identify who might be most vulnerable to these types of advanced impersonation scams and make them aware of how – and on what channels – they could be targeted. For example, a less senior employee may have no idea what their CEO sounds like or even looks like. That makes them a prime target. It’s a similar story for new joiners. Hackers will do their homework, trawl through LinkedIn, and prey on new members of staff, knowing that it’s unlikely they would have met senior members of the organization. New joiners, therefore, would not recognize their voices if they received a call from them. Attackers may also pretend to be someone from the IT team who’s carrying out a routine set-up exercise. This would be an opportune time to ask their targets to share account credentials. As new joiners have no reference points to verify whether the person calling them is real or fake – or whether the request they’re being asked to carry out is even legitimate – it’s likely that they’ll fall for the scam. 
How easy are deepfakes to make? Researchers have shown that you only need about one minute of audio to create an audio deepfake, while “talking head” style fake videos require around 40 minutes of input data. If your CEO has spoken at an industry conference, and there’s a recording of it online, hackers have the input data they need to train their algorithms and create a convincing deepfake. But crafting a deepfake can take hours or days, depending on the hacker’s skill level. For reference, Timothy Lee, a senior tech reporter at Ars Technica, was able to create his own deepfake in two weeks, spending just $552 to do it. Deepfakes, then, are a relatively simple but effective way to hack an organization. Or even an election.
How could deepfakes compromise election security? There’s been a lot of talk about how deepfakes could be used to compromise the security of the 2020 U.S. presidential election. In fact, an overwhelming 76% of IT leaders believe deepfakes will be used as part of disinformation campaigns in the election. Fake messages about polling site disruptions, opening hours, and voting methods could affect turnout or prevent groups of people from voting. Worse still, disinformation and deepfake campaigns – whereby criminals swap out the messages delivered by trusted voices like government officials or journalists – threaten to cause even more chaos and confusion among voters. Elvis Chan, a Supervisory Special Agent assigned to the FBI who will be speaking at the Tessian HLS Summit in September, believes that people are right to be concerned. “Deepfakes may be able to elicit a range of responses which can compromise election security,” he said. “On one end of the spectrum, deepfakes may erode the American public’s confidence in election integrity. On the other end of the spectrum, deepfakes may promote violence or suppress turnout at polling locations.” So, how can you spot a deepfake and how can you protect your people from them? 
How to protect yourself and your organization from deepfakes Poorly-made video deepfakes are easy to spot – the lips are out of sync, the speaker isn’t blinking, or there may be a flicker on the screen. But, as the technology improves over time and NLP algorithms become more advanced, it’s going to be more difficult for people to spot deepfakes and other advanced impersonation scams.  Ironically, AI is one of the most powerful tools we have to combat AI-generated attacks.  AI can understand patterns and automatically detect unusual patterns and anomalies – like impersonations – faster and more accurately than a human can.  But, we can’t just rely on technology. Education and awareness amongst people is also incredibly important. It’s therefore encouraging to see that 61% of IT leaders are already educating their employees on the threat of deepfakes and another 27% have plans to do so.  To help you out, we’ve put together some of our top tips which you and your employees can follow if you are being targeted by a deepfake or vishing attack:  Pause and question whether it seems right for a colleague – senior or otherwise – to ask you to carry out the request. Verify the request with the person directly via another channel of communication, such as email or instant messaging. People will not mind if you ask.  Ask the person requesting an action something only you and they would know, to verify their identity. For example, ask them what their partner’s name is or what the office dog is called.  Report incidents to the IT team. With this knowledge, they will be able to put in place measures to prevent similar attacks in the future. Looking for more advice? At the Tessian HLS Summit on September 9th, the FBI’s Elvis Chan will discuss tactics such as reporting, content verification, and critical thinking training in order to help employees avoid deepfakes or advanced impersonation.  You can register for the event here. 
Human Layer Security Spear Phishing
Is Your Office 365 Email Secure?
By Subha Rama
20 August 2020
In July this year, Microsoft took down a massive fraud campaign that used knock-off domains and malicious applications to scam its customers in 62 countries around the world. But this wasn’t the first time a successful phishing attack was carried out against Office 365 (O365) customers. In December 2019, the same hackers gained unauthorized access to hundreds of Microsoft customers’ business email accounts. According to Microsoft, this scheme “enabled unauthorized access without explicitly requiring the victims to directly give up their login credentials at a fake website…as they would in a more traditional phishing campaign.” Why are O365 accounts so vulnerable to attacks? Exchange Online/Outlook – the cloud email application for O365 users – has always been a breeding ground for phishing, malware, and very targeted data breaches. Though Microsoft has been ramping up its O365 email security features with Advanced Threat Protection (ATP) as an additional layer to Exchange Online Protection (EOP), both tools have failed to meet expectations because of their inability to stop newer and more innovative social engineering attacks, business email compromise (BEC), and impersonations. One of the biggest challenges with ATP in particular is its time-of-click approach, which requires the user to click on URLs within emails to activate analysis and remediation. Is O365 ATP enough to protect my email? We believe that O365’s native security controls do protect users against bulk phishing scams, spam, malware, and domain spoofing. These tools are great when it comes to stopping broad-based, high-volume, low-effort attacks – they offer a baseline of protection. For example, you don’t need to add signature-based malware protection if you have EOP/ATP for your email, as these are proven to be quite efficient against such attacks. These tools employ the same approach used by network firewalls and email gateways – they rely on a repository of millions of signatures to identify ‘known’ malware. But this is a big problem, because the threat landscape has changed in the last several years. Email attacks have mutated to become more sophisticated and targeted, and hackers exploit user behavior to launch surgical and highly damaging campaigns on people and organizations. Attackers use automation to make small, random modifications to existing malware signatures and use transformation techniques to bypass these native O365 security tools. Unsuspecting – and often untrained – users fall prey to socially engineered attacks that mimic O365 protocols, domains, notifications, and more. See below for a convincing example.
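To make the signature-matching limitation concrete, here is a minimal, hypothetical Python sketch. The payload bytes and the hash repository are invented for illustration, and real engines are more sophisticated, but the failure mode is the same: an exact hash lookup catches a known sample, while a one-byte modification produces a completely different hash and sails past the check.

```python
import hashlib

# Hypothetical repository of "known bad" payload hashes (illustrative only)
known_malware_hashes = set()

def is_known_malware(payload: bytes) -> bool:
    """Exact signature match: flags only byte-for-byte identical samples."""
    return hashlib.sha256(payload).hexdigest() in known_malware_hashes

original = b"malicious macro payload v1"
modified = b"malicious macro payload v1 "  # attacker appends a single byte

known_malware_hashes.add(hashlib.sha256(original).hexdigest())  # signature is published

print(is_known_malware(original))  # True:  exact match against the repository
print(is_known_malware(modified))  # False: a trivial modification evades the signature
```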
It is because such loopholes exist in O365 email security that Microsoft continues to be one of the most breached brands in the world. What are the consequences of a compromised account? There is a lot at stake if an account is compromised. With ~180 million active O365 email accounts, organizations could find themselves at risk of data loss or a breach, which means revenue loss, damaged reputation, customer churn, disrupted productivity, regulatory fines, and penalties for non-compliance. This means they need to quickly move beyond relying on largely rule- and reputation-based O365 email filters to more dynamic ways of detecting and mitigating email-originated risks. Enter machine learning and behavioral analysis. There has been a surge in the availability of platforms that use machine learning algorithms. Why? Because these platforms detect and mitigate threats in ways other solutions can’t and help enterprises improve their overall security posture. Instead of relying on static rules to predict human behavior, solutions powered by machine learning actually adapt and evolve in tandem with relationships and circumstances. Machine learning algorithms “study” the email behavior of users, learn from it, and – finally – draw conclusions from it. But not all ML platforms are created equal. There are varying levels of complexity (going beyond IP addresses and metadata to natural language processing); algorithms learn to detect behavioral anomalies at different speeds (static vs. in real time); and they can achieve different scales (the number of data points they can simultaneously study and analyze). How does Tessian prevent threats that O365 security controls miss? Tessian’s Human Layer Security platform is designed to offset the rule-based and sandbox approaches of O365 ATP to detect and stop newer and previously unknown attacks from external sources, domain, brand, and service impersonations, and data exfiltration by internal actors. Learn more about why rule-based approaches to spear phishing attacks fail. By dynamically analyzing current and historical data, communication styles, language patterns, and employee project relationships both within and outside the organization, Tessian generates contextual employee relationship graphs to establish a baseline of normal behavior. By doing this, Tessian turns both your employees and your email data into the organization’s biggest defenses against inbound and outbound email threats. Conventional tools focus on just securing the machine layer – the network, applications, and devices. By uniquely focusing on the human layer, Tessian can make clear distinctions between legitimate and malicious email interactions and warn users in real time, reinforcing training and policies and promoting safer behavior. How can O365 ATP and Tessian work together? Often, customers ask us which approach is better: the conventional, rule-based approach of the O365 native tools, or Tessian’s approach, powered by machine learning? The answer is, each has its unique place in building a comprehensive email security strategy for O365. But no organization that deals with sensitive, critical, and personal data can afford to overlook the benefits of an approach based on machine learning and behavioral analysis. A layered approach that leverages the tools offered by O365 for high-volume attacks, reinforced with next-gen tools for detecting the unknown and evasive ones, would be your best bet.  
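As a rough, hedged illustration of what “learning a baseline of normal behavior” can mean in practice, the Python sketch below builds a map of which external domains each employee has historically corresponded with and flags mail from a never-before-seen domain that closely resembles a trusted one. It is a deliberately simplified toy, not Tessian’s actual algorithm; the addresses, domains, and similarity threshold are invented for the example.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Toy historical email log: (recipient, external sender domain) pairs
historical_mail = [
    ("alice@acme.example", "supplier-corp.com"),
    ("alice@acme.example", "supplier-corp.com"),
    ("bob@acme.example", "lawfirm.example"),
]

# Baseline: which external domains each employee normally hears from
baseline = defaultdict(set)
for recipient, domain in historical_mail:
    baseline[recipient].add(domain)

def similarity(a: str, b: str) -> float:
    """Crude string similarity; real systems weigh many more behavioral signals."""
    return SequenceMatcher(None, a, b).ratio()

def score_inbound(recipient: str, sender_domain: str, threshold: float = 0.85) -> str:
    """Flag mail from unseen domains that look suspiciously like trusted ones."""
    known = baseline[recipient]
    if sender_domain in known:
        return "normal: known correspondent"
    for trusted in known:
        if similarity(sender_domain, trusted) >= threshold:
            return f"suspicious: '{sender_domain}' resembles trusted '{trusted}'"
    return "new correspondent: apply additional checks"

print(score_inbound("alice@acme.example", "supplier-corp.com"))  # known correspondent
print(score_inbound("alice@acme.example", "suppl1er-corp.com"))  # lookalike domain flagged
```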
A very short implementation time coupled with the algorithm’s ability to ‘learn’ from historical email data over the last year – all within 24 hours of deployment – means Tessian could give O365 users just the edge they need to combat modern day email threats. 
Spear Phishing
What is Social Engineering? 4 Types of Attacks
13 August 2020
You may have heard of social engineering, but do you know what it is?
Social engineering basics The key difference between social engineering attacks and brute force attacks is the techniques that hackers employ. Instead of trying to exploit weaknesses in security software, a social engineer will use coercive language, a sense of urgency, and even details about the target’s personal or work life to influence them to hand over information or access to other accounts or systems. 
How does social engineering work? There is no set (or foolproof) ‘method’ that cybercriminals use to carry out social engineering attacks. But, the goal is generally the same: they want to take advantage of people in order to obtain personal information or get access to other systems or accounts. Why? Personal data and intellectual property are incredibly valuable. While you can read more about the “types” of data that are compromised in this blog: Phishing Statistics 2020, you can learn more about the different types of social engineering attacks below. Types of social engineering attacks When we say “social engineering”, we’re talking about the exploitation of human psychology. But, hackers can trick people in a few different ways and are always working hard to evade security solutions. Phishing and spear phishing scams Phishing is one of the most common types of social engineering attacks and is generally delivered via email. But, more and more often, we’re seeing attacks delivered via SMS, phone, and even social media. Here are three hallmarks of phishing attacks: An attempt to obtain personal information such as names, dates of birth, addresses, passwords, etc. Wording that evokes fear or a sense of urgency, or that makes threats, in an attempt to persuade the recipient to respond quickly. The use of shortened or misleading domains, links, buttons, or attachments. Spear phishing attacks are similar, but are much more targeted. Whereas phishing attacks are sent in bulk, spear phishing attacks are sent to a single person or small group of people and require a lot more forethought. For example, hackers will research targets on LinkedIn to find out who they work with and who they report to. This way, they can craft a more believable email. Want to learn more? We’ve covered phishing and spear phishing in more detail in these blogs: How to Identify and Prevent Phishing Attacks How to Catch a Phish: A Closer Look at Email Impersonation Phishing vs. Spear Phishing: Differences and Defense Strategies COVID-19: Real-Life Examples of Opportunistic Phishing Emails Pretexting While pretexting and phishing are categorized separately, they actually go hand-in-hand. In fact, pretexting is a tactic used in many phishing, spear phishing, vishing, and smishing attacks. Here’s how it works: hackers create a strong, fabricated rapport with the victim. After establishing legitimacy and building trust, the hacker will either blatantly ask for or trick the victim into handing over personal information. While there are countless examples we could give, ranging from BEC scams to CEO Fraud, we’ll use a consumer-focused example. Imagine you receive a call from someone who says they work at your bank. The person on the other end of the phone (the scammer) tells you they’ve seen unusual transactions on your account and that, in order to review the transactions and pause activity, you need to confirm your full name, address, and credit or debit card number. If you do share the information, the scammer will have everything they need to access your bank account and even carry out secondary attacks with the information they’ve learned. Together with phishing, pretexting represents 98% of social engineering incidents and 93% of breaches, according to Verizon’s 2018 Data Breach Investigations Report. Physical and virtual baiting Like all other types of social engineering, baiting takes advantage of human nature. In particular: curiosity. 
Scammers will lure the target in (examples below) before stealing their personal data, usually by infecting their computer with some type of malware. The most common type of baiting attack involves the use of physical media – like a USB drive – to disperse malware. These malware-infected USB drives are left in conspicuous areas (like a bathroom, for example) where they are likely to be seen by potential victims. To really drive interest, hackers will sometimes even label the device with curious notes like “confidential” or logos from the target’s organization to make it seem more legitimate. In an effort to identify the owner of the USB (or simply because they can’t help themselves), employees often plug the USB into their computer. Harmless, right? Unfortunately not. Once inserted, the USB deploys malware. Baiting doesn’t necessarily have to take place in the physical world, though. After the outbreak of COVID-19, several new bait sites were set up. These sites feature fraudulent offers for special COVID-19 discounts, lure people into signing up for free testing, or claim to sell face masks and hand sanitizer. Whaling attack ‘Whaling’ is a more sophisticated evolution of the phishing attack. In these attacks, hackers use very refined social engineering techniques to steal confidential information, trade secrets, personal data, and access credentials to restricted services, resources, or anything with economic or commercial value. While this sounds similar to phishing and spear phishing, it’s different. How? Whaling tends to target business managers and executives (the ‘bigger fish’) who are likely to have access to higher-level data. But, it’s not just their access to data. Whaling is also seen as an effective attack vector because senior leaders themselves are perceived to be “easy targets”. Leaders tend to be extremely busy, too, and are therefore more likely to make mistakes and fall for scams. Perhaps that’s why senior executives are 12x more likely to be the target of social engineering attacks compared to other employees. How to defend against social engineering attacks According to Verizon’s 2020 Data Breach Investigations Report (DBIR), 22% of breaches in 2019 involved phishing and other types of social engineering attacks. And, when you consider the cost of the average breach ($3.92 million), it’s absolutely essential that IT and security teams do everything they can to protect their employees. Here’s how: 1. Put strict policies in place The best place to start is by ensuring that you’ve got strong policies in place that govern the use of company IT systems, including work phones, email accounts, and cloud storage. For example, you could ban the use of IT systems for personal reasons like accessing personal email accounts, social media, and non-work-related websites. You can learn more about why accessing personal email accounts and social media on work devices is dangerous on this blog: Remote Worker’s Guide to: Preventing Data Loss. 2. Educate your workforce Awareness training is key to helping employees understand social engineering risks, learn how to spot these types of attacks, and know what to do if and when they are targeted. In addition to quarterly training sessions either online or in-person, organizations can also invest in phishing simulations. This way, employees get some “real-world” experience, without the risk of compromising data. But, it’s important to note that training alone isn’t enough. 
We explore this in detail in this blog: Pros and Cons of Phishing Awareness Training. 3. Filter inbound emails 90% of all data breaches begin with email. It’s one of the most common attack vectors used by hackers for social engineering purposes, and more. But, with the right threat management tools, IT and security teams can mitigate the risk associated with social engineering attacks by monitoring and filtering inbound emails. It’s important that solutions don’t impede employee productivity, though. For example, if a solution issues false positives, employees may become desensitized to warnings and end up ignoring them instead of heeding the advice. Tessian protects employees from inbound email threats without getting in the way. How does Tessian detect and prevent social engineering? Powered by machine learning, Tessian Defender analyzes and learns from an organization’s current and historical email data and protects employees against inbound email security threats, including whaling, CEO Fraud, BEC, spear phishing, and other targeted social engineering attacks. Best of all, it does all of this silently in the background in real time, and in-the-moment warnings help bolster training and reinforce policies. That means employee productivity isn’t affected and security reflexes improve over time. To learn more about how Tessian can protect your people and data against social engineering attacks on email, book a demo today.
Data Exfiltration DLP Human Layer Security Spear Phishing
Research Shows Employee Burnout Could Cause Your Next Data Breach
By Laura Brooks
12 August 2020
Understanding how stress impacts your employees’ cybersecurity behaviors could significantly reduce the chances of people’s mistakes compromising your company’s security, our latest research reveals.   Consider this. A shocking 93% of US and UK employees told us they feel tired and stressed at some point during their working week, with one in 10 feeling tired every day. And perhaps more worryingly, nearly half (46%) said they have experienced burnout in their career.  Then consider that nearly two-thirds of employees feel chained to their desks, as 61% of respondents in our report said there is a culture of presenteeism in their organization that makes them work longer hours than they need to. Nearly 70% of employees also agreed that there is an expectation within their company to respond to emails quickly.  Employees are overwhelmed, overworked and are feeling the pressure to keep pace with their organization’s demands. 
The effects of the pandemic  The events of 2020 haven’t helped matters either. In the wake of the global pandemic, people have experienced extremely stressful situations that affected their health and finances, against a backdrop of political uncertainty and social unrest, while simultaneously juggling the demands of their jobs. The sudden shift to remote working also meant that people were surrounded by new distractions, and over half of respondents (57%) told us they felt more distracted when working from home. According to Jeff Hancock, a professor at Stanford University who collaborated with us on this report, people tend to make mistakes or decisions they later regret when they are stressed and distracted. This is because when our cognitive load is overwhelmed, and when our attention is split between multiple tasks, we aren’t able to fully concentrate on the task in front of us. What does this mean for security?  Not only are these findings incredibly concerning for employees’ health and wellbeing, they could also explain why mistakes that compromise cybersecurity are happening more than ever. The majority of employees (52%) we surveyed said they make more mistakes at work when they are stressed. Younger employees seem to be more affected by stress than their older co-workers, though. Nearly two-thirds of workers aged 18-30 years old (62%) said they make more mistakes when they are stressed, compared to 45% of workers over 51 years old. Our research also revealed that 43% and 41% of employees believe they are more error-prone when tired and distracted, respectively. In fact, people cited distraction as the top reason for falling for a phishing scam at work, while 44% said they had accidentally sent an email to the wrong person because they were tired. While these mistakes may seem trivial on the surface, phishing is the number one threat vector used by hackers today, and one in five companies told us they have lost customers as a result of an employee sending an email to the wrong person. Far from red-faced embarrassment, these mistakes are compromising businesses’ cybersecurity.
The other problem is that hackers are preying on our vulnerable states, and using them to their advantage. Cybercriminals know people are stressed and looking for information about the pandemic and remote working. They know that some individuals are struggling financially and others have lost their jobs. The lure of a ‘too-good-to-be-true’ deal or ‘get a new job fast’ offer may suddenly look very appealing – especially if the email appears to have come from a trusted source – causing people to click.  So what can businesses do to protect employees from mistakes caused by burnout?  Business and security leaders need to realize that it’s unrealistic for employees to act as the company’s first line of defense. You cannot expect every employee to spot every scam or make the right cybersecurity decision 100% of the time, particularly when they’re dealing with stressful situations and working in environments filled with distractions. When faced with never-ending to-do lists and back-to-back Zoom calls, cybersecurity is the last thing on people’s minds. In fact, a third of respondents told us they “rarely” or “never” think about security when at work.  Businesses, therefore, need to create a culture that doesn’t blame people for their mistakes and, instead, empowers them to do great work without security getting in the way. Understand how stress impacts people’s cybersecurity behaviors and tailor security policies and training so that they truly resonate with every employee.
Educating people on how hackers might take advantage of their stress and explaining the types of scams that people could be susceptible to is an important first step. For example, a hacker could impersonate a senior IT director, supposedly communicating the implementation of new software to accommodate the move back into the office, and ask employees to share their account credentials. Or a hacker may pose as a trusted government agency requesting personal information in relation to a new financial relief scheme.  Businesses should also implement solutions that can help employees make good cybersecurity decisions and reduce risk over time. Security solutions like Tessian use machine learning to understand employee behaviors and alert people to risks on email as and when they arise. By warning individuals in real time, these solutions can educate people as to why the email they were about to send or have just received is a threat to company security. It helps to make people think twice before they do something they might regret.  With remote working here to stay, and with hackers continually finding ways to capitalize on people’s stress in order to manipulate them, businesses must prioritize cybersecurity at the human layer. Only by understanding why people make mistakes that compromise cybersecurity can you begin to prevent burnout from causing your next data breach.
Spear Phishing
Smishing and Vishing: What You Need to Know About These Phishing Attacks
10 August 2020
Whether or not you’re familiar with the terms “smishing” and “vishing,” you may have been targeted by these attacks. This article will: Explain what smishing and vishing attacks are, and how they relate to phishing Provide examples of each type of attack alongside tips on how to identify them Discuss what you should do if you’re targeted by a smishing or vishing attack Smishing, Vishing, and Phishing Smishing and vishing are two types of phishing attacks, sometimes called “social engineering attacks.” While 96% of phishing attacks arrive via email, hackers can also use phone calls, text messages, and social media channels. Regardless of how the attack is delivered, the message will appear to come from a trusted sender and may ask the recipient to: Follow a link, either to download a file or to submit personal information Reply to the message with personal or sensitive information Carry out an action such as purchasing vouchers or transferring funds Types of phishing include “spear phishing,” where specific individuals are targeted by name, and “whaling,” where high-profile individuals such as CEOs or public officials are targeted. All these hallmarks of phishing can also be present in smishing and vishing attacks. What Is Smishing? Smishing – or “SMS phishing” – is phishing via SMS (text messages). The victim of a smishing attack receives a text message, supposedly from a trusted source, that aims to solicit their personal information. These messages often contain a link (generally a shortened URL) and, like other phishing attacks, they’ll encourage the recipient to take some “urgent” action, for example: Claiming a prize Claiming a tax refund Unlocking their online banking account Example of a Smishing Attack Just like phishing via email, the rates of smishing continue to rise year-on-year. According to Consumer Reports, the Federal Trade Commission (FTC) received 93,331 complaints about spam or fraudulent text messages in 2018 – an increase of 30% from 2017. Here’s an example of a smishing message:
The message above appears to be from the Driver and Vehicle Licensing Agency (DVLA) and invites the recipient to visit a link. Note that the link appears to lead to a legitimate website – gov.uk is a UK government-owned domain. The use of a legitimate-looking URL is an excellent example of the increasingly sophisticated methods that smishing attackers use to trick unsuspecting people into falling for their scams. How to Identify a Smishing Attack As we’ve said, cybercriminals are using increasingly sophisticated methods to make their messages as believable as possible. That’s why many thousands of people fall for smishing scams every year. In fact, according to a study carried out by Lloyds TSB, participants were shown 20 emails and texts, half of which were inauthentic. Only 18% of participants correctly identified all of the fakes. So, what should you look for? Just like a phishing attack via email, a smishing message will generally: Convey a sense of urgency Contain a link (even if the link appears legitimate, like in the example above) Contain a request for personal information Other clues that a message might be from a hacker include the phone number it comes from (large institutions like banks will generally send text messages from short-code numbers, while smishing texts often come from “regular” 11-digit mobile numbers) and the presence of typos. If you’re looking for more examples of phishing attacks (which might help you spot attacks delivered via text message), check out these articles: How to Identify and Prevent Phishing Attacks How to Catch a Phish: A Closer Look at Email Impersonation Phishing vs. Spear Phishing: Differences and Defense Strategies COVID-19: Real-Life Examples of Opportunistic Phishing Emails What Is Vishing? Vishing – or “voice phishing” – is phishing via phone call. Vishing scams commonly use Voice over IP (VoIP) technology. Like targets of other types of phishing attacks, the victim of a vishing attack will receive a phone call (or a voicemail) from a scammer, pretending to be a trusted person who’s attempting to elicit personal information such as credit card or login details. So, how do hackers pull this off? They use a range of advanced techniques, including: Faking caller ID, so it appears that the call is coming from a trusted number Utilizing “war dialers” to call large numbers of people en masse Using synthetic speech and automated call processes A vishing scam often starts with an automated message, telling the recipient that they are the victim of identity fraud. The message requests that the recipient calls a specific number. When doing so, they are asked to disclose personal information. Hackers may then use the information themselves to gain access to other accounts or sell the information on the Dark Web. The Latest Vishing News: Updated August 2020 On August 20, 2020, the Federal Bureau of Investigation (FBI) and Cybersecurity and Infrastructure Security Agency (CISA) issued a joint statement warning businesses about an ongoing vishing campaign. The agencies warn that cybercriminals have been exploiting remote-working arrangements throughout the COVID-19 pandemic. The scam involves spoofing login pages for corporate Virtual Private Networks (VPNs), so as to steal employees’ credentials. These credentials can be used to obtain additional personal information about the employee. The attackers then use unattributed VoIP numbers to call employees on their personal mobile phones. 
The attackers pose as IT helpdesk agents, and use the stolen credentials in a fake verification process to earn the employee’s trust. The FBI and CISA recommend several steps to help avoid falling victim to this scam, including restricting VPN connections to managed devices, improving 2-Step Authentication processes, and using an authentication process for employee-to-employee phone communications. Example of a Vishing Attack Again, just like phishing via email and smishing, the rates of vishing attacks are continually rising. According to one report, 49% of organizations surveyed were victims of a vishing attack in 2018. Vishing made headlines most recently in July 2020 after the Twitter scam. After a vishing attack, high-profile users had their accounts hacked and sent out tweets encouraging their followers to donate Bitcoin to a specific cryptocurrency wallet, supposedly in the name of charitable giving or COVID-19 relief. This vishing attack involved Twitter employees being manipulated, via phone, into providing access to internal tools that allowed the attackers to gain control over Twitter accounts, including those of Bill Gates, Joe Biden, and Kanye West. This is an example of spear phishing, conducted using vishing as an entry point. It’s believed that the perpetrators earned at least $100,000 in Bitcoin before Twitter could contain the attack. You can read more cybersecurity headlines from the last month here. How to Identify a Vishing Attack Vishing attacks share many of the same hallmarks as smishing attacks. In addition to these indicators, we can categorize vishing attacks according to the person the attacker is impersonating: Businesses or charities – Such scam calls may inform you that you have won a prize, present you with an investment opportunity, or attempt to elicit a charitable donation. If it sounds too good to be true, it probably is. Banks – Banking phone scams will usually incite alarm by informing you about suspicious activity on your account. Always remember that banks will never ask you to confirm your full card number over the phone. Government institutions – These calls may claim that you are owed a tax refund or required to pay a fine. They may even threaten legal action if you do not respond. Tech support – Posing as an IT technician, an attacker may claim your computer is infected with a virus. You may be asked to download software (which will usually be some form of malware or spyware) or allow the attacker to take remote control of your computer. How to Prevent Smishing and Vishing Attacks The key to preventing smishing and vishing attacks is security training. While individuals can find resources online, employers should be providing all employees with IT security training. It’s actually a requirement of data security laws, such as the General Data Protection Regulation (GDPR) and the New York SHIELD Act. You can read more about how compliance standards affect cybersecurity on our compliance hub. Training can help ensure all employees are familiar with the common signs of smishing and vishing attacks, which could reduce the possibility that they will fall victim to such an attack. But, what do you do if you receive a suspicious message? The first rule is: don’t respond. If you receive a text requesting that you follow a link, or a phone message requesting that you call a number or divulge personal information – ignore it, at least until you’ve confirmed whether or not it’s legitimate. The message itself can’t hurt you, but acting on it can.  
If the message appears to be from a trusted institution, search for their phone number and call the institution directly. For example, if a message appears to be from your phone provider, search for your phone provider’s customer service number and discuss the request directly with the operator.   If you receive a vishing or smishing message at work or on a work device, make sure you report it to your IT or security team. If you’re on a personal device, you should report significant smishing and vishing attacks to the relevant authorities in your country, such as the Federal Communications Commission (FCC) or Information Commissioner’s Office (ICO).  For more tips on how to identify and prevent phishing attacks, including vishing and smishing, follow Tessian on LinkedIn or subscribe to our monthly newsletter. 
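Before moving on, here is a hedged Python sketch that ties together the smishing red flags described earlier in this article: urgent language, links (often shortened), requests for personal information, and senders that aren’t institutional short codes. The keyword list, shortener domains, and scoring are illustrative assumptions, not a definitive detector.

```python
import re

URGENCY_KEYWORDS = {"urgent", "immediately", "suspended", "verify", "prize", "refund"}
URL_SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co"}  # illustrative list only

def smishing_indicators(message: str, sender: str) -> list:
    """Return the red flags found in an SMS (heuristic, not authoritative)."""
    flags = []
    text = message.lower()

    if any(word in text for word in URGENCY_KEYWORDS):
        flags.append("urgent or alarming language")

    for host in re.findall(r"https?://([^\s/]+)", text):
        if host in URL_SHORTENERS:
            flags.append(f"shortened link ({host}) hides the real destination")
        else:
            flags.append(f"contains a link ({host}) - verify the domain before clicking")

    if re.search(r"(password|card number|date of birth|ssn)", text):
        flags.append("requests personal or financial information")

    if re.fullmatch(r"\+?\d{10,13}", sender):
        flags.append("sent from a regular mobile number, not an institutional short code")

    return flags

sms = "URGENT: your account is suspended. Verify your card number at http://bit.ly/x1y2"
print(smishing_indicators(sms, "+447911123456"))
```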
Human Layer Security Spear Phishing
Pros and Cons of Phishing Awareness Training
By Maddie Rosenthal
03 August 2020
Over the last several weeks, phishing, spear phishing, and social engineering attacks have dominated headlines. But, phishing isn’t a new problem. These scams have been circulating since the mid-’90s.  So, what can security leaders do to prevent being targeted? Unfortunately, not much. Hackers play the odds and fire off thousands of phishing emails at a time, hoping that at least a few will be successful. The key, then, is to train employees to spot these scams. That’s why phishing awareness training is such an essential part of any cybersecurity strategy. But is phishing awareness training alone enough? Keep reading to find out the pros and cons of phishing awareness training as well as the steps security leaders need to take to level up their inbound threat protection. Still wondering how big of a problem phishing really is? Check out the latest phishing statistics for 2020.
To make this article easy-to-navigate, we’ll start with a simple list of the pros and cons of phishing awareness training. For more information about each point, you can click the text to jump down on the page. 
Pros of phishing awareness training Phishing awareness training introduces employees to threats they might not be familiar with While people working in security, IT, or compliance are all too familiar with phishing, spear phishing, and social engineering, the average employee isn’t. The reality is, they might not have even heard of these terms. That means phishing awareness training is an essential first step. To successfully spot a phish, employees have to know such attacks exist. By showing employees examples of attacks – including the subject lines to watch out for, a high-level overview of domain impersonation, and the types of requests hackers will generally make – they’ll immediately be better placed to identify what is and isn’t a phishing attack. Looking for resources to help train your employees? Check out this blog with a shareable PDF. It includes examples of phishing attacks and reasons why the email is suspicious. Phishing awareness training can teach employees more about existing policies and procedures Again, showing employees what phishing attacks look like is step one. But ensuring they know what to do if and when they receive one is an essential next step and is your chance to remind employees of existing policies and procedures. For example, who to report attacks to within the security or IT team. Importantly, though, phishing awareness training should also reinforce the importance of other policies, specifically around creating strong passwords, storing them safely, and updating them frequently. After all, credentials are the number one “type” of data hackers harvest in phishing attacks. Phishing awareness training can help security leaders identify particularly risky and at-risk employees By getting teams across departments together for training sessions and phishing simulations, security leaders will get a bird’s-eye view of employee behavior. Are certain departments or individuals more likely to click a malicious link than others? Are senior executives skipping training sessions? Are new starters struggling to pass post-training assessments? These observations will help security leaders stay ahead of security incidents, can inform subsequent training sessions, and could help pinpoint gaps in the overall security framework.
Phishing awareness training can help satisfy compliance standards While you can read more about various compliance standards – including GDPR, CCPA, HIPAA, and GLBA – on our compliance hub, they all include a clause that outlines the importance of implementing proper data security practices. What are “proper data security practices?” This criterion has – for the most part – not been formally defined. But, phishing awareness training is certainly a step in the right direction and demonstrates a concerted effort to secure data company-wide.   Phishing awareness training can help foster a strong security culture In the last several years (due in part to increased regulation) cybersecurity has become business-critical. But, it takes a village to keep systems and data safe, which means accountability is required from everyone to make policies, procedures, and tech solutions truly effective.  That’s why creating and maintaining a strong security culture is so important. While this is easier said than done, training sessions can help encourage employees – whether in finance or sales – to become less passive in their roles as they relate to cybersecurity, especially when gamification is used to drive engagement. You can read more about creating a positive security culture on our blog. Phishing awareness training can enable employees to spot scams in their personal lives, too The point of phishing awareness training is to prevent successful attacks in the workplace. But, it’s important to remember that phishing attacks are targeted at consumers, too. That’s why the most frequently impersonated brands are household names like Netflix and Facebook. Why does this matter? Because phishing attacks have serious consequences, and not just for larger organizations. If an employee was scammed in a consumer attack, they could lose thousands of dollars or even have their identity stolen. It’s hard to imagine a world in which this wouldn’t affect their work. The bottom line: prevention is better than cure and knowledge is power. Phishing awareness training won’t just protect your organization’s data and assets, it’ll empower your people to protect themselves outside of the office, too. 
Cons of phishing awareness training Phishing awareness training can’t prevent human error While phishing awareness training will help employees spot phishing scams and make them think twice before clicking a link or downloading an attachment, it’s not a silver bullet. Even the most security-conscious and tech-savvy employees can – and do – fall for phishing attacks. Case in point: Employees working in the tech industry are the most likely to click on links in phishing emails, with nearly half (47%) admitting to having done it. This is 22% higher than the average across all industries. As the saying goes, “to err is human.” Phishing awareness training can’t evolve as quickly as threats do Hackers think and move quickly and are constantly crafting more sophisticated attacks to evade detection. That means that training that was relevant three months ago may not be today. We only have to look at the spike in COVID-19-themed phishing attacks starting in March for proof. Prior to the outbreak of the pandemic, very few phishing awareness programs would have trained employees to look for impersonations of the World Health Organization, for example. Likewise, impersonations of collaboration tools like Zoom took off as soon as workforces shifted to remote working. (Click here for more real-life examples of COVID-19 phishing emails.) What could be next? Phishing awareness training has hidden costs According to Mark Logsdon, Head of Cyber Assurance and Oversight at Prudential, there are three fundamental flaws in training: it’s boring, often irrelevant, and expensive. We’ll cover the first two below but, for now, let’s focus on the cost. Needless to say, the cost of training and simulation software varies vendor by vendor. But, the solution itself is far from the only cost to consider. What about lost productivity? Imagine you have a 1,000-person organization and, as part of an aggressive inbound strategy, you’ve opted to hold training every quarter. Training lasts, on average, three hours. That’s 12,000 lost hours a year (see the short calculation below). While – yes – a successful attack would cost more, we can’t forget that phishing awareness training alone doesn’t work. (See point 1: Phishing awareness training can’t prevent human error.)
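To make the lost-productivity math above concrete, here is a tiny sketch that reproduces the estimate. The headcount, session length, and cadence are the article’s own illustrative figures; the hourly cost is a made-up assumption added purely to show how the estimate could be extended.

```python
employees = 1_000          # organization size from the example above
hours_per_session = 3      # average length of one training session
sessions_per_year = 4      # quarterly training

lost_hours = employees * hours_per_session * sessions_per_year
print(lost_hours)          # 12000 hours of lost productivity per year

# Hypothetical fully-loaded cost per employee-hour, purely for illustration
cost_per_hour = 50
print(f"${lost_hours * cost_per_hour:,}")  # $600,000
```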
Phishing awareness training isn’t targeted (or engaging) enough Going back to what Mark Logsdon said: Training is boring and often irrelevant. It’s easy to see why. You can’t apply one lesson to an entire organization – whether it’s 20 people or 20,000 – and expect it to stick. It has to be targeted based on age, department, and tech literacy. Age is especially important. According to Tessian’s latest research, nearly three-quarters of respondents who admitted to clicking a phishing email were aged between 18 and 40. In comparison, just 8% of people over 51 said they had done the same. However, the older generation was also the least likely to know what a phishing email was. Jeff Hancock, the Harry and Norman Chandler Professor of Communication at Stanford University and an expert in trust and deception, explained how tailored training programs could help. “A one-size-fits-all approach won’t work. Different generations have grown up with tech in different ways, and security training needs to reflect this. That’s not to say that we should think that people over 50 are tech-illiterate, though. Businesses need to consider what motivates each age group and tailor training accordingly.”  “Being respected at work is incredibly important to an older generation, so telling them that they don’t understand something isn’t an effective way to educate them on the threats. Instead, businesses should engage them in a conversation, helping them to identify how their strengths and weaknesses could be used against them in an attack.”  “Many younger employees, on the other hand, have never known a time without the internet and they don’t want to be told how to use it. This generation has a thirst for knowledge, so teach them the techniques that hackers will use to target them. That way, when they see a scam, they’ll be able to unpick it and recognize the tactics being used on them.”  Phishing awareness training can’t force employees to care about cybersecurity Unfortunately, the average employee is less focused on cybersecurity and more focused on getting their jobs done. That’s why one-third (33%) rarely or never think about security at work, and over half (54%) of employees say they’ll find a workaround if security software or policies prevent them from doing their job. While – yes – security leaders can certainly reinforce the importance of software and policies, training alone won’t help control employees’ behavior or inspire every single person to become a champion of cybersecurity. Phishing awareness can’t change quick-to-click company cultures It’s widely accepted that time pressure negatively impacts decision accuracy. But did you know that individuals who are expected to respond to emails quickly are also the most likely to click on phishing emails?  
It makes sense. If you’re rushing to read and fire off emails – especially when you’re working off of laptops, phones, and even watches – you’re more likely to make mistakes.
Should I create a phishing awareness training program? The short answer: Absolutely. Phishing awareness training programs can help teach employees what phishing is, how to spot phishing emails, what to do if they’re targeted, and the implications of falling for an attack. But, as we’ve said, training isn’t a silver bullet. It will curb the problem, but it won’t prevent mistakes from happening. That’s why security leaders need to bolster training with technology that detects and prevents inbound threats. That way, employees aren’t the last line of defense. But, given the frequency of attacks year-on-year, it’s clear that spam filters, antivirus software, and other legacy security solutions aren’t enough. That’s where Tessian comes in. How does Tessian detect and prevent targeted phishing attacks? Tessian fills a critical gap in security strategies that SEGs, spam filters, and training alone can’t fill. By learning from historical email data, Tessian’s machine learning algorithms can understand specific user relationships and the context behind each email. This allows Tessian Defender to detect a wide range of impersonations, spanning obvious, payload-based attacks to difficult-to-spot, socially engineered ones like CEO Fraud and Business Email Compromise. Once detected, real-time warnings are triggered and explain exactly why the email was flagged, including specific information from the email. (See below.) This is an important function. Why? Because, according to Jeff, “People learn best when they get fast feedback and when that feedback is in context.” 
These in-the-moment warnings reinforce training and policies and help employees improve their security reflexes over time.  To learn more about how tools like Tessian Defender can prevent spear phishing attacks, speak to one of our experts and request a demo today.