Human Layer Security Podcast
Five Things I Learned From Launching A Podcast
By Tim Sadler
14 April 2021
At the start of this year, Tessian started a podcast. Why? Because since we launched the Human Layer Security category in 2013, the human factor has become one of the biggest considerations in cybersecurity today. Every day, we are speaking to CISOs, CIOs, business leaders and security professionals about how to secure the human layer. And I’m not just talking about conversations related to how to stop the ever-rising number of phishing attacks. We’re talking about insider threats and security incidents caused by simple human error, too. We’re discussing ways in which CISOs can better understand their employees’ behaviors and ways of working, in order to build security strategies that protect them and empower them to do great work. And we’re talking about how to get buy-in from boards. Rather than keeping the conversations to ourselves, we wanted the podcast to provide a platform for inspiring IT leaders, thought-provoking academics, and ethical hackers to discuss why it’s so important for businesses to protect their people – not just machines and data – and share their learnings so that other security teams can do it too.
It’s been a lot of fun and I’ve spoken to some incredible people. So here are my highlights and my top learnings as we close out Season 1 of the RE:Human Layer Security podcast:

1. CISOs are doing an amazing job in their relentless roles. As Simon Hodgkinson, former CISO at bp, said, the job of the CISO is truly 24/7. And it’s becoming “more and more challenging as the threats become more advanced and regulatory landscapes become even more complicated”. Hearing the work that CISOs like Jerry Perullo at ICE, Ray Espinoza at Cobalt, Tim Fitzgerald at ARM and Anne Benigsen at Bankers’ Bank of the West are doing to not only navigate these landscapes and keep their companies safe, but also to turn their people into security champions and make security as seamless as possible, is really inspiring.

2. … and they want to do more. It was clear from the leaders I spoke to that they have a “duty of care to continue raising awareness” and “invest in making sure people are able to do the right thing.” Some believe, however, there are more engaging ways to do it, while others think there is more work to be done to get employees to buy in to the security culture. It was great to understand how they plan to do this.
3. Security can learn so much from psychology. In one of my favourite episodes, academics Dr Karen Renaud and Dr Marc Dupuis question why businesses continually use fear – a short-term emotion – to try to engender long-term behavioral change in cybersecurity. They also explain why the role of employee self-efficacy is so important in encouraging safer security practices. Their insight into what factors make people more or less likely to adopt safe cybersecurity behaviors makes me wonder whether FUD in security has had its day.

4. If you don’t get to know your people well, the bad guys certainly will. Ethical hackers and social engineering experts like Craig Hays and Jenny Radcliffe explained how cybercriminals select their targets and methods of attack, emphasizing the need for companies – at manager level – to know their people really well. As Jenny said, “the answer to becoming a more secure organization […] is to know your humans better than the bad guys.”
5. Employees aren’t the weakest link. The age-old saying that people are the weakest link in security is something our guests don’t believe in. To Dan Raywood, people are neither the strongest nor the weakest link, but rather “an essential part of your business”. Tim Fitzgerald agreed, stating that, as security leaders, “we try to take a look in the mirror and say, are we providing these people with the tools they need to help them avoid these types of threats or scenarios?”

It’s been a privilege to speak with all of our guests on the RE:Human Layer Security podcast and, if you haven’t already, I encourage you to listen to their interviews and subscribe to the show. We’re now planning Season 2, so stay tuned for that – and if you’d like to get involved or hear more about what we’re doing, please contact me on LinkedIn or Twitter.
Human Layer Security
Machine vs. Machine: Setting the Record Straight on Offensive AI
By Trevor Luker
08 April 2021
In recent years, we’ve heard the term “Offensive AI” used more frequently to describe a new type of cyber threat – one that sees cybercriminals using artificial intelligence (AI) to supercharge their cyber attacks, advance impersonation scams, and avoid detection. In response, organizations are being advised to “fight fire with fire” and invest in defensive AI solutions in order to stay ahead of the bad guys, a sort of modern-day “spy on spy” warfare tactic.

Sure, cybercriminals are using more sophisticated technologies to advance their attack campaigns, but let’s start by getting one thing straight: where we are at the moment is not “AI”. For a system to be considered intelligent, it needs to exhibit autonomous behavior and goal seeking. What we are seeing, though, is an emerging use of machine learning (ML) and adaptive algorithms, combined with large datasets, that is proving effective for cybercriminals in mounting attacks against their targets. Semantics, I know. But it’s important that we manage the hype. Even the washing machine I just purchased says it includes “AI” functionality. It doesn’t.

Organizations do, though, need to be aware of attackers’ use of offensive ML, and every company needs to understand how to defend itself against it. I can help.
So, what is offensive ML?

At this stage, offensive ML is often the use of ML and large data lakes to automate the first stages of cyber attacks. In particular, the reconnaissance, weaponization, and delivery stages of the Cyber Kill Chain lend themselves to automation. It allows attacks to be carried out on a much larger scale, and faster, than ever before. It also helps attackers overcome their human-resource problem – yes, even cybercriminals have this problem; skilled cyber staff are hard to find. Automation frees up attackers’ time, keeping humans involved only in the later stages of an attack, once an exploitable weakness has been found. To a large degree, many cyber attacks have become a data science exercise, as opposed to requiring stereotypical ‘elite hackers’.

A good offensive ML system will also have a feedback mechanism to tune the underlying models of an attack – for example, based on the success of a lure in front of a potential victim in a phishing attack. The models will start to favor successful approaches and, over time, increase in efficiency and effectiveness.

How is offensive ML being used today?

One example of offensive ML I’ve observed is large-scale scanning of perimeter systems for fingerprinting purposes. Fingerprinting the perimeter of organizations – associating IP addresses with organizations, public data (DNS, MX lookups), and industry sectors – is a simple data-management exercise. However, if this is combined with Common Vulnerabilities and Exposures (CVE) updates, and possibly dark web zero-day exploits, it provides attackers with a constantly updated list of vulnerable systems. You can learn more about zero-day vulnerabilities here: What is a Zero-Day Vulnerability? 3 Real-World Examples.

Organizations defending themselves against cybercrime frequently have to go through a time-consuming testing process before deploying a patch and, in some cases, systems are just not patched at all.
This gives an attacker a window of opportunity to deploy automated scripts against any targets the ML has selected as meeting the attack criteria. No humans need be involved except to set the parameters of the attack campaign: it’s fully automated. An attacker could, for example, have the ML algorithms send emails to known invalid email addresses at the target organization to see what information or responses come back. Do the email headers give clues about internal systems and defenses? Do any of the systems indicate unpatched vulnerabilities?

They can use ML to understand more about the employees they will target, too, crawling through social media platforms like LinkedIn and Twitter to identify employees who recently joined an organization, workers who have moved roles, or people who are dissatisfied with their company. Why? Because these people are prime targets for phishing.

Combining this information is step one. Attackers then just need to understand how to get past defenses so that the phishing emails land in a target employee’s inbox. MX records – the mail exchanger records that specify which mail servers accept email on behalf of a domain – are public information, and would tell the ML which Secure Email Gateway (SEG) a company is using, so that an attacker could tailor the lure to have the best chance of getting through an organization’s defenses.

Another area in which offensive ML proves problematic for organizations is facial recognition. Attackers can deploy ML-powered facial recognition to match company photos with photos from across the Internet, and then build up a graph of relationships between people and their target – an exercise in understanding “who knows who?”. With this information, bad actors could deploy social media bots under ML control to build trust with the target and their associates.
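To illustrate how little effort this kind of fingerprinting takes, here is a minimal sketch that maps an organization’s published MX hostnames to the email gateway vendor likely sitting behind them. The hostname suffixes and vendor names below are illustrative assumptions, not an authoritative list:

```python
# Illustrative mapping from MX hostname suffixes to email gateway vendors.
# These patterns are examples only, not an authoritative or complete list.
SEG_PATTERNS = {
    "pphosted.com": "Proofpoint",
    "mimecast.com": "Mimecast",
    "protection.outlook.com": "Microsoft Defender for Office 365",
}

def guess_gateway(mx_hosts):
    """Return the first vendor whose suffix matches any published MX host."""
    for host in mx_hosts:
        cleaned = host.lower().rstrip(".")  # drop the trailing DNS root dot
        for suffix, vendor in SEG_PATTERNS.items():
            if cleaned.endswith(suffix):
                return vendor
    return "Unknown"
```

Defenders can run the same check against their own domains to see exactly what an attacker sees from public DNS alone.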
From public sources, an attacker knows their target’s interests, who they work with, and who they live with; all of this is gold dust when it comes to the phishing stage, as an attacker can make the scam more believable by referring to associates, shared interests, hobbies, and so on.

Using offensive ML in ransomware attacks

There are other reasons to be concerned about the impact offensive ML can have on your organization’s security. Attackers can use it to advance their ransomware attacks. Ransomware attacks – and any exploits used to deliver the ransomware – have a short shelf life, because defenses are constantly evolving too. The attacker’s ROI therefore depends on choosing targets carefully. Good reconnaissance ensures resources are used more efficiently and effectively than a simpler scatter-gun approach. For any cybercriminal involved in “ransomware for hire”, offensive ML proves invaluable for earning a higher fee. They can use the data gathered above to set the pricing model for their customers: the better defended – or more valuable – the target, the higher the price. All of this could be, and likely is, automated.

So, how can organizations protect themselves from an offensive AI/ML attack?

It’s the classic “spy vs spy” scenario; attacks evolve, and so do defenses. With traditional, rule-based defensive systems, though, the defender is always at a disadvantage: until an attack is observed, a rule can’t be written to counteract it. However, if an organization uses ML, the defensive systems don’t need to wait for new rules; they can react to anomalous changes in behavior autonomously and adjust defensive thresholds accordingly. In addition, defensive ML systems can more accurately adjust thresholds based on the observed riskiness of behavior within a defender’s organization; there is no longer a need for a one-size-fits-all defense.
A good ML-based system will adapt to each company, even each employee or department, and set corresponding defense levels. Traditional, rule-based systems can’t do this. In my opinion, the future of defensive security is a data issue; the days of the traditional, human-heavy Security Operations Center are numbered.

What questions should organizations ask to ensure they have the right defenses in place?

First and foremost, ask your IT service provider why they think their system is actually AI. Because it almost certainly isn’t. If the vendor maintains that they have a real AI solution, be very skeptical about them as a reliable vendor. Ask vendors how their system would react to a zero-day exploit: How long would their system need to deal with a novel attack? Would the user need to wait for a vendor update? Finally, ask vendors about data and threat sharing. All companies are under reconnaissance and attack, and the more data that is shared about this, the better the defenses. So ask: does the vendor share attack data, even with their competitors?
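The per-user threshold adaptation described above can be sketched with a minimal anomaly check: compare a new observation against that user’s own historical baseline. This is a sketch of the general idea (a simple z-score test), not any particular vendor’s implementation:

```python
from statistics import mean, stdev

def is_anomalous(history, observed, z_threshold=3.0):
    """Flag a new observation (e.g., emails sent to previously unseen
    external domains in a day) that deviates sharply from this user's
    own historical baseline."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu  # any deviation from a flat baseline stands out
    return abs(observed - mu) / sigma > z_threshold
```

Because the baseline is built per user or per department, the effective threshold adapts automatically: a volume of activity that is normal for a sales team can still be flagged for a finance clerk.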
Human Layer Security
Risk Management Made Easy: Introducing Tessian Human Layer Risk Hub
By Ed Bishop
06 April 2021
Today, comprehensive visibility into employee risk is one of the biggest challenges security and risk management leaders face. Why? Because most security solutions offer a limited view of risk and don’t offer any insights into the likelihood of an employee falling for a phishing attack or exfiltrating data. Worse still, when it is available, risk information is siloed and hard to interpret. Insights around security awareness training exist in separate systems from insights related to threats that have been detected and prevented. There’s no integration, which means security leaders can’t get a full view of their risk profile. Without integration and visibility, it’s impossible to take a tailored, proactive approach to preventing threats. It’s an uphill battle. You may not even know where to start… But, we have a solution. With Tessian Human Layer Risk Hub, our customers can now deeply understand their organization’s security posture, with granular visibility into employee risk and insights into individual user risk levels and drivers.
This is the only solution that offers protection, training, and risk analytics all in one platform, giving you a clear picture of your organization’s risk and the tools needed to reduce that risk.

How does Tessian Human Layer Risk Hub work?

With Tessian Human Layer Risk Hub, security leaders can quantify risk, take targeted actions, and offer the right training to continuously lower the risks posed by employees’ poor security decisions. Let’s look at an example.

1. An employee in the Finance department is flagged as a high-risk user based on their access to sensitive information, their low level of security awareness training, and how frequently they’re targeted by spear phishing attacks. Tessian looks at five risk drivers – accidental data loss, data exfiltration, social engineering, sensitive data handling, and security awareness – to generate individual risk scores. Each employee’s risk score is dynamically updated, decreasing when an employee makes the correct security decision, and increasing when they do something risky, such as clicking on a phishing email or sending company data to personal email accounts.
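To make the dynamic-scoring idea concrete, here is a minimal sketch of how a per-driver score might rise and fall with events and then be aggregated per user. The event weights and the simple mean aggregation are hypothetical illustrations, not Tessian’s actual (proprietary) model:

```python
def update_score(score, event_weight):
    """Adjust a 0-100 driver score: positive weights (risky actions, e.g.
    clicking a phishing link) raise it; negative weights (safe decisions)
    lower it. The result is clamped to the 0-100 range."""
    return max(0.0, min(100.0, score + event_weight))

def user_risk(driver_scores):
    """Aggregate per-driver scores into one user-level score (simple mean)."""
    return sum(driver_scores.values()) / len(driver_scores)
```

Scores like these can then be averaged again at department and company level, which is what makes benchmarking against peers straightforward.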
2. Based on these insights, Tessian intelligently and automatically identifies actions teams can take within the platform (for example, custom protections for certain user groups) to reinforce policies, improve security awareness, and change behavior to help drive down risk.  Security teams can also implement additional processes and controls outside of Tessian to exercise better control over specific risks. 
3. With custom protections enabled, Tessian’s in-the-moment warnings help nudge employees towards safer behavior. For example, you could quickly and easily configure a trigger that always warns and educates users when they receive an email from a new domain that mentions a wire transfer. But, even without custom protections, Tessian Defender can detect spear phishing attacks with incredible accuracy. And, because the warnings are written in clear, easy-to-understand language, employees are continuously learning and leveling up their security awareness. If targeted by a spear phishing attack, employees would receive a warning that looks something like this.
4. With continuous protection and in-the-moment training, security leaders will see employees move from high-risk users to low-risk users over time. Risk scores and drivers are aggregated at employee, department, and company-level and are benchmarked against peers. This makes tracking and reporting on progress simple and effective. 
Benefits of Tessian Human Layer Risk Hub

Tessian Human Layer Risk Hub enables security leaders to reduce risk and improve their organization’s security posture with unique insights you can’t get anywhere else.

Targeted remediation at scale. With a bird’s-eye view of your most risky and at-risk user groups, security leaders can make better decisions about how to distribute budget and resources, what mitigation measures to prioritize, and when to intervene. This goes beyond email. If you can see who has access to sensitive information – and how they’re handling that sensitive information – you’ll be able to create and update policies that really work.

More effective training. Every year, businesses spend nearly $300,000 and 276 hours on security awareness training. But training is only effective when the messages are tailored and the employee is engaged. Tessian Human Layer Risk Hub gives security, risk management, and compliance leaders the insights they need to create tailored training programs that cut through. And Tessian’s in-the-moment warnings help nudge employees towards safer behavior in real time.

Clear ROI. Many solutions simply report risk; they don’t actually reduce it. Tessian is different. Security leaders can easily measure and demonstrate how risk has changed over time and how the platform has proactively helped improve the organization’s security posture, and can even apply learnings from the platform to inform future decisions. The benefit? You’ll become a trusted partner across your organization.

Defensible audit. Tessian’s detailed reports and audit logs provide defensible proof against data breaches. If a risk is identified, you’ll be able to formally document all associated events, and track exposure, owner, mitigation decisions, and actions.
The bottom line: Tessian Human Layer Risk Hub gives security teams a unified view and a shared language to communicate risk to the business, demonstrate progress towards lowering risk, and effectively secure their human layer.

Learn more about Tessian

Interested in learning more about Tessian Human Layer Risk Hub? Current Tessian customers can get in touch with their Customer Success Manager. Not yet a Tessian customer? Learn more about the new Human Layer Risk Hub, explore our customer stories, or book a demo now. And, to be the first to hear about new product updates, sign up for our newsletter below.
Human Layer Security Spear Phishing
Types of Email Attacks Every Business Should Prepare For
01 April 2021
Email remains the number one tool of business communication. The email network is open to practically anyone – and its flexibility, reliability, and convenience mean it’s not going away any time soon. But for all its benefits, email can also be a vector for serious cyberattacks. Social engineering attacks like phishing can lead to data breaches, malware attacks, and billions of dollars in losses for businesses worldwide. This article will explain the major types of email attacks, provide some data on how common they are, and consider the devastating impact that email attacks can have on your business.

Types of email attacks

First, we’ll walk you through some of the most common types of email attacks.

Phishing

Phishing can mean one of two things: an “umbrella term” meaning any social engineering attack that takes place via email, or a specific type of email attack where the attacker sends a lot of malicious emails in an untargeted way. When we use “phishing” as an umbrella term, it refers to the most common type of email attack. Any malicious email that tries to trick you into clicking a link, opening a file, or taking any other action that causes harm can be part of a phishing attack. All of the other types of email attacks we’ll look at below are forms of phishing, if we use the term in this broad way. When we use “phishing” as a specific term, it means a “bulk” or “spray and pray” email attack, where the malicious email is sent to many unnamed recipients. Here’s an example:
What makes this a phishing email? There’s no addressee: it says “Hello,” not “Hello Rob.” The “update account now” button leads to a credential phishing page. And most importantly – Netflix didn’t send it!

Further reading: ⚡ What is Phishing? ⚡ Spam vs. Phishing: The Difference Between Spam and Phishing ⚡ How Easy is it to Phish? ⚡ How to Avoid Falling For a Phishing Attack | 6 Useful Tips

Spear phishing

Spear phishing is an email attack targeting a specific individual. So, whereas bulk phishing uses a net – sending emails to as many potential victims as possible – spear phishing uses a spear to target one specific victim. Again, spear phishing can also be an umbrella term, in that there are lots of different types of spear phishing attacks. Some of the examples below, including Business Email Compromise (BEC) and CEO fraud, are almost always spear phishing attacks. Why? Because whenever a phishing attack targets a specific individual, it’s a spear phishing attack. Here’s an example:
What makes this a spear phishing email? It targets a specific person. The “click here” link leads to a credential phishing website. And most importantly – you guessed it – DHL didn’t send it!

Further reading: ⚡ What is Spear Phishing? ⚡ What’s the Difference Between Phishing and Spear Phishing? ⚡ Spear Phishing: Screenshots of Real Email Attacks

Business Email Compromise (BEC)

Business Email Compromise (BEC) is any phishing attack where the attacker uses a hacked, spoofed, or impersonated corporate email address. In the sense that the attacker is impersonating a business, the Netflix and DHL examples above are both BEC attacks. But we normally use “BEC” to refer to a more sophisticated form of email attack. For example, one of the biggest cyberattacks of all time is an example of BEC. Between 2013 and 2015, a Lithuanian fraudster named Evaldas Rimasauskas scammed Facebook and Google out of around $121 million by impersonating their suppliers and sending fake invoices via email.

Further reading: ⚡ What is Business Email Compromise (BEC)? ⚡ 5 Real Examples of Business Email Compromise
CEO fraud In a CEO fraud attack, the attacker impersonates a company executive and targets a less senior employee. Here’s an example:
What makes this a CEO fraud attack? The sender’s email address impersonates a real company executive (note the method here is email impersonation – “microsott.com” – but other methods, such as email spoofing, are also common). The sender (“Leon”) puts a lot of pressure on the recipient (Tess); stressed people make poor decisions. And the attack involves wire transfer fraud. While not all CEO fraud attacks involve wire transfer fraud, it is a very common tactic.

Further reading: ⚡ What is CEO Fraud? ⚡ CEO Fraud Prevention: 3 Effective Solutions

How common are email attacks?

Email attacks are on the rise, and are now extremely common. According to the FBI’s Internet Crime Complaint Center (IC3), phishing incidents more than doubled from 2019 to 2020, costing victims over $54 million in direct losses. Verizon says 22% of breaches in 2019 involved phishing. And around 75% of organizations around the world experienced some kind of phishing attack in 2020. Want more data on phishing and other email attacks? See our article Phishing Statistics (Updated 2021).

Consequences of email attacks

What are the main consequences of email attacks on businesses and their customers?

Data breaches: Attackers use techniques such as credential phishing to exfiltrate your customers’ personal information. Data breaches can attract investigations, regulatory fines, and class-action lawsuits. IBM estimates that the average data breach costs a business $3.86 million.

Malware: Some email attacks aim to deposit a malicious payload on the recipient’s device. This payload is normally some form of malware, for example: a virus, which can infect other devices on your network; spyware, which can log your keystrokes and online activity; or ransomware, which encrypts your valuable data and demands you pay a ransom to get it back.
Wire transfer fraud: Spear phishing attacks – particularly if they involve BEC or CEO fraud – often attempt to persuade the target into transferring funds to a bank account controlled by the attacker. And it really works – that’s why the FBI calls BEC “the $26 billion scam”.
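One of the red flags discussed above – a lookalike sender domain such as “microsott.com” – can be caught with a simple similarity check. Here is an illustrative sketch (the trusted-domain list and threshold are assumptions, not a production-grade detector):

```python
from difflib import SequenceMatcher

# Illustrative list of domains your organization trusts.
TRUSTED_DOMAINS = ["microsoft.com", "google.com", "dhl.com"]

def lookalike_of(sender_domain, threshold=0.85):
    """Return the trusted domain a sender closely resembles (but does not
    exactly match), or None. Near-matches are impersonation red flags."""
    sender = sender_domain.lower()
    for trusted in TRUSTED_DOMAINS:
        if sender == trusted:
            return None  # exact match: not a lookalike
        if SequenceMatcher(None, sender, trusted).ratio() >= threshold:
            return trusted
    return None
```

Modern email security products apply much richer versions of this idea, combining it with sender history, display-name checks, and header analysis.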
Human Layer Security DLP Data Exfiltration
11 Examples of Data Breaches Caused By Misdirected Emails
17 March 2021
While phishing, ransomware, and brute force attacks tend to make headlines, misdirected emails (emails sent to the wrong person) are actually a much bigger problem. In fact, in organizations with 1,000 employees, at least 800 emails are sent to the wrong person every year. That’s more than two a day. You can find more insights in The Psychology of Human Error and The State of Data Loss Prevention 2020. Are you surprised? Most people are. That’s why we’ve rounded up this list of 11 real-world (recent) examples of data breaches caused by misdirected emails. And, if you skip down to the bottom, you’ll see how you can prevent misdirected emails (and breaches!) in your organization. If you’re looking for a bit more background, check out these two articles: What is a Misdirected Email? Consequences of Sending an Email to the Wrong Person

11 examples of data breaches caused by misdirected emails

1. University support service mass emails sensitive student information

University and college wellbeing services deal with sensitive personal information, including details of the health, beliefs, and disabilities of students and their families. Most privacy laws impose stricter obligations on organizations handling such sensitive personal information – and there are harsher penalties for losing control of such data. So imagine how awful the Wellbeing Adviser at the University of Liverpool must have felt when they emailed an entire school’s worth of undergraduates with details about a student’s recent wellbeing appointment. The email revealed that the student had visited the Adviser earlier that day, that he had been experiencing ongoing personal difficulties, and that the Adviser had advised the student to attend therapy. A follow-up email urged all the recipients to delete the message “immediately” and appeared to blame the student for providing the wrong email address.
One recipient of the email reportedly said: “How much harder are people going to find it actually going to get help when something so personal could wind up in the inbox of a few hundred people?”

2. Trump White House emails Ukraine “talking points” to Democrats

Remember in 2019, when then-President Donald Trump faced accusations of pressuring Ukraine into investigating corruption allegations against now-President Joe Biden? Once this story hit the press, the White House wrote an email – intended for Trump’s political allies – setting out some “talking points” to be used when answering questions about the incident (including blaming the “Deep State media”). Unfortunately for the White House, they sent the email directly to political opponents in the Democratic Party. White House staff then attempted to “recall” the email. If you’ve ever tried recalling an email, you’ll know that it doesn’t normally work. Recalling an email only works if the recipient is on the same Exchange server as you – and only if they haven’t read the email. Looking for information on this? Check out this article: You Sent an Email to the Wrong Person. Now What? Unsurprisingly, this was not the case for the Democrats who received the White House email, who subsequently leaked it on Twitter.

I would like to thank @WhiteHouse for sending me their talking points on how best to spin the disastrous Trump/Zelensky call in Trump’s favor. However, I will not be using their spin and will instead stick with the truth. But thanks though. — US Rep Brendan Boyle (@RepBrendanBoyle) September 25, 2019

3. Australia’s Department of Foreign Affairs and Trade leaked 1,000 citizens’ email addresses

On September 30, 2020, Australia’s Department of Foreign Affairs and Trade (DFAT) announced that the personal details of over 1,000 citizens were exposed after an employee failed to use BCC.
So, who were the citizens? Australians who had been stuck in other countries after inbound flights were limited (even rationed) following the outbreak of COVID-19. The plan was to increase entry quotas and start an emergency loans scheme for those in dire need. Those who had their email addresses exposed were among the potential recipients of the loan. Immediately after the email was sent, employees at DFAT tried to recall it, and even requested that recipients delete the email from their IT systems and “refrain from any further forwarding of the email to protect the privacy of the individuals concerned.”

4. Serco exposes contact tracers’ data in email error

In May 2020, an employee at Serco, a business services and outsourcing company, accidentally cc’d instead of bcc’ing almost 300 email addresses. Harmless, right? Unfortunately not. The email addresses – which are considered personal data – belonged to newly recruited COVID-19 contact tracers. While a Serco spokesperson apologized and announced that they would review and update their processes, the incident nonetheless put confidentiality at risk and could leave the firm under investigation by the ICO.

5. Sonos accidentally exposes the email addresses of hundreds of customers in email blunder

In January 2020, 450+ email addresses were exposed after they were (similar to the example above) cc’d rather than bcc’d. Here’s what happened: a Sonos employee was replying to customers’ complaints. Instead of putting all the emails in BCC, they were CC’d, meaning that every customer who received the email could see the personal email addresses of everyone else on the list. The incident was reported to the ICO and is subject to potential fines.
6. Gender identity clinic leaks patient email addresses

In September 2019, a gender identity clinic in London exposed the details of close to 2,000 people on its email list after an employee cc’d recipients instead of bcc’ing them. Two separate emails were sent, with about 900 people cc’d on each. While email addresses on their own are considered personal information, it’s important to bear in mind the nature of the clinic. As one patient pointed out, “It could out someone, especially as this place treats people who are transgender.” The incident was reported to the ICO, which is currently assessing the information provided. But a similar incident may offer a glimpse of what’s to come. In 2016, the email addresses of 800 patients who attended HIV clinics were leaked because they were – again – cc’d instead of bcc’d. An NHS Trust was fined £180,000. Bear in mind, this fine was issued before the introduction of the GDPR.

7. University mistakenly emails 430 acceptance letters, blames “human error”

In January 2019, the University of South Florida St. Petersburg sent nearly 700 acceptance emails to applicants. The problem? Only 250 of those students had actually been accepted. The other 400+ hadn’t. While this isn’t considered a breach (because no personal data was exposed), it does go to show that fat-fingering an email can have a number of consequences. In this case, the university’s reputation was damaged, hundreds of students were left confused and disappointed, and the employees responsible for the mistake likely suffered red-faced embarrassment on top of other, more formal ramifications. The investigation and remediation of the incident will also have taken up plenty of time and resources.

8. Union watchdog accidentally leaked secret emails from confidential whistleblower

In January 2019, an official at Australia’s Registered Organisations Commission (ROC) accidentally leaked confidential information, including the identity of a whistleblower. How?
The employee entered an incorrect character when sending an email. It was then forwarded to someone with the same last name – but a different first initial – as the intended recipient. The next day, the ROC notified the whistleblower whose identity was compromised and disclosed the mistake to the Office of the Australian Information Commissioner as a potential privacy breach.

9. Major Health System Accidentally Shares Patient Information Due to Third-Party Software for the Second Time This Year

In May 2018, Dignity Health – a major health system headquartered in San Francisco that operates 39 hospitals and 400 care centers along the West Coast – reported a breach affecting 55,947 patients to the U.S. Department of Health and Human Services. So, how did it happen? Dignity says the problem originated from a sorting error in an email list that had been formatted by one of its vendors. The error resulted in Dignity sending emails to the wrong patients, with the wrong names. Because Dignity is a health system, these emails also often contained the patient’s doctor’s name. That means PII and protected health information (PHI) were exposed.

10. Inquiry reveals the identity of child sexual abuse victims

This 2017 email blunder earned an organization a £200,000 ($278,552) fine from the ICO. The penalty would have been even higher if the GDPR had been in force at the time. When you look at the detail of this incident, it’s easy to see why the ICO wanted to impose a more severe fine. The Independent Inquiry into Child Sexual Abuse (IICSA) sent a Bcc email to 90 recipients, all of whom were involved in a public hearing about child abuse. Sending a Bcc means none of the recipients can see each other’s details. But the sender then sent a follow-up email to correct an error—using the “To” field by mistake.
The organization made things even worse by sending three follow-up emails asking recipients to delete the original message—one of which generated 39 subsequent “Reply all” emails in response. The error revealed the email addresses of all 90 recipients and 54 people’s full names. But is simply revealing someone’s name that big of a deal? Actually, a person’s name can be very sensitive data—depending on the context. In this case, IICSA’s error revealed that each of these 54 people might have been victims of child sexual abuse.

11. Boris Johnson’s dad’s email blunder nearly causes diplomatic incident

Many of us know what it’s like to be embarrassed by our dad. Remember when he interrogated your first love interest? Or that moment your friends overheard him singing in the shower? Or when he accidentally emailed confidential information about the Chinese ambassador to the BBC? OK, maybe not that last one. That happened to the father of U.K. Prime Minister Boris Johnson in February 2020. Johnson’s dad, Stanley Johnson, was emailing British officials following a meeting with Chinese ambassador Liu Xiaoming. He wrote that Liu was “concerned” about a lack of contact from the Prime Minister to the Chinese state regarding the coronavirus outbreak. The Prime Minister’s dad inexplicably copied the BBC into his email, providing some lucky journalists with a free scoop about the state of U.K.-China relations. It appears the incident didn’t cause any big diplomatic issues—but we can imagine how much worse it could have been if Johnson had revealed more sensitive details of the meeting.
Prevent misdirected emails (and breaches) with Tessian Guardian Regardless of your region or industry, protecting customer, client, and company information is essential. But, to err is human. So how do you prevent misdirected emails? With machine learning.  Tessian turns an organization’s email data into its best defense against human error on email. Our Human Layer Security technology understands human behavior and relationships and automatically detects and prevents emails from being sent to the wrong person. Yep, this includes typos, accidental “reply alls” and cc’ing instead of bcc’ing. Tessian Guardian can also detect when you’ve attached the wrong file. Interested in learning more about how Tessian can help prevent accidental data loss and data exfiltration in your organization? You can read some of our customer stories here or book a demo.
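Tessian doesn’t publish its algorithm, but the intuition behind catching one common class of misdirected email can be sketched in a few lines of Python. This toy check (the contact list, threshold, and function names are illustrative assumptions, not Tessian’s implementation) flags a recipient who has never been emailed before but whose address is suspiciously close to a known contact — the classic fat-fingered or mis-autocompleted address:

```python
from difflib import SequenceMatcher

# Addresses this sender has emailed before (in practice, built from mailbox history).
KNOWN_CONTACTS = {
    "jane.doe@acmecorp.com",
    "finance@acmecorp.com",
    "sam.lee@partnerfirm.co.uk",
}

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def check_recipients(recipients: list[str], threshold: float = 0.85) -> list[tuple[str, str]]:
    """Return (suspect, likely_intended) pairs for recipients that are
    unknown but very close to an existing contact."""
    warnings = []
    for rcpt in recipients:
        if rcpt in KNOWN_CONTACTS:
            continue  # emailed before; assume intentional
        closest = max(KNOWN_CONTACTS, key=lambda known: similarity(rcpt, known))
        if similarity(rcpt, closest) >= threshold:
            warnings.append((rcpt, closest))  # likely a typo of a real contact
    return warnings

# A .co/.com slip gets flagged; a genuinely new contact does not.
print(check_recipients(["jane.doe@acmecorp.co", "newclient@example.com"]))
```

A real system layers many more signals on top (relationship history, email content, project context), but even this sketch shows why the problem is tractable with data rather than with user training alone.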
Human Layer Security
Email is the #1 Threat Vector. Here’s Why.
11 March 2021
Billions of people use email every day — it’s the backbone of online collaboration, administration, and customer service. But businesses lose billions to email-based cyberattacks every year. Workers use email to exfiltrate sensitive company data. And simple human errors, like sending an email to the wrong person, can be highly problematic. The bottom line: for all its benefits, email communication is risky and, according to research, it’s the threat vector security leaders are most concerned about protecting. This article will look at the main threats associated with using email — and consider what you can do to mitigate them.

The scope of the problem

Before we look at some of the risks of email communication, let’s consider the scope of the problem. After all, around 4 billion people worldwide use email regularly. 2020 estimates showed that people send and receive around 306.4 billion emails per day — up 4% from 2019. The Digital Marketing Association suggests that 90% of people check their email at least once per day. Adobe data shows that email is the preferred contact method for marketing communications — by a long shot. So, with alternative platforms like Slack and Teams rising in popularity, why does email remain the world’s main artery of communication? Email is platform-independent, simple, and accessible. No company would consider cutting email out of its communication channels. But for every “pro” involved in using email, there’s a “con.” If you’re relying on email communication, you need to mitigate the risks.

Security risks involved in using email

A major risk of email communication is security. Because it’s so flexible and easy to use, email carries a unique set of security risks.

Phishing attacks

Phishing is a type of online “social engineering” attack. The attacker impersonates somebody that their target is likely to trust and manipulates them into providing sensitive information, transferring money, or revealing login credentials.
Around 90% of phishing occurs via email. Here are the main types:

Spear phishing: The attacker targets a specific individual (instead of sending bulk phishing emails indiscriminately).
Whaling: The attacker targets a CEO or other executive-level employee.
Business Email Compromise (BEC): A phishing attack in which the attacker appears to be using a legitimate corporate email address.
CEO fraud: The attacker impersonates a company’s CEO and targets a junior employee.
Wire transfer phishing: The attacker persuades a company employee to transfer money to a fraudulent bank account.
Credential phishing: The attacker steals login details, such as usernames or passwords.

While most people today are attuned to the problem of phishing, it’s only getting worse. Don’t believe us? Check out these 50+ must-know phishing statistics. That means phishing protection is an essential part of using email. Looking for more information on inbound email protection? Click here.

Insider threats

As well as inbound email threats, like phishing, you must also consider the threats that can arise from inside your business. Tessian survey data suggests that 45% of employees download, save, send, or otherwise exfiltrate work-related documents before leaving their job. The most competitive industries — like tech, management consultancy, and finance — see the highest rates of this phenomenon. Email is a quick and convenient way to send large amounts of data to external contacts — and can be a pipeline for disgruntled or corrupt employees to siphon off company assets.
If you want to learn more about insider threats, including real-world examples, check out these articles:

What is an Insider Threat?
Insider Threat Types and Real-World Examples
Insider Threat Statistics You Should Know
Insider Threat Indicators: 11 Ways to Recognize an Insider Threat

Remote working

Phishing is a booming criminal industry — and there’s evidence that the new patterns of remote working are making phishing more common than ever. Tessian research shows that 65% of US and UK employees received a phishing email when working remotely in 2020 due to the COVID-19 pandemic, and 82% of IT leaders think their company is at greater risk of phishing attacks when employees are working from home. If your company operates a hybrid or remote working model, email security is even more crucial.

Human error on email

Innocent mistakes can be just as harmful as cyberattacks. In fact, 88% of data breaches are caused by human error.

Misdirected emails

Research shows that most people have sent at least one email to the wrong person, with nearly one-fifth admitting to sending an email to someone outside of their organization. Our platform data also shows that there are, on average, 800 misdirected emails per year in companies with more than 1,000 employees. That’s two a day. Sending an email to the wrong recipient is so common, you might not think it’s a big deal. But data from the UK’s Information Commissioner’s Office (ICO) consistently shows that misdirected emails are the number one cause of reportable data breaches. Misspelling, autocorrect, reply-all — these are all reasons you might send an email to the wrong recipient. It’s a serious risk of email communication — but you can prevent it.

Misattached files

Along with misdirected emails, “misattached files” are a major cause of data loss. New data shows some very worrying trends related to people sending emails with incorrect attachments.
First, here’s what’s inside the documents people are sending in error:

42% contained company research or data
39% contained security information, such as login credentials
38% contained financial information and client information
36% contained employee data

The survey also shows that – as a result of sending misattached files – one-third lost a customer or client, and 31% faced legal action.

Email communication: how to mitigate the risks

The risks we’ve described all depend on human vulnerabilities. Cyberattackers prey on people’s trust and deference to authority — and anyone can make a mistake when sending an email. That’s why email security is a must. Looking for help choosing a solution? We’ve put together this handy guide: 9 Questions That Will Help You Choose the Right Email Security Solution. If you want more tips, how-to guides, and checklists related to email security specifically and cybersecurity more broadly, sign up for our newsletter!

While you’re here… Tessian software mitigates all types of risks associated with email communication:

Tessian Defender: Automatically prevents spear phishing, account takeover, business email compromise, and other targeted email attacks.
Tessian Enforcer: Automatically prevents data exfiltration over email.
Tessian Guardian: Automatically prevents accidental data loss caused by misdirected emails and misattached files.
Human Layer Security
5 Cybersecurity Stats You Didn’t Know (But Should)
By Maddie Rosenthal
08 March 2021
When it comes to cybersecurity – specifically careers in cybersecurity – there are a few things (most) people know. There’s a skills gap, with 3.12 million unfilled positions. There’s also a gender gap, with a workforce that’s almost twice as likely to be male. But, we have good news. We surveyed 200 women working in cybersecurity and 1,000 recent grads (18-25 years old) for our latest research report, Opportunity in Cybersecurity Report 2021, and the skills gap and the gender gap both seem to be closing, while women working in the field are happier than ever, despite a tumultuous year. Here are five cybersecurity stats you didn’t know (but should). P.S. There are even more stats in the full report, and plenty of first-hand insights from women currently working in the field and recent grads considering a career in cybersecurity.
1. 94% of cybersecurity teams hired in 2020 As we all know, COVID-19 has had a profound impact on unemployment rates. But, as the global job market has contracted, cybersecurity appears to have expanded. According to our research, a whopping 94% of cybersecurity teams hired in 2020. Better still, this hiring trend isn’t isolated; it’s consistent across industries, from Healthcare to Finance. Want to know which industries were the most likely to hire in 2020? Download the full report. 2. Nearly half of women say COVID-19 POSITIVELY affected their career
This is one figure that we’re especially proud to report: 49% of women say COVID-19 positively affected their career in cybersecurity. In the midst of a global recession, this is truly incredible. Is it increased investment in IT that’s driving this contentment? The flexibility of working from home? An overwhelming sense of job security? We asked female cybersecurity professionals, and they answered. See what they had to say.

3. 76% of 18-25 year olds say cybersecurity is “interesting”

Last year, we asked women working in cybersecurity why others might not consider a job in the field. 42% said it’s because the industry isn’t considered “cool” or “exciting”. This year, we went directly to the source and asked recent grads (18-25 years old), and our data tells a different story. 76% of them said that cybersecurity is interesting. This is encouraging, especially since…

4. ⅓ of recent grads would consider a job in cybersecurity

While we don’t have any data to compare and contrast this number to, we feel confident saying that interest in the field is growing. Perhaps fueled by the fact that it is – actually – interesting? 31% of recent grads say they would consider a job in cybersecurity. But men are almost twice as likely as women to float the idea. Want to know why? We pulled together dozens of open-ended responses from our survey respondents. Click here to see what they said.

5. There’s $43.1 billion up for grabs…
Today, the total value of the cybersecurity industry in the US is $107.7 billion. But, if the gender gap were closed, and the number of women working in the field equaled the number of men, the total value would jump to $138.1 billion. And, if women and men earned equal salaries, it’d increase even more.  The total (potential) value of the industry? $150.8 billion.
Human Layer Security Spear Phishing
Romance Fraud Scams Are On The Rise
By Laura Brooks
11 February 2021
Cybercriminals are exploiting “lockdown loneliness” for financial gain, according to various reports this week, which reveal that the number of incidents of romance fraud and romance scams increased in 2020.  UK Finance, for example, reported that bank transfer fraud related to romance scams rose by 20% in 2020 compared to 2019, while Action Fraud revealed that £68m was lost by people who had fallen victim to romance fraud last year – an increase on the year before. Why? Because people have become more reliant on online dating and dating apps to connect with others amid social distancing restrictions put in place for the Covid-19 pandemic.
With more people talking over the internet, there has been greater opportunity for cybercriminals to trick people online. Adopting a fake identity and posing as a romantic interest, scammers play on people’s emotions and build trust with their targets over time, before asking them to send money (perhaps for medical care), provide access to bank accounts, or share personal information that could later be used to commit identity fraud. Cybercriminals will play the long game; they have nothing but time on their hands. A significant percentage of people have been affected by these romance scams. In a recent survey conducted by Tessian, one in five US and UK citizens said they had been a victim of romance fraud, with men and women being targeted equally.
Interestingly, people aged between 25-34 years old were the most likely to be affected by romance scams. Tessian data shows that of the respondents who said they had been a victim of romance fraud, 45% were aged between 25-34 versus just 4% of respondents who were aged over 55 years old. This may be because romance fraud victims are most commonly targeted on social media platforms like Facebook or Instagram, with a quarter of respondents (25%) saying they’d been successfully scammed on these channels. This was closely followed by email (23%), while one in five people said they’d been targeted on mobile dating apps, and 16% said they’d been scammed via online dating websites. This behavior is quite typical, say experts. Often romance fraud will start on dating apps or official dating websites, but scammers will move to social media, email or text in order to reduce the trail of evidence.
How to avoid falling for a romance scam

It’s important to remember that most dating apps and websites are completely safe. However, as social distancing restrictions remain in place for many regions, people should consider how they could be targeted by social engineering attacks and phishing scams at this time. We advise people to question any requests for personal or financial information from individuals they do not know or have not met in person, and to verify the identity of someone they’re speaking to via a video call. We also recommend the following:

Never send money or a gift online to someone who you haven’t met in person.
Be suspicious of requests from someone you’ve met on the internet. Scammers will often ask for money via wire transfers or reload cards because they’re difficult to reverse.
Be wary of any email or DM you receive from someone you don’t know. Never click on a link or download an attachment from an unusual email address.
Keep social media profiles and posts private. Don’t accept friend requests or DMs from people you don’t know personally.

The FBI and Action Fraud have also provided citizens with useful advice on how to avoid falling for a romance scam and guidance for anyone who thinks they may have already been targeted by a scammer. And if you want to learn more about social engineering attacks, you can read Tessian’s research How to Hack a Human.
Human Layer Security DLP
Industry-First Product: Tessian Now Prevents Misattached Files on Email
By Harry Wetherald
11 February 2021
Misdirected emails – emails sent to the wrong person – are the number one security incident reported to the Information Commissioner’s Office. And, according to Tessian platform data, an average of 800 misdirected emails are sent every year in organizations with over 1,000 employees.

An unsolved problem

We solved misdirected emails years ago with Tessian Guardian, our solution for accidental data loss. But sending an email to the wrong person is just one part of the problem. What about sending the wrong attachment? After all, our data shows that 1 in 5 external emails contain an attachment, and new Tessian research reveals that nearly half (48%) of employees have attached the wrong file to an email. We call these “misattached files” — and we’re happy to announce a new, industry-first feature that prevents them from being sent.

The consequences of attaching the wrong file

The consequences of a misattached file depend on what information is contained in the attachments. According to Tessian’s survey results, 42% of documents sent in error contained company research and data. More worryingly, nearly two-fifths (39%) contained security information like passwords and passcodes, and another 38% contained financial information and client information. 36% of mistakenly attached documents contained employee data. Any one of the above mistakes could result in lost customer data and IP, reputational damage, fines for non-compliance, and customer churn. In fact, one-third of respondents said their company lost a customer or client following this case of human error, and a further 31% said their company faced legal action. Until now, there weren’t any email security tools that could consistently identify when wrong files were being shared. This meant attachment mistakes went undetected… until there were serious consequences.

How does Tessian detect misattached files?
The latest upgrade to Tessian Guardian leverages historical learning to understand whether an employee is attaching the correct file or not. When an email is being sent, Guardian’s machine learning (ML) algorithm uses deep content inspection, natural language processing (NLP), and heuristics to detect attachment anomalies such as:

Counterparty anomalies: The attachment is related to a company that isn’t typically discussed with the recipients. For example, attaching the wrong invoice.
Name anomalies: The attachment is related to an individual who isn’t typically discussed with the recipients. For example, attaching the wrong individual’s legal case files.
Context anomalies: The attachment looks unusual based on the email context. For example, attaching financial-model.xlsx to an email about a “dinner reservation.”
File type anomalies: The attachment file type hasn’t previously been shared with the receiving organization. For example, sending an .xlsx file to a press agency.
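To make the “context anomaly” idea concrete, here is a deliberately simple Python sketch — not Guardian’s production ML, just an illustration of the underlying intuition — that flags an attachment whose filename shares no vocabulary with the email it’s attached to:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short fragments."""
    return {t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 2}

def context_anomaly(attachment_name: str, subject: str, body: str) -> bool:
    """Flag when an attachment's name shares no vocabulary with the email
    it is attached to, e.g. financial-model.xlsx on a dinner invite."""
    attach_words = tokens(attachment_name)
    email_words = tokens(subject + " " + body)
    return bool(attach_words) and attach_words.isdisjoint(email_words)

print(context_anomaly("financial-model.xlsx", "Dinner on Friday?",
                      "Shall we book the usual place at 8pm?"))  # anomalous
print(context_anomaly("q3-invoice.pdf", "Q3 invoice attached",
                      "Hi, please find the invoice for Q3."))    # expected
```

A real detector replaces the naive word overlap with deep content inspection and NLP over the attachment’s contents, plus the historical sending patterns described above — but the core question is the same: does this file belong in this conversation?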
If a misattached file is detected, the sender is immediately alerted to the error before the email is sent. Best of all, the warnings are helpful, not annoying and flag rates are low. This means employees can do their jobs without security getting in the way.  Want to learn more about how Tessian detects attachment anomalies before they’re sent? Download the data sheet.
Benefits for Tessian customers

Tessian is the only solution on the market that can solve the problem of misattached files, giving customers complete protection from accidental data loss on email. In addition to preventing human error and subsequent breaches, Tessian Guardian has several features that help ease the burden of compliance on thinly-stretched security teams and give key stakeholders peace of mind. These include:

Automated protection: Tessian Guardian automatically detects and prevents misattached files. No rules or manual investigation required.
Flexible configuration options: With this new feature, customers will be able to configure Guardian’s algorithm to enable and/or disable specific use-cases. This allows administrators to balance user experience with the level of protection appropriate to their risk appetite.
Data-rich dashboards: For the first time, customers will have visibility of how many misattached files are being sent in their organization and by whom. This demonstrates clear ROI and makes auditing and reporting easy.
Learn more about Tessian Interested in learning more about Tessian Guardian’s new features? Current Tessian customers can get in touch with your Customer Success Manager. Not yet a Tessian customer? Learn more about our technology, explore our customer stories, or book a demo now.
Human Layer Security
Check out the Speaker Line-Up for Tessian Human Layer Security Summit!
By Maddie Rosenthal
05 February 2021
On March 3, Tessian is hosting the first Human Layer Security Summit of 2021. And, after a hugely successful series of summits in 2020,  we’re (once again) putting together an agenda that’ll help IT, compliance, legal, and business leaders overcome security challenges of today and tomorrow.
What’s on the agenda for Human Layer Security Summit?

Panel discussions, fireside chats, and presentations will be focused on solving three key problems:

Staying ahead of hackers to prevent advanced email threats like spear phishing, account takeover (ATO), and CEO fraud
Reducing risk over time by building a strong security culture
Building future-proof security strategies that engage everyone, from employees to the board

So, who will be sharing their expertise to help you overcome these problems? 20+ speakers and partners. Some of the best and the brightest in the field. If you want to learn more about what to expect, you can watch sessions from previous summits on-demand here.

Who’s speaking at Human Layer Security Summit?

While we don’t want to give all the surprises away just yet, we will share a sneak peek at 11 speakers. Make sure to follow us on LinkedIn and Twitter and subscribe to our newsletter for the latest updates, including detailed information about each of the nine sessions.

Elsa Ferriera, CISO at Evercore: For nearly 10 years, Elsa has managed risks, audited business processes, and maintained security at Evercore, one of the most respected investment banking firms in the world.
Gaynor Rich, Global Director of Cybersecurity Strategy at Unilever: Well-known for her expertise in cybersecurity, data protection, and risk management, Gaynor brings over 20 years of experience to the summit, the last six of which have been spent at one of the most universally recognized brands: Unilever.
Samy Kamkar, Renowned Ethical Hacker: As a teenager, Samy released one of the fastest-spreading computer viruses of all time. Now, he’s a compassionate advocate for young hackers, a whistleblower, and a privacy and security researcher.
Marie Measures, CTO at Sanne Group: With over two decades of experience in the field, Marie has headed up information and technology at Capital One, Coventry Building Society, and now Sanne Group, the leading provider of alternative asset and corporate services.
Joe Mancini, SVP, Enterprise Risk at BankProv: Joe is the Senior Vice President, Enterprise Risk at BankProv, an innovative commercial bank headquartered in Amesbury, MA. Joe has implemented a forward-thinking, business-enabling risk management strategy at BankProv which allows the fast-paced organization to safely expand its products and services to better suit its growing client base. Prior to his role at BankProv, he spent several years as the CISO at Radius Bank, and ten years at East Boston Savings Bank in various risk-related roles. Joe is an expert in emerging technologies such as digital currency and blockchain, along with data security, and risk and compliance requirements in a digital world.
David Aird, IT Director at DAC Beachcroft: Having held the position of IT Director at DAC Beachcroft – one of the top 20 UK law firms – for nearly eight years, David has led the way to be named Legal Technology Team of the Year in 2019 and received awards in both 2017 and 2019 for Excellence in IT Security.
Dan Raywood, Former Security Analyst and Cybersecurity Journalist: Dan – the former Deputy Editor of Infosecurity Magazine and former analyst for 451 Research – is bringing decades of experience to the summit.
Jenny Radcliffe, “The People Hacker”: Jenny is a world-renowned social engineer, penetration tester, speaker, and the host of the Human Factor Security podcast.
Patricia Patton, Executive Coach: The former Global Head of Professional Development at Barclays and Executive Coach at LinkedIn, Patricia’s expertise will help security leaders forge better relationships with people and teams and teach attendees how to lead and influence.
Nina Schick, Deepfake Expert: Nina is an author, broadcaster, and advisor who specializes in AI and deepfakes. Over the last decade, she’s worked with Joe Biden, President of the United States, and has contributed to Bloomberg, CNN, TIME, and the BBC.
Annick O’Brien, Data Protection Officer and Cyber Risk Officer: As an international compliance lawyer, certified Compliance Officer (ACCOI), member of the IAPP, and registered DPO, Annick specializes in privacy, GDPR program management, and training awareness projects.

Don’t miss out. Register for the Human Layer Security Summit now. It’s online, it’s free, and – for anyone who can’t make it on the day – you’ll be able to access all the sessions on-demand.
A word about our sponsors

We’re thrilled to share a list of sponsors who are helping make this event the best it can be: Digital Shadows, Detectify, The SASIG, Mishcon de Reya, HackerOne, AusCERT, and more. Stay tuned for more announcements and resources leading up to the event.
Human Layer Security
The 7 Deadly Sins of SAT
02 February 2021
Security Awareness Training (SAT) just isn’t working: for companies, for employees, for anybody. By 2022, 60% of large organizations will have comprehensive SAT programs (source: Gartner Magic Quadrant for SAT 2019), with global spending on security awareness training for employees predicted to reach $10 billion by 2027. While this adoption and market size seem impressive, SAT in its current form is fundamentally broken and needs a rethink. Fast. There are 7 fundamental problems with SAT today:

1. It’s a tick box

SAT is seen as a “quick win” when it comes to security – a tick-box item that companies can complete in order to tell their shareholders, regulators, and customers that they’re taking security seriously. Often the evidence that these initiatives have been conducted is much more important than their effectiveness.

2. It’s boring and forgettable

Too many SAT programs are delivered once or twice a year in unmemorable sessions. However we dress it up, SAT just isn’t engaging. The training sessions are too long, the videos are cringeworthy, and the experience is delivered through clunky interfaces reminiscent of CD-ROM multimedia from the ’90s. What’s more, after just one day people forget more than 70% of what was taught in training, while 1 in 5 employees don’t even show up for SAT sessions.

3. It’s one-size-fits-all

We give the same training content to everyone, regardless of their seniority, tenure, location, department, etc. This is a mistake. Every employee has different security characteristics (strengths, weaknesses, access to data and systems), so why do we insist on giving everybody the same material to focus on?

4. It’s phishing-centric

Phishing is a huge risk when it comes to Human Layer Security, but it’s by no means the only one.
So many SAT programs are overly focused on the threat of phishing and completely ignore other risks caused by human error, like sending emails and attachments to the wrong people or sending highly confidential information to personal email accounts. Learn more about the pros and cons of phishing awareness training.

5. It’s one-off

Too many SAT programs are delivered once or twice a year in lengthy sessions. This makes it really hard for employees to remember the training they were given (when they completed it five months ago), and the sessions themselves have to cram in too much content to be memorable.
6. It’s expensive So often companies only look at the license cost of a SAT program to determine costs—this is a grave mistake. SAT is one of the most expensive parts of an organization’s security program, because the total cost of ownership includes not just the license costs, but also the total cost of all employee time spent going through it, not to mention the opportunity cost of them doing something else with that time. 
7. It’s disconnected from other systems

SAT platforms are generally standalone products that don’t talk to other parts of the security stack. This means organizations aren’t leveraging the intelligence from these platforms to drive better outcomes in their security practice (preventing future breaches), nor are they using that intelligence to improve and iterate on the company’s overall security culture.

The solution? SAT 2.0

So, should we ditch our SAT initiative altogether? Absolutely not! People are now the gatekeepers to the most sensitive systems and data in the enterprise, and providing security awareness and training to them is a crucial pillar of any cybersecurity initiative. It is, however, time for a new approach: one that’s automated, in-the-moment, and long-lasting. Read more about Tessian’s approach to SAT 2.0 here.
Human Layer Security
SAT is Dead. Long Live SAT.
By Tim Sadler
02 February 2021
Security Awareness Training (SAT) just isn’t working: for companies, for employees, for anybody.

The average human makes 35,000 decisions every single day. On a weekday, the majority of these decisions are made at work: decisions about things like sharing data, clicking a link in an email, or entering password credentials into a website. Employees have enormous power at their fingertips, and if any one of those 35,000 decisions is a bad one, whether somebody breaks the rules, makes a mistake, or is tricked, it can lead to serious security incidents for a business.

The way we tackle this today? With SAT. By 2022, 60% of large organizations will have comprehensive SAT programs (source: Gartner Magic Quadrant for SAT 2019), with global spending on security awareness training for employees predicted to reach $10 billion by 2027. While this adoption and market size seem impressive, SAT in its current form is fundamentally broken and needs a rethink. Fast.

As Tessian’s customer Mark Lodgson put it, “there are three fundamental problems with any awareness campaign. First, it’s often irrelevant to the user. The second, that training is often boring. The third, it takes a big chunk of money out of the business.”
The 3 big problems with security awareness training

There are three fundamental problems with SAT today:

1. SAT is a tick-box exercise

SAT is seen as a “quick win” when it comes to security: a box-ticking item that companies complete in order to tell their shareholders, regulators, and customers that they’re taking security seriously. Often the evidence that these initiatives have been conducted matters much more than their effectiveness.

Worse, too many SAT programs are delivered once or twice a year in lengthy sessions. This makes it really hard for employees to remember the training they were given (when they completed it five months ago), and the sessions themselves have to cram in too much content to be memorable.

2. SAT is one-size-fits-all and boring

We give the same training content to everyone, regardless of seniority, tenure, location, or department. This is a mistake. Every employee has different security characteristics (strengths, weaknesses, access to data and systems), so why do we insist on giving everybody the same material?

Also, however we dress it up, SAT just isn’t engaging. The training sessions are too long, the videos are cringeworthy, and the experience is delivered through clunky interfaces reminiscent of CD-ROM multimedia from the ’90s. What’s more, after just one day people forget more than 70% of what was taught in training, while 1 in 5 employees don’t even show up for SAT sessions. (More on the pros and cons of phishing awareness training here.)

3. SAT is expensive

So often companies only look at the license cost of a SAT program to determine costs. This is a grave mistake. SAT is one of the most expensive parts of an organization’s security program, because the total cost of ownership includes not just the license costs but also the cost of all the employee time spent going through it, not to mention the opportunity cost of employees doing something else with that time.
Enter security awareness training 2.0

So, should we ditch our SAT initiative altogether? Absolutely not! People are now the gatekeepers to the most sensitive systems and data in the enterprise, and providing security awareness and training to them is a crucial pillar of any cybersecurity initiative. It is, however, time for a new approach. Enter SAT 2.0.

SAT 2.0 is automated, in-the-moment, and continuous

Rather than scheduling SAT once or twice per year in hour-long blocks, SAT should be delivered continuously through nudges that give employees in-the-moment feedback about suspicious activity or risky behavior, and that help them improve their security behavior over time. For example, a SAT program should be able to detect when an employee is about to send all of your customer data to their personal email account, stop the email from being sent, and educate the employee in the moment about why this isn’t OK.

SAT also shouldn’t rely on security teams to disseminate it to employees. It should be as automated as possible, presenting itself when it’s needed most and adapting automatically to the specific needs of the employee in the moment. Automated security makes people better at their jobs.

SAT 2.0 is engaging, memorable, and specific to each employee

Because each employee has different security strengths and vulnerabilities, we need to make sure that SAT is tailored to suit their needs. For example, employees on the finance team might need extra support with BEC awareness, while people on the sales team might need extra support with preventing accidental data loss. Tailoring SAT means employees can spend their limited time learning the things most likely to drive impact for them and their organization.

SAT should also put the real-life threats that employees face into context. Today, SAT platforms rely on simulating phishing threats using pre-defined templates of common attacks.
This is a fair approach for generic phishing awareness (e.g. beware the fake O365 password login page), but it’s ineffective at driving awareness and preparing employees for the highly targeted phishing threats they’re increasingly likely to see today (e.g. an email impersonating their CFO with a spoofed domain).
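To make the idea of an automated, in-the-moment nudge concrete, here is a minimal sketch in Python. Everything in it is a hypothetical assumption for illustration: the `PERSONAL_DOMAINS` list, the `needs_nudge` function, and the keyword rule are invented for this example, and real products like Tessian infer risk with machine learning rather than hard-coded rules.

```python
# Illustrative sketch only. The domain list and "customer" keyword rule are
# hypothetical stand-ins for what a real product would learn from behavior.

PERSONAL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}


def needs_nudge(recipient: str, attachment_names: list[str]) -> bool:
    """Flag an outbound email for an in-the-moment warning if it appears
    to send customer data to a personal mailbox."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    looks_sensitive = any("customer" in name.lower() for name in attachment_names)
    return domain in PERSONAL_DOMAINS and looks_sensitive
```

A rule like this would fire before the email leaves the outbox, which is the moment the educational message is most likely to stick.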
SAT 2.0 delivers real ROI

SAT 2.0 can actually save your company money by preventing the incidents of human error that result in serious data breaches. What’s more, SAT platforms are rich in data and insights that can feed other security systems and tools, and the SAT platform itself, to provide adaptive protection for employees. For example, if my SAT platform tells me that an employee has a 50% higher propensity to click malicious links in phishing emails, I can use that data as an input to my email security products to strip links from the emails that employee receives by default, actively stopping the threat from happening.

It’s also crucial to expand the scope of SAT beyond just phishing emails. We need to educate our employees about all of the other risks they face when handling digital systems and data: things like misdirected emails and attachments, sensitive data being shared with personal or unauthorized accounts, data protection, and PII.
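The link-stripping example above can be sketched as a simple policy function. The baseline, the 50% threshold, and the policy names are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical policy sketch: feed phishing-simulation click rates from a
# SAT platform into an inbound email control. All names and numbers are
# assumptions for illustration.

RISK_BASELINE = 0.10  # assumed org-wide average simulation click rate


def link_policy(employee_click_rate: float) -> str:
    """Pick an inbound link policy from SAT simulation data.

    An employee with a 50%-or-higher propensity to click, relative to the
    baseline, gets links stripped by default; everyone else gets a warning
    banner instead.
    """
    if employee_click_rate >= RISK_BASELINE * 1.5:
        return "strip_links"
    return "warn_banner"
```

The point of the sketch is the data flow, not the rule itself: intelligence from the SAT platform becomes an input to another control, rather than sitting unused in a standalone product.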
SAT 2.0 is a win-win for your business and your employees

The shift to SAT 2.0 is a win-win for both the enterprise and employees.

Lower costs and real ROI for the business

Today, SAT is one of the most expensive parts of an enterprise’s security program, but it doesn’t have to be this way. Delivering small nuggets of education to employees when they’re needed most means no more wasted business hours. Not only that, but by detecting risky behavior in the moment, SAT 2.0 can meaningfully reduce data breaches and deliver real ROI to security teams. Imagine being able to report to the board that your SAT 2.0 program has actually saved your company money.

SAT 2.0 builds employees’ confidence

A recent study about why fear appeals don’t work in cybersecurity revealed that the most important thing for driving behavior change in your employees is helping them build self-efficacy: a belief that they are equipped with awareness of the threats and knowledge of what to do if something goes wrong. This not only hones their security reflexes, but also increases their overall satisfaction at work, as they spend less time in boring training sessions and feel more empowered to do their jobs securely.

3 easy steps to SAT 2.0

A training program that stops threats, not business or employee productivity, might sound like a pipe dream, but it doesn’t have to be. SAT 2.0 is as easy as 1, 2, 3.

Step 1: Leverage your SAT data to build a Human Risk Score

Your SAT platform likely holds rich data about your employees and their security awareness that you’re not currently leveraging. Start by using the output of your SAT platform (e.g. test results, completion times, confidence scores, phishing simulation click-through rates) to build a Human Risk Score for each employee.
This gives you a baseline understanding of who your riskiest and safest employees are, and offers insight into their specific security strengths and weaknesses. You can also enrich this score with external data sources, like your data breach register or data from the other security tools you use.

Step 2: Tailor your SAT program to the needs of departments or employees

Using the Human Risk Scores you’ve calculated, you can then start to tailor your SAT program to the needs of individual employees or particular departments. If you know your finance team faces a greater threat from phishing and produces higher click-through rates on simulations, you might want to double down on phishing simulation training. If you know your sales team has problems with sending customer data to the wrong place, you may want to focus training there. Your employees have a finite attention span, so make sure your SAT program spends that attention on the most critical things.

Step 3: Connect your SAT platform to your other security infrastructure

Use the data and insights from your SAT platform and your Human Risk Scores as input to the other security infrastructure you use. You might choose tighter DLP controls for employees with a high Human Risk Score, or stricter inbound email security controls for people with a higher failure rate on phishing simulations.

Want an even easier path to SAT 2.0? Invest in a Human Layer Security platform

Tessian’s Human Layer Security platform can help you achieve all of this automatically and transition your organization into the brave new world of SAT 2.0. Using stateful machine learning, Tessian builds an understanding of historical employee security behavior to automatically map Human Risk Scores, remediate security threats caused by people, and nudge employees toward better security behavior through in-the-moment notifications and alerts.
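The three steps above can be sketched in a few lines of Python. The field names, weights, and tier thresholds are purely illustrative assumptions; a real Human Risk Score would be derived from your own platform’s data and your own risk appetite.

```python
# Hypothetical sketch of Steps 1 and 3: score each employee from SAT
# outputs, then map the score onto tighter or looser controls. Weights,
# thresholds, and control names are assumptions, not a documented formula.
from dataclasses import dataclass


@dataclass
class SatRecord:
    test_score: float       # 0..1, knowledge-check results (higher is better)
    completion_rate: float  # 0..1, share of assigned training completed
    sim_click_rate: float   # 0..1, phishing simulation click-through rate


def human_risk_score(r: SatRecord) -> float:
    """Step 1: combine SAT outputs into a 0-100 score (higher = riskier).

    Clicking simulated phish is weighted most heavily; the weights sum to 1.
    """
    risk = (0.5 * r.sim_click_rate
            + 0.3 * (1.0 - r.test_score)
            + 0.2 * (1.0 - r.completion_rate))
    return round(100.0 * risk, 1)


def controls_for(score: float) -> list[str]:
    """Step 3: map a risk score onto security controls (illustrative tiers)."""
    if score >= 60.0:
        return ["strict_dlp", "strip_inbound_links", "extra_training"]
    if score >= 30.0:
        return ["standard_dlp", "targeted_training"]
    return ["standard_dlp"]
```

Step 2, tailoring the training itself, would use the same scores grouped by department to decide where to spend employees’ limited attention.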
SAT is not “just a people problem”

We so often hear in the security community that “the focus is too much on technology when it needs to be on people.” I disagree. We need to ask more of technology to deliver more impact with SAT.

SAT 1.0 is reminiscent of a time when, to legally drive a car, all you had to do was pass a driving test. You’d been trained! The box had been checked! Then all you had to do was do the right thing 100% of the time and you’d be fine.

But that isn’t what happened. People inevitably made mistakes, and it cost them their lives.

Today, I still have to pass my driving test to get behind the wheel of a car. But now our cars are loaded with assistive technology to keep us safe doing the most dangerous thing we do in our daily lives: seatbelts, anti-lock brakes, airbags, notifications that tell me when I’m driving too fast, when I lose grip, or when I’m about to run out of fuel.

However hard we try, however good the training, you can never train away the risk of human error. Car companies realized this over 60 years ago. We need to leverage technology to protect people in the moment they need it most. This is the same shift we need to drive (excuse the pun) in SAT.

One day we’ll have self-driving cars with no driving tests. Maybe we’ll have self-driving cybersecurity with no need for SAT. But until then, give your employees the airbags, the seatbelt, and the anti-lock brakes, not just the driving test and a “good luck.”