
Request a Demo of Tessian Today.
Automatically stop data breaches and security threats caused by employees on email. Powered by machine learning, Tessian detects anomalies in real time, integrates seamlessly with your email environment within minutes, and starts protecting you in a day. It also provides unparalleled visibility into human security risks so you can remediate threats and ensure compliance.
Human Layer Security

90% of data breaches are caused by human error. Stay up to date on the latest tips, guides, and industry news on Human Layer Security.

Human Layer Security
June Human Layer Security Summit: Meet the Speakers
By Maddie Rosenthal
17 May 2021
Calling all cybersecurity trailblazers! Tessian’s quarterly flagship event is back on June 3 with our best agenda yet. Hundreds of security, compliance, and business leaders have already saved their spot to learn more about human-centric security strategies, get first-hand insights from industry heavyweights, and engage with peers through Q&As and a live chat function. What’s on the agenda? With over a dozen speakers across six sessions, we’ll be exploring:
How to scale your enterprise security programs
What CISOs can do to prevent the next SolarWinds attack
How to prove the ROI of security and effectively communicate value to different stakeholders
And much more…
Keep reading to learn more about our speakers and partners.
Meet the speakers
While we don’t want to give all the surprises away just yet, we can share a sneak peek of seven speakers joining us on June 3. Make sure to follow us on LinkedIn and Twitter and subscribe to our weekly newsletter for the latest updates, including detailed information about each session.
Bobby Ford, Senior Vice President and CSO at HP: Bobby – who has joined us as a speaker once before – has an incredible wealth of experience. He’s held senior security leadership titles at organizations across industries, including government, consumer goods, healthcare, and now technology. And, having secured organizations with hundreds of thousands of employees, he truly knows how to implement successful security strategies at the enterprise level.
Punit Rajpara, Global Head of IT and Business Systems at GoCardless: Having led IT and security teams at Uber, WeWork, and now GoCardless, Punit has a proven track record of scaling security at hyper-growth companies. His goal? To ensure security is a business enabler, not a blocker, and to change security’s reputation amongst the C-suite and employees. He’ll be sharing insights into how he delivers IT as a partnership, and a service to the business.
Ian Bishop-Laggett, CISO at Schroders Personal Wealth: Now leading InfoSec at Schroders Personal Wealth, Ian has been working in security roles in financial services for over 10 years. That means he’s in the perfect position to talk about risks unique to the industry and the specific challenges human layer risks pose.
Jerry Perullo, CISO at ICE | New York Stock Exchange: With over 25 years of experience in cybersecurity, Jerry has an impressive resume. He’s served as the CISO of NYSE: ICE for 20 years, currently sits on the Board of Directors for FS-ISAC and the Analysis and Resilience Center (ARC) for Systemic Risk, and is the Founding Vice-Chair of the Global Exchange Cybersecurity Working Group under the World Federation of Exchanges.
Katerina Sibinovska, CISO at Intertrust Group: Katerina has a background in law, a passion for tech, and holds a number of IT and compliance certifications, including the CRISC and the GDPR F. Before stepping up to CISO at Intertrust Group, she was the Head of IT Change & Compliance, and she has a proven track record of balancing security with business operations and strategy.
James McQuiggan, Security Awareness Advocate at KnowBe4: In addition to being a Security Awareness Advocate at KnowBe4, where he trains and engages with employees and security leaders about the importance of security awareness training, James also teaches Identity Security at a collegiate level and is the Education Director for the Florida Cyber Alliance. On June 3, he’ll be identifying key strategies to help you improve your training programs.
Samy Kamkar, Renowned Ethical Hacker: As a teenager, Samy released one of the fastest-spreading computer viruses of all time. Now, he’s a compassionate advocate for young hackers, a whistleblower, and a privacy and security researcher.
To learn more about our speakers and their approaches to cybersecurity, save your spot now and join a community of thousands on June 3. If you can’t make it on the day – don’t worry. You’ll be able to access all the sessions on-demand if you sign up. Want to get a sneak peek of what you can expect on June 3? You can watch sessions from previous Human Layer Security Summits on-demand here.
Human Layer Security, Spear Phishing
Must-Know Phishing Statistics: Updated 2021
By Maddie Rosenthal
17 May 2021
Looking for something more visual? Check out this infographic with key statistics.
The frequency of phishing attacks
According to the FBI, phishing was the most common type of cybercrime in 2020—and phishing incidents nearly doubled in frequency, from 114,702 incidents in 2019 to 241,324 incidents in 2020. The FBI said there were more than 11 times as many phishing complaints in 2020 compared to 2016. According to Verizon’s 2021 Data Breach Investigations Report (DBIR), phishing is the top “action variety” seen in breaches in the last year, and 43% of breaches involved phishing and/or pretexting. The frequency of attacks varies industry-by-industry (click here to jump to key statistics about the most phished industries). But 75% of organizations around the world experienced some kind of phishing attack in 2020, 35% experienced spear phishing, and 65% faced BEC attacks. But there’s a difference between an attempt and a successful attack. 74% of organizations in the United States experienced a successful phishing attack. This is 30% higher than the global average, and 14% higher than last year. ESET’s Threat Report reveals that malicious email detections rose 9% between Q2 and Q3 2020. This followed a 9% rise from Q1 to Q2 2020. ⚡ Want to learn how to prevent successful attacks? Check out this page all about BEC prevention.
How phishing attacks are delivered
96% of phishing attacks arrive by email. Another 3% are carried out through malicious websites and just 1% via phone. When it’s done over the telephone, we call it vishing, and when it’s done via text message, we call it smishing. According to SonicWall’s 2020 Cyber Threat Report, in 2019 PDFs and Microsoft Office files (sent via email) were the delivery vehicles of choice for cybercriminals. Why? Because these files are universally trusted in the modern workplace. When it comes to targeted attacks, 65% of active groups relied on spear phishing as the primary infection vector. This is followed by watering hole websites (23%), trojanized software updates (5%), web server exploits (2%), and data storage devices (1%).
The most common subject lines
According to Symantec’s 2019 Internet Security Threat Report (ISTR), the top five subject lines for business email compromise (BEC) attacks are:
Urgent
Request
Important
Payment
Attention
Analysis of real-world phishing emails revealed these to be the most common subject lines in Q4 2020:
IT: Annual Asset Inventory
Changes to your health benefits
Twitter: Security alert: new or unusual Twitter login
Amazon: Action Required | Your Amazon Prime Membership has been declined
Zoom: Scheduled Meeting Error
Google Pay: Payment sent
Stimulus Cancellation Request Approved
Microsoft 365: Action needed: update the address for your Xbox Game Pass for Console subscription
RingCentral is coming!
Workday: Reminder: Important Security Upgrade Required
The prevalence of phishing websites Google Safe Browsing uncovers unsafe URLs across the web. The latest data shows a world-wide-web rife with phishing websites. Since 2016, phishing has replaced malware as the leading type of unsafe website. While there were once twice as many malware sites as phishing sites, there are now nearly 75 times as many phishing sites as there are malware sites. Google has registered 2,145,013 phishing sites as of Jan 17, 2021. This is up from 1,690,000 on Jan 19, 2020 (up 27% over 12 months). This compares to malware sites rising from 21,803 to 28,803 over the same period (up 32%). Here you can see how phishing sites have rocketed ahead of malware sites over the years.
Further reading: ⚡ How to Identify a Malicious Website
The most common malicious attachments
Many phishing emails contain malicious payloads such as malware files. ESET’s Threat Report shows that in Q3 2020, these were the most common types of malicious files attached to phishing emails:
Windows executables (74%)
Script files (11%)
Office documents (5%)
Compressed archives (4%)
PDF documents (2%)
Java files (2%)
Batch files (2%)
Shortcuts (>1%)
Android executables (>1%)
You can learn more about malicious payloads here.
The data that’s compromised in phishing attacks
The top three “types” of data that are compromised in a phishing attack are:
Credentials (passwords, usernames, PIN numbers)
Personal data (name, address, email address)
Medical data (treatment information, insurance claims)
When asked about the impact of successful phishing attacks, security leaders around the world cited the following consequences:
60% of organizations lost data
52% of organizations had credentials or accounts compromised
47% of organizations were infected with ransomware
29% of organizations were infected with malware
18% of organizations experienced financial losses
The cost of a breach
According to IBM’s Cost of a Data Breach Report, the average cost per compromised record has steadily increased over the last three years. In 2019, the cost was $150. For some context, 5.2 million records were stolen in Marriott’s most recent breach. That means the cost of the breach could amount to $780 million. But, the average breach costs organizations $3.92 million. This number will generally be higher in larger organizations and lower in smaller organizations. According to Verizon, organizations also see a 5% drop in stock price in the 6 months following a breach. Losses from business email compromise (BEC) have skyrocketed over the last year. The FBI’s Internet Crime Report shows that in 2020, BEC scammers made over $1.8 billion—far more than via any other type of cybercrime. And this number is only increasing. According to the Anti-Phishing Working Group’s Phishing Activity Trends Report, the average wire-transfer loss from BEC attacks in the second quarter of 2020 was $80,183, up from $54,000 in the first quarter. This cost can be broken down into several different categories, including:
Lost hours from employees
Remediation
Incident response
Damaged reputation
Lost intellectual property
Direct monetary losses
Compliance fines
Lost revenue
Legal fees
Costs associated with remediation generally account for the largest chunk of the total. Importantly, these costs can be mitigated by cybersecurity policies, procedures, technology, and training. Artificial intelligence platforms can save organizations $8.97 per record.
The most targeted industries
Last year, Public Administration saw the most breaches from social engineering (which caused 69% of the industry’s breaches), followed by Mining and Utilities and Professional Services. But, according to another report, employees working in Wholesale Trade are the most frequently targeted by phishing attacks, with 1 in every 22 users being targeted by a phishing email last year. According to yet another data set, the most phished industries vary by company size. Nonetheless, it’s clear Manufacturing and Healthcare are among the highest-risk industries.
The industries most at risk in companies with 1-249 employees are:
Healthcare & Pharmaceuticals
Education
Manufacturing
The industries most at risk in companies with 250-999 employees are:
Construction
Healthcare & Pharmaceuticals
Business Services
The industries most at risk in companies with 1,000+ employees are:
Technology
Healthcare & Pharmaceuticals
Manufacturing
The most impersonated brands
New research found the brands below to be the most impersonated brands used in phishing attacks throughout Q4 2020, in order of the total number of instances the brand appeared in phishing attacks:
Microsoft (related to 43% of all brand phishing attempts globally)
DHL (18%)
LinkedIn (6%)
Amazon (5%)
Rakuten (4%)
IKEA (3%)
Google (2%)
PayPal (2%)
Chase (2%)
Yahoo (1%)
The common factor between all of these consumer brands? They’re trusted and frequently communicate with their customers via email. Whether we’re asked to confirm credit card details, our home address, or our password, we often think nothing of it and willingly hand over this sensitive information.
Facts and figures related to COVID-19 scams
Because hackers tend to take advantage of key calendar moments (like Tax Day or the 2020 Census) and times of general uncertainty, individuals and organizations saw a spike in COVID-19 phishing attacks starting in March. But, according to one report, COVID-19 related scams reached their peak in the third and fourth weeks of April. And, it looks like hackers were laser-focused on money. Incidents involving payment and invoice fraud increased by 112% between Q1 2020 and Q2 2020. It makes sense, then, that finance employees were among the most frequently targeted employees. In fact, attacks on finance employees increased by 87%, while attacks on the C-suite decreased by 37%.
Further reading:
⚡ COVID-19: Screenshots of Phishing Emails
⚡ How Hackers Are Exploiting the COVID-19 Vaccine Rollout
⚡ Coronavirus and Cybersecurity: How to Stay Safe From Phishing Attacks
Phishing and remote working
According to Microsoft’s New Future of Work Report: 80% of security professionals surveyed said they had encountered increased security threats since the shift to remote work began. Of these, 62% said phishing campaigns had increased more than any other type of threat. Employees said they believed IT departments would be able to mitigate these phishing attacks if they had been working in the office.
Further reading:
⚡ The Future of Hybrid Work
⚡ 7 Concerns Security Leaders Have About Permanent Remote Working
What can individuals and organizations do to prevent being targeted by phishing attacks?
While you can’t stop hackers from sending phishing or spear phishing emails, you can make sure you (and your employees) are prepared if and when one is received. You should start with training. Educate employees about the key characteristics of a phishing email and remind them to be scrupulous and inspect emails, attachments, and links before taking any further action.
Review the email address of senders and look out for impersonations of trusted brands or people (check out our blog CEO Fraud Email Attacks: How to Recognize & Block Emails that Impersonate Executives for more information). A minimal sketch of this kind of lookalike-domain check appears at the end of this section.
Always inspect URLs in emails for legitimacy by hovering over them before clicking.
Beware of URL redirects and pay attention to subtle differences in website content.
Genuine brands and professionals generally won’t ask you to reply divulging sensitive personal information. If you’ve been prompted to, investigate and contact the brand or person directly, rather than hitting reply.
But humans shouldn’t be the last line of defense. That’s why organizations need to invest in technology and other solutions to prevent successful phishing attacks. Given the frequency of attacks year-on-year, it’s clear that spam filters, antivirus software, and other legacy security solutions aren’t enough. That’s where Tessian comes in. By learning from historical email data, Tessian’s machine learning algorithms can understand specific user relationships and the context behind each email. This allows Tessian Defender to not only detect, but also prevent a wide range of impersonations, ranging from more obvious, payload-based attacks to subtle, socially engineered ones.
Further reading: ⚡ Tessian Defender: Product Data Sheet
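To make the “review the sender’s address” advice above more concrete, here is a minimal, illustrative sketch of a lookalike-domain check. It is not how Tessian Defender works; it simply uses Python’s standard library to flag sender domains that closely resemble, but don’t exactly match, a list of trusted domains (the domain list and similarity threshold are hypothetical).

```python
from difflib import SequenceMatcher

# Hypothetical list of domains this organization trusts.
TRUSTED_DOMAINS = {"microsoft.com", "dhl.com", "amazon.com", "yourcompany.com"}

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are similar to, but not exactly, a trusted domain."""
    sender_domain = sender_domain.lower().strip()
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: not a lookalike of the domain itself
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return True  # close, but not identical: possible impersonation
    return False

if __name__ == "__main__":
    for domain in ["rnicrosoft.com", "amaz0n.com", "microsoft.com", "example.org"]:
        print(domain, "->", "suspicious" if is_lookalike(domain) else "ok")
```

Real products weigh many more signals (display-name spoofing, reply-to mismatches, relationship history), but even a small heuristic like this shows why a domain such as “rnicrosoft.com” should never be trusted at a glance.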
Human Layer Security, DLP, Compliance, Data Exfiltration
The State of Data Loss Prevention in the Financial Services Sector
By Maddie Rosenthal
10 May 2021
In our latest research report, we took a deep dive into Data Loss Prevention in Financial Services and revealed that data loss incidents are happening up to 38x more frequently than IT leaders currently estimate. And, while data loss is a big problem across all industries, it’s especially problematic in those that handle highly sensitive data. One of those industries is Financial Services. Before we dive into how frequently data loss incidents are happening and why, let’s define what exactly a data loss incident is in the context of this report. We focused on outbound data loss on email. This could be either intentional data exfiltration by a disgruntled or financially motivated employee, or it could be accidental data loss. Here’s what we found out.
The majority of employees have accidentally or intentionally exfiltrated data
Tessian platform data shows that in organizations with 1,000 employees, 800 emails are sent to the wrong person every year. This is 1.6x more than IT leaders estimated. Likewise, in organizations of the same size, 27,500 emails containing company data are sent to personal accounts every year. We call these unauthorized emails, and IT leaders estimated just 720 are sent annually. That’s a big difference.
But, what about in this particular sector? Over half (57%) of Financial Services professionals across the US and the UK admit to sending at least one misdirected email, and 67% say they’ve sent unauthorized emails. But, when you isolate the US employees, the percentage jumps: 91% of Financial Services professionals in the US say they’ve sent company data to their personal accounts. And, because Financial Services is highly competitive, professionals working in this industry are among the most likely to download, save, or send company data to personal accounts before leaving or after being dismissed from a job, with 47% of employees saying they’ve done it. To really understand the consequences of incidents like this, you have to consider the type of data this industry handles and the compliance standards and data privacy regulations they’re obligated to satisfy. Every day, professionals working in Financial Services send and receive:
Bank Account Numbers
Loan Account Numbers
Credit/Debit Card Numbers
Social Security Numbers
M&A Data
In order to protect that data, they must comply with regional and industry-specific laws, including:
GLBA
COPPA
FACTA
FDIC 370
HIPAA
CCPA
GDPR
So, what happens if there’s a breach? The implications are far-reaching, ranging from lost customer trust and a damaged reputation to revenue loss and regulatory fines. For more information on these and other compliance standards, visit our Compliance Hub.
Remote working is making Data Loss Prevention (DLP) more challenging
The sudden transition from office to home has presented a number of challenges to both employees and security, IT, and compliance leaders. To start, 65% of professionals working in Financial Services say they feel less secure working from home than they do in the office. It makes sense. People aren’t working from their normal workstations and likely don’t have the same equipment. A further 56% say they’re less likely to follow safe data practices when working remotely. Why? The most common reason was that IT isn’t watching, followed by being distracted. Most of us can relate.
When working remotely – especially from home – people have other responsibilities and distractions, like childcare and roommates. And, the truth is, the average employee is just trying to do their job, not be a champion of cybersecurity. That’s why it’s so important that security and IT teams equip employees with the solutions they need to work securely, wherever they are.
Current solutions aren’t empowering employees to work securely
Training, policies, and rule-based technology all have a place in security strategies. But, based on our research, these solutions alone aren’t working. In fact, 64% of professionals working in Financial Services say they’ll find a workaround to security software or policies if they impede productivity. This is 10% higher than the average across all industries.
How does Tessian prevent data loss on email?
Tessian uses machine learning to address the problem of accidental or deliberate data loss by applying human understanding to email behavior. Our machine learning models analyze email data to understand how people work and communicate. They have been trained on more than two billion emails, and they continue to adapt and learn from your own data as human relationships evolve over time. This enables Tessian Guardian to look at email communications and determine in real time if particular emails look like they’re about to be sent to the wrong person. Tessian Enforcer, meanwhile, can identify when sensitive data is about to be sent to an unsafe place outside an organization’s email network. Finally, Tessian Defender detects and prevents inbound attacks like spear phishing, account takeover (ATO), and CEO fraud. Enforcer and Guardian do all of this silently in the background. That means workflows aren’t disrupted and there’s no impact on productivity. Employees can do what they were hired to do without security getting in the way. Tessian bolsters training, complements rule-based solutions, and helps reinforce the policies security teams have worked so hard to create and embed in their organizations. That’s why so many Financial Services firms have adopted Tessian’s technology, including:
Man Group
Evercore
BDO
Affirm
Armstrong Watson
JTC
DC Advisory
And many more
Human Layer Security, Spear Phishing
Phishing Awareness Training: How Effective is Security Training?
By Maddie Rosenthal
30 April 2021
Phishing awareness training is an essential part of any cybersecurity strategy. But is it enough on its own? This article will look at the pros and cons of phishing awareness training—and consider how you can make your security program more effective. Still wondering how big of a problem phishing really is? Check out this collection of 50+ phishing statistics. Don’t feel like scrolling? For more information about each point, you can click the text below to jump down on the page. 
✅ Pros of phishing awareness training
Employees learn how to spot phishing attacks
While people working in security, IT, or compliance are all too familiar with phishing, spear phishing, and social engineering, the average employee isn’t. The reality is, they might not have even heard of these terms, let alone know how to identify them. But, by showing employees examples of attacks – including the subject lines to watch out for, a high-level overview of domain impersonation, and the types of requests hackers will generally make – they’ll immediately be better placed to identify what is and isn’t a phishing attack. Looking for resources to help train your employees? Check out this blog with a shareable PDF. It includes examples of phishing attacks and reasons why each email is suspicious.
It’s a good chance to remind employees of existing policies and procedures
Enabling employees to identify phishing attacks is important. But you have to make sure they know what to do if and when they receive one, too. Training is the perfect opportunity to remind employees of existing policies and procedures. For example, who to report attacks to within the security or IT team. Training should also reinforce the importance of other policies, specifically around creating strong passwords, storing them safely, and updating them frequently. After all, credentials are the number one “type” of data hackers harvest in phishing attacks.
Security leaders can identify particularly risky and at-risk employees
By getting teams across departments together for training sessions and phishing simulations, security leaders will get a bird’s-eye view of employee behavior. Are certain departments or individuals more likely to click a malicious link than others? Are senior executives skipping training sessions? Are new starters struggling to pass post-training assessments? These observations will help security leaders stay ahead of security incidents, can inform subsequent training sessions, and can help pinpoint gaps in the overall security strategy.
Training satisfies compliance standards
While you can read more about various compliance standards – including GDPR, CCPA, HIPAA, and GLBA – on our compliance hub, they all include a clause that outlines the importance of implementing proper data security practices. What are “proper data security practices”? This criterion has – for the most part – not been formally defined. But phishing awareness training is certainly a step in the right direction and demonstrates a concerted effort to secure data company-wide.
It helps organizations foster a strong security culture
In the last several years (due in part to increased regulation), cybersecurity has become business-critical. But it takes a village to keep systems and data safe, which means accountability is required from everyone to make policies, procedures, and tech solutions truly effective. That’s why creating and maintaining a strong security culture is so important. While this is easier said than done, training sessions can help encourage employees – whether in finance or sales – to become less passive in their roles as they relate to cybersecurity, especially when gamification is used to drive engagement. You can read more about creating a positive security culture on our blog.
❌ Cons of phishing awareness training
Training alone can’t prevent human error
People make mistakes. Even if you hold a three-hour-long cybersecurity training session every day of the week, you’ll never be able to eliminate the possibility of human error. Don’t believe us? Take it from the U.K.’s National Cyber Security Centre (NCSC): “Spotting phishing emails is hard, and spear phishing is even harder to detect. Even experts from the NCSC struggle. The advice given in many training packages, based on standard warnings and signs, will help your users spot some phishing emails, but they cannot teach everyone to spot all phishing emails.” That’s right, even the U.K.’s top cybersecurity experts can’t always spot a phishing scam. Social engineering incidents—attacks that play on people’s emotions and undermine their trust—are becoming increasingly sophisticated. For example, using account takeover techniques, cybercriminals can hack your vendors’ email accounts and intercept email conversations with your employees. The signs of an account takeover attack, such as minor changes in the sender’s writing style, are imperceptible to humans.
Phishing awareness training is always one step behind
Hackers think and move quickly and are constantly crafting more sophisticated attacks to evade detection. That means that training that was relevant three months ago may not be today. In the last year, we’ve seen bad actors leverage COVID-19, Tax Day, furlough schemes, unemployment checks, and the vaccine roll-out to trick unsuspecting targets. What could be next?
Training is expensive
According to Mark Logsdon, Head of Cyber Assurance and Oversight at Prudential, there are three fundamental flaws in training: it’s boring, often irrelevant, and expensive. We’ll cover the first two below but, for now, let’s focus on the cost. Needless to say, the cost of training and simulation software varies vendor-by-vendor. But the solution itself is far from the only cost to consider. What about lost productivity? Imagine you have a 1,000-person organization and, as a part of an aggressive inbound strategy, you’ve opted to hold training every quarter. Training lasts, on average, three hours. That’s 12,000 lost hours a year. While – yes – a successful attack would cost more, we can’t forget that training alone doesn’t work. (See point 1: Phishing awareness training can’t prevent human error.)
Phishing awareness training isn’t targeted (or engaging) enough
Going back to what Mark Logsdon said: training is boring and often irrelevant. It’s easy to see why. You can’t apply one lesson to an entire organization – whether it’s 20 people or 20,000 – and expect it to stick. It has to be targeted based on age, department, and tech-literacy. Age is especially important. According to Tessian’s latest research, nearly three-quarters of respondents who admitted to clicking a phishing email were aged between 18 and 40 years old. In comparison, just 8% of people over 51 said they had done the same. However, the older generation was also the least likely to know what a phishing email was. Jeff Hancock, the Harry and Norman Chandler Professor of Communication at Stanford University and an expert in trust and deception, explained how tailored training programs could help.
Should I create a phishing awareness training program?
The short answer: “Yes”. These programs can help teach employees what phishing is, how to spot phishing emails, what to do if they’re targeted, and the implications of falling for an attack. But, as we’ve said, training isn’t a silver bullet. It will curb the problem, but it won’t prevent mistakes from happening. That’s why security leaders need to bolster training with technology that detects and prevents inbound threats. That way, employees aren’t the last line of defense. And, given the frequency of attacks year-on-year, it’s clear that spam filters, antivirus software, and other legacy security solutions aren’t enough. That’s where Tessian comes in.
How does Tessian detect and prevent targeted phishing attacks?
Tessian fills a critical gap in security strategies that SEGs, spam filters, and training alone can’t fill. By learning from historical email data, Tessian’s machine learning algorithms can understand specific user relationships and the context behind each email. This allows Tessian Defender to detect a wide range of impersonations, ranging from more obvious, payload-based attacks to difficult-to-spot, socially engineered ones like CEO fraud and business email compromise. Once detected, real-time warnings are triggered and explain exactly why the email was flagged, including specific information from the email. Best of all? These warnings are written in plain, easy-to-understand language.
These in-the-moment warnings reinforce training and policies and help employees improve their security reflexes over time.  To learn more about how tools like Tessian Defender can prevent spear phishing attacks, speak to one of our experts and request a demo today. Not ready for a demo? Sign-up for our weekly blog digest to get more cybersecurity content, straight to your inbox.  Just fill out the form below.
Human Layer Security, DLP
What is Email DLP? Overview of DLP on Email
15 April 2021
Data loss prevention (DLP) and insider threat management are both top priorities for security leaders to protect data and meet compliance requirements.  And, while there are literally thousands of threat vectors – from devices to file sharing applications to physical security – email is the threat vector security leaders are most concerned about protecting. It makes sense, especially with remote or hybrid working environments. According to Tessian platform data, employees send nearly 400 emails a month. When you think about the total for an organization with 1,000+ employees, that’s 400,000 emails, many of which contain sensitive data. That’s 400,000 opportunities for a data breach.  The solution? Email data loss prevention.
This article will explain how email DLP works, consider the different types of email DLP, and help you decide whether you need to consider it as a part of your overall data protection strategy.  Looking for information about DLP more broadly? Check out this article instead: A Complete Overview of Data Loss Prevention. 
➡ What is email data loss prevention?
Essentially, email DLP tools monitor a company’s email communications to determine whether data is at risk of loss or theft. There are several methods of email DLP, which we’ll look at below. But they all attempt to:
Monitor data sent and received via email
Detect suspicious email activity
Flag or block email activity that leads to data loss
➡ Do I need email data loss prevention?
Unless you’re working with a limitless security budget (lucky you!), it’s important to prioritize your company’s resources and target areas that represent key security vulnerabilities. Implementing security controls is mandatory under data protection laws and cybersecurity frameworks, like the General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and Health Insurance Portability and Accountability Act (HIPAA). And there’s a good reason to prioritize preventing data loss on email. As we’ve said, email is the threat vector security leaders are most concerned about. We’ll explain why.
📩 Inbound email security threats
How can malicious external actors use email to steal data? There are many methods.
Phishing—social engineering attacks designed to trick your employees into handing over sensitive data. According to the FBI, phishing is the leading cause of internet crime, and the number of phishing incidents doubled in 2020.
Spear phishing—like phishing, but targeted at a specific individual. Spear phishing attacks are more sophisticated than the “bulk” phishing attacks many employees are used to.
Malware—phishing emails can contain a “malicious payload”, such as a trojan, that installs itself on a user’s device and exfiltrates or corrupts data.
Email DLP can help prevent criminals from exfiltrating your company’s data.
🏢 Internal email security threats
While it’s crucial to guard against external security threats, security teams are increasingly concerned with protecting company data from internal actors. There are two types of internal security threats: accidental and malicious.
🙈 Accidental data loss
Accidents happen. Don’t believe us? Human error is the leading cause of data breaches. Tessian platform data shows that in organizations with 1,000 or more employees, people send an average of 800 misdirected emails (emails sent to the wrong recipient) every year. That’s two every day. How can a misdirected email cause data loss? Misspelling the recipient’s address, attaching the wrong file, accidental “reply-all”—any of these common issues can lead to sensitive company data being emailed to the wrong person. And remember—if the email contains information about an individual (personal data), this might be a data breach. Misdirected emails are the top cause of information security incidents according to the UK’s data regulator. We can’t forget that misattached files are also a big problem. In fact, nearly half (48%) of employees say they’ve attached the wrong file to an email. Worse still, according to survey data:
42% of documents sent in error contained company research and data
39% contained security information like passwords and passcodes
38% contained financial information and client information
36% contained employee data
But, not all data loss incidents are an accident.
🕵 Insider threats
Employees or contractors can steal company data from the inside. While less common than accidental data loss, employees that steal data—or simply overstep the mark—are more common than you might think.
Some employees steal company data to gain a competitive advantage in a new venture—or for the benefit of a third party. We covered some of these incidents in our article, 11 Real Insider Threats. But more commonly, employees are breaking the rules for less nefarious reasons. For example, employees send company data to a personal email address for convenience—say, to work on a project at home or on another device. Sending unauthorized emails is a security risk, though. Tessian platform data shows that it occurs over 27,500 times per year in companies with 1,000 employees or more. And, while – yes – it’s often not done maliciously, the consequences are no less dire, especially in highly regulated industries. So, how do you prevent these things from happening?
➡ Email DLP solutions to consider
Research shows that the majority of security leaders say that security awareness training and the implementation of policies and procedures are the best ways to prevent data loss. And both are very important. But – as well-intentioned as most employees are – mistakes still happen despite frequent training and despite stringent policies. That means a more holistic approach to email DLP – including technology – is your best bet. Broadly, there are two “types” of DLP technology: rule-based DLP and machine learning DLP.
📏 Rule-based email DLP
Using rule-based DLP, IT administrators can tag sensitive domains, activities, or types of data. When the DLP software detects blacklisted data or behavior, it can flag it or block it. Like training and policies, rule-based DLP certainly has its place in security strategies. But there are limitations to rule-based DLP. This “data-centric” model does not fully account for the range of behavior that is appropriate in different situations. For example, say an IT administrator asks email DLP software to block all correspondence arriving from “freemail” domains (such as gmail.com), which are often used to launch cyberattacks. What happens when you need to communicate with a contractor or customer using a freemail address? (A minimal code sketch contrasting this kind of rule with a relationship-based check appears at the end of this section.) What’s more, rule-based DLP is very admin-intensive. Creating and managing rules and analyzing events takes a lot of time, which isn’t ideal for thinly-stretched security teams. Want to learn more? We explore situations where rule-based DLP falls short. For more information, read The Drawbacks of Traditional DLP on Email.
🤖 Machine learning email DLP
Machine learning email DLP is a “human-centric” approach. By learning how every member of your company communicates, machine learning DLP understands the context behind every human interaction with data. How does machine learning email DLP work? This DLP model processes large amounts of data and learns your employees’ communication patterns. The software understands when a communication is anomalous or suspicious by constantly reclassifying data according to the relationship between a business and customers, suppliers, and other third parties. No rules required.
This type of DLP solution enables employees to work unimpeded until something goes wrong, and makes preventing data loss effortless for security teams.
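To make the contrast concrete, here is a minimal, illustrative Python sketch of the two approaches described above: a static freemail rule versus a simple relationship-history check. It isn’t Tessian’s implementation (Tessian’s models learn from billions of emails); the domain list, history store, and example addresses are hypothetical.

```python
# Hypothetical freemail blocklist for the rule-based approach.
FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}

def rule_based_flag(recipient: str) -> bool:
    """Rule-based DLP: flag any email addressed to a freemail domain.
    Simple to write, but it also blocks the contractor who legitimately
    uses a gmail.com address."""
    return recipient.split("@")[-1].lower() in FREEMAIL_DOMAINS

def relationship_flag(sender: str, recipient: str, history: dict) -> bool:
    """Relationship-based check: flag recipients the sender has never
    emailed before, whatever their domain. `history` maps a sender to a
    dict of {recipient: number of past emails}."""
    return history.get(sender, {}).get(recipient, 0) == 0

if __name__ == "__main__":
    # Toy email history built from past traffic (hypothetical).
    history = {"alice@example-bank.com": {"bob.contractor@gmail.com": 42}}

    # A long-standing gmail.com contact: the static rule blocks it,
    # the relationship check does not.
    print(rule_based_flag("bob.contractor@gmail.com"))                                    # True
    print(relationship_flag("alice@example-bank.com", "bob.contractor@gmail.com", history))  # False

    # A brand-new personal address: only the relationship check
    # treats this as unusual.
    print(rule_based_flag("alice.home@example-isp.com"))                                   # False
    print(relationship_flag("alice@example-bank.com", "alice.home@example-isp.com", history))  # True
```

Real machine learning DLP looks at far more than a single counter—content, timing, attachments, and historical relationships—but even this toy contrast shows why static rules produce both false positives and blind spots.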
💡 Learn more about Tessian’s email DLP solutions
Tessian uses contextual machine learning to address the problem of accidental or deliberate data loss by applying human understanding to email behavior. Our contextual machine learning models have been trained on more than two billion emails – rich in information on the kind of data people send and receive every day. And they continue to adapt and learn as human relationships evolve over time. This enables Tessian Guardian to look at email communications and determine in real time if particular emails look like they’re about to be sent to the wrong person or if an employee has attached the wrong file. Tessian Enforcer, meanwhile, can identify when sensitive data is about to be sent to an unsafe place outside an organization’s email network. And, finally, Tessian Defender prevents inbound threats, like spear phishing, business email compromise, and CEO fraud. To learn more about data exfiltration and how Tessian uses machine learning to keep data safe, check out our customer stories or talk to one of our experts today. You can also subscribe to our monthly newsletter below to get more updates about DLP, compliance, spear phishing, industry trends, and more.
Human Layer Security, Podcast
Five Things I Learned From Launching A Podcast
By Tim Sadler
14 April 2021
At the start of this year, Tessian started a podcast. Why? Because since we launched the Human Layer Security category in 2013, the human factor has become one of the biggest considerations in cybersecurity today. Every day, we are speaking to CISOs, CIOs, business leaders and security professionals about how to secure the human layer. And I’m not just talking about conversations related to how to stop the ever-rising number of phishing attacks. We’re talking about insider threats and security incidents caused by simple human error, too. We’re discussing ways in which CISOs can better understand their employees’ behaviors and ways of working, in order to build security strategies that protect them and empower them to do great work. And we’re talking about how to get buy-in from boards. Rather than keeping the conversations to ourselves, we wanted the podcast to provide a platform for inspiring IT leaders, thought-provoking academics, and ethical hackers to discuss why it’s so important for businesses to protect their people – not just machines and data – and share their learnings so that other security teams can do it too.
It’s been a lot of fun and I’ve spoken to some incredible people. So here are my highlights and my top learnings as we close out Season 1 of the RE:Human Layer Security podcast: 1. CISOs are doing an amazing job in their relentless roles. As Simon Hodgkinson, former CISO at bp, said, the job of the CISO is truly 24/7. And it’s becoming “more and more challenging as the threats become more advanced and regulatory landscapes become even more complicated”. Hearing about the work that CISOs like Jerry Perullo at ICE, Ray Espinoza at Cobalt, Tim Fitzgerald at ARM and Anne Benigsen at Bankers’ Bank of the West are doing to not only navigate these landscapes and keep their companies safe, but also to help make their people into security champions and make security as seamless as possible, is really inspiring. 2. … and they want to do more. It was clear from the leaders I spoke to that they have a “duty of care to continue raising awareness” and “invest in making sure people are able to do the right thing.” Some believe, however, there are more engaging ways to do it, while others think there is more work to be done to get employees to buy in to the security culture. It was great to understand how they plan to do this.
3. Security can learn so much from psychology. In one of my favourite episodes, academics Dr Karen Renaud and Dr Marc Dupuis question why businesses continually use fear – a short-term emotion – to try and engender long-term behavioral change in cybersecurity. They also explain why the role of employee self-efficacy is so important in encouraging safer security practices. Their insight into what factors make people more or less likely to adopt safe cybersecurity behaviors makes me question whether FUD in security has had its day. 4. If you don’t get to know your people well, the bad guys certainly will. Ethical hackers and social engineering experts like Craig Hays and Jenny Radcliffe explained how cybercriminals select their targets and methods of attack, emphasizing the need for companies – at manager level – to know their people really well. As Jenny said, “the answer to becoming a more secure organization […] is to know your humans better than the bad guys.”
5. Employees aren’t the weakest link. The age-old saying that people are the weakest link in security is something our guests don’t believe in. To Dan Raywood, people are neither the strongest nor the weakest link, but rather “an essential part of your business”. Tim Fitzgerald agreed, stating that, as security leaders, “we try to take a look in the mirror and say, are we providing these people with the tools they need to help them avoid these types of threats or scenarios?” It’s been a privilege to speak with all of our guests on the RE:Human Layer Security podcast and, if you haven’t already, I encourage you to listen to their interviews and subscribe to the show. We’re now planning Season 2 so stay tuned for that – and if you’d like to get involved or hear more about what we’re doing, please contact me on LinkedIn or Twitter.
Human Layer Security
Machine vs. Machine: Setting the Record Straight on Offensive AI
By Trevor Luker
08 April 2021
In recent years, we’ve heard the term “Offensive AI” being used more frequently to describe a new type of cyber threat – one that sees cybercriminals using artificial intelligence (AI) to supercharge their cyber attacks, advance impersonation scams, and avoid detection. In response, organizations are being advised to “fight fire with fire” and invest in defensive AI solutions in order to stay ahead of the bad guys, a sort of modern day “spy on spy” warfare tactic. Sure, cybercriminals are using more sophisticated technologies to advance their attack campaigns, but let’s start by getting one thing straight: where we are at the moment is not “AI”. For a system to be considered intelligent, it needs to exhibit autonomous behavior and goal seeking. What we are seeing, though, is an emerging use of Machine Learning (ML) and adaptive algorithms, combined with large datasets, that are proving effective for cybercriminals in mounting attacks against their targets.  Semantics, I know. But it’s important that we manage the hype. Even the washing machine I just purchased says it includes “AI” functionality. It doesn’t.  Organizations do, though, need to be aware of attackers’ use of offensive ML, and every company needs to understand how to defend itself against it. I can help. 
So, what is offensive ML?
At this stage, offensive ML is often the use of ML and large data lakes to automate the first stages of cyber attacks. In particular, the reconnaissance, weaponization, and delivery stages of the Cyber Kill Chain lend themselves to automation. It allows attacks to be carried out on a much larger scale and faster than ever previously seen. It also helps attackers overcome their human-resource problem—yes, even cybercriminals have this problem; skilled cyber staff are hard to find. Automation frees up the human’s time, keeping them involved for the later stages of an attack once a weakness that can be exploited has been found. To a large degree, many cyber attacks have become a data science issue, as opposed to requiring stereotypical ‘elite hackers’. A good offensive ML will also have a feedback mechanism to tune the underlying models of an attack, for example, based on the success of a lure in front of a potential victim in a phishing attack. The models will start to favor successful approaches and, over time, increase in efficiency and effectiveness.
How is offensive ML being used today?
One example of offensive ML I’ve observed is large-scale scanning of perimeter systems for fingerprinting purposes. Fingerprinting the perimeter of organizations – associating IP addresses with organizations, public data (DNS, MX lookup) and industry sectors – is a simple data-management issue. However, if this is combined with Common Vulnerabilities and Exposures (CVE) updates, and possibly dark web zero-day exploits, it provides attackers with a constantly updated list of vulnerable systems. You can learn more about zero-day vulnerabilities here: What is a Zero-Day Vulnerability? 3 Real-World Examples. Organizations defending themselves against cybercrime frequently have to go through a time-consuming testing process before deploying a patch and, in some cases, the systems are just not patched at all. This gives an attacker a window of opportunity to deploy automated scripts against any targets that have been selected by the ML as meeting the attack criteria. No humans need be involved except to set the parameters of the attack campaign: it’s fully automated. An attacker could, for example, have the ML algorithms send emails to known invalid email addresses at the target organization to see what information or responses they get—do the email headers give clues about internal systems and defenses? Do any of the systems indicate unpatched vulnerabilities? They can use ML to understand more about the employees they will target too, crawling through social media platforms like LinkedIn and Twitter to identify employees who recently joined an organization, any workers that have moved roles, or people that are dissatisfied with their company. Why? Because these people are prime targets to attempt to phish. Combining this information is step one. Attackers then just need to understand how to get past defenses so that the phishing emails land in a target employee’s inbox. MX records – mail exchanger records that specify the mail server responsible for accepting email messages on behalf of a domain name – are public information and would tell the ML what Secure Email Gateway (SEG) a company is using, so that an attacker could tailor the lure and have the best chance of getting through an organization’s defenses.
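As an illustration of how little effort this kind of reconnaissance takes, here is a small sketch of the MX-record lookup described above. It assumes the third-party dnspython package, and the gateway fingerprints are a few well-known, illustrative examples rather than a complete mapping; the point is simply that this information is public.

```python
import dns.resolver  # third-party package: pip install dnspython

# A few illustrative MX-host patterns and the email gateway they suggest.
KNOWN_GATEWAYS = {
    "pphosted.com": "Proofpoint",
    "mimecast.com": "Mimecast",
    "protection.outlook.com": "Microsoft 365 / Exchange Online Protection",
    "google.com": "Google Workspace",
}

def fingerprint_mail_gateway(domain: str) -> list[str]:
    """Return the MX hosts for a domain and any gateway they appear to match."""
    findings = []
    for record in dns.resolver.resolve(domain, "MX"):
        mx_host = str(record.exchange).rstrip(".").lower()
        vendor = next(
            (name for pattern, name in KNOWN_GATEWAYS.items() if pattern in mx_host),
            "unknown",
        )
        findings.append(f"{mx_host} -> {vendor}")
    return findings

if __name__ == "__main__":
    # Use a domain you administer (or are authorized to assess).
    for line in fingerprint_mail_gateway("example.com"):
        print(line)
```

Defenders can run the same lookup against their own domains to see exactly what an attacker’s automation sees.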
Another area in which offensive ML proves problematic for organizations is facial recognition. Attackers can deploy ML technology for facial recognition to match company photos with photos from across the Internet, and then build up a graph of relationships between people and their target. An exercise in understanding “who knows who?”. With this information, bad actors could deploy social media bots under ML control to build trust with the target and their associates. From public sources, an attacker knows their target’s interests, who they work with, who they live with; all this is gold dust when it comes to the “phishing stage”, as an attacker can make the scam more believable by referring to associates, shared interests, hobbies, etc.
Using offensive ML in ransomware attacks
There are other reasons to be concerned about the impact offensive ML can have on your organization’s security. Attackers can use it to advance their ransomware attacks. Ransomware attacks – and any exploits used to deliver the ransomware – have a short shelf-life because defenses are constantly evolving too. Therefore, a successful ROI for the attacker depends on whether they choose their targets carefully. Good reconnaissance will ensure resources are used more efficiently and effectively than a simpler scatter-gun approach. For any cybercriminal involved in “ransomware for hire”, offensive ML proves invaluable to earning a higher salary. They can use the data gathered above to set their pricing model for their customers. The better defended – or more valuable – the target, the higher the price. All this could be, and likely is, automated.
So, how can organizations protect themselves from an offensive AI/ML attack?
It’s the classic “spy vs spy” scenario; attacks evolve and so do defenses. With traditional, rule-based defensive systems, though, the defender is always at a disadvantage. Until an attack is observed, a rule can’t be written to counteract it. However, if an organization uses ML, the defensive systems don’t need to wait for new rules; they can react to anomalous changes in behavior autonomously and adjust defensive thresholds accordingly. In addition, defensive ML systems can more accurately adjust thresholds based on the observed riskiness of behavior within a defender’s organization; there is no longer a need to have a one-size-fits-all defense. A good ML-based system will adapt to each company, even each employee or department, and set corresponding defense levels. Traditional, rule-based systems can’t do this. In my opinion, the future of defensive security is a data issue; the days of the traditional human-heavy Security Operations Center are numbered.
What questions should organizations ask to ensure they have the right defenses in place?
First and foremost, ask your IT service provider why they think their system is actually AI. Because it almost certainly isn’t. If the vendor maintains that they have a real AI solution, be very skeptical about them as a reliable vendor. Ask vendors how their system would react to a zero-day exploit: How long would their system need to deal with a novel attack? Would the user need to wait for a vendor update? Ask vendors about data and threat sharing. All companies are under reconnaissance and attack, and the more data that is shared about this, the better the defenses. So ask: does the vendor share attack data, even with their competitors?
Human Layer Security
Risk Management Made Easy: Introducing Tessian Human Layer Risk Hub
By Ed Bishop
06 April 2021
Today, comprehensive visibility into employee risk is one of the biggest challenges security and risk management leaders face. Why? Because most security solutions offer a limited view of risk and don’t offer any insights into the likelihood of an employee falling for a phishing attack or exfiltrating data. Worse still, when it is available, risk information is siloed and hard to interpret. Insights around security awareness training exist in separate systems from insights related to threats that have been detected and prevented. There’s no integration, which means security leaders can’t get a full view of their risk profile. Without integration and visibility, it’s impossible to take a tailored, proactive approach to preventing threats. It’s an uphill battle. You may not even know where to start… But, we have a solution. With Tessian Human Layer Risk Hub, our customers can now deeply understand their organization’s security posture with granular visibility into employee risk and insights into individual user risk levels and drivers.
This is the only solution that offers protection, training, and risk analytics all in one platform, giving you a clear picture of your organization’s risk and the tools needed to reduce that risk.  How does Tessian Human Layer Risk Hub work? With Tessian Human Layer Risk Hub, security leaders can quantify risk, take targeted actions, and offer the right training to continuously lower the risks posed by employees’ poor security decisions.  Let’s look at an example.  1. An employee in the Finance department is flagged as a high-risk user based on their access to sensitive information, their low level of security awareness training, and how frequently they’re targeted by spear phishing attacks.  Tessian looks at five risk drivers – accidental data loss, data exfiltration, social engineering, sensitive data handling, and security awareness – to generate individual risk scores. Each employee’s risk score is dynamically updated, decreasing when an employee makes the correct security decision, and increasing when they do something risky, such as clicking on a phishing email or sending company data to personal email accounts. 
2. Based on these insights, Tessian intelligently and automatically identifies actions teams can take within the platform (for example, custom protections for certain user groups) to reinforce policies, improve security awareness, and change behavior to help drive down risk.  Security teams can also implement additional processes and controls outside of Tessian to exercise better control over specific risks. 
3. With custom protections enabled, Tessian's in-the-moment warnings help nudge employees towards safer behavior. For example, you could quickly and easily configure a trigger that always warns and educates users when they receive an email from a new domain that mentions a wire transfer. But even without custom protections, Tessian Defender can detect spear phishing attacks with incredible accuracy. And because the warnings are written in clear, easy-to-understand language, employees are continuously learning and leveling up their security awareness. If targeted by a spear phishing attack, employees would receive a warning that looks something like this.
4. With continuous protection and in-the-moment training, security leaders will see employees move from high-risk users to low-risk users over time. Risk scores and drivers are aggregated at employee, department, and company-level and are benchmarked against peers. This makes tracking and reporting on progress simple and effective. 
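To make the scoring mechanics described in step 1 concrete, here is a deliberately simplified, hypothetical sketch of per-employee scores across the five drivers rising and falling with behavior. The driver names mirror the ones above, but the 0–100 scale, starting score, and update rule are invented for illustration only; this is not Tessian's actual model.

```python
# A hypothetical, heavily simplified sketch of dynamic risk scoring across the
# five drivers named above. The scale, starting score, and update rule are
# invented for illustration; this is not Tessian's actual model.

DRIVERS = ["accidental_data_loss", "data_exfiltration", "social_engineering",
           "sensitive_data_handling", "security_awareness"]

class EmployeeRisk:
    def __init__(self, email):
        self.email = email
        self.scores = {d: 50.0 for d in DRIVERS}            # start at a neutral 50/100

    def record_event(self, driver, risky, magnitude=5.0):
        """Nudge a driver score up after a risky action, down after a safe one."""
        delta = magnitude if risky else -magnitude
        self.scores[driver] = max(0.0, min(100.0, self.scores[driver] + delta))

    @property
    def overall(self):
        return sum(self.scores.values()) / len(self.scores)

user = EmployeeRisk("finance.analyst@example.com")
user.record_event("social_engineering", risky=True)         # clicked a phishing link
user.record_event("accidental_data_loss", risky=False)      # heeded an in-the-moment warning
print(round(user.overall, 1))                               # aggregate score, ready for benchmarking
```

The point of the sketch is the shape of the feedback loop: every observed decision nudges a per-person score, and those scores can then be aggregated and benchmarked at the employee, department, and company level.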
Benefits of Tessian Human Layer Risk Hub

Tessian Human Layer Risk Hub enables security leaders to reduce risk and improve their organization's security posture with unique insights you can't get anywhere else.

Targeted remediation at scale. With a bird's eye view of your most risky and at-risk user groups, security leaders can make better decisions about how to distribute budget and resources, what mitigation measures to prioritize, and when to intervene. This goes beyond email. If you can see who has access to sensitive information – and how they're handling that sensitive information – you'll be able to create and update policies that really work.

More effective training. Every year, businesses spend nearly $300,000 and 276 hours on security awareness training. But training is only effective when the messages are tailored and the employee is engaged. Tessian Human Layer Risk Hub gives security, risk management, and compliance leaders the insights they need to create tailored training programs that cut through. And Tessian's in-the-moment warnings help nudge employees towards safer behavior in real time.

Clear ROI. Many solutions simply report risk; they don't actually reduce it. Tessian is different. Security leaders can easily measure and demonstrate how risk has changed over time and how the platform has proactively helped improve the organization's security posture, and they can even apply learnings from the platform to inform future decisions. The benefit? You'll become a trusted partner across your organization.

Defensible audit. Tessian's detailed reports and audit logs give you a defensible record in the event of a data breach. If a risk is identified, you'll be able to formally document all associated events and track exposure, owner, mitigation decisions, and actions.

The bottom line: Tessian Human Layer Risk Hub gives security teams a unified view and a shared language to communicate risk to the business, demonstrate progress towards lowering risk, and effectively secure their human layer.

Learn more about Tessian

Interested in learning more about Tessian Human Layer Risk Hub? Current Tessian customers can get in touch with their Customer Success Manager. Not yet a Tessian customer? Learn more about the new Human Layer Risk Hub, explore our customer stories, or book a demo now. And, to be the first to hear about new product updates, sign up for our newsletter below.
Human Layer Security Spear Phishing
Types of Email Attacks Every Business Should Prepare For
01 April 2021
Email remains the number one tool of business communication. The email network is open to practically anyone—and its flexibility, reliability, and convenience mean it's not going away any time soon. But for all its benefits, email can also be a vector for serious cyberattacks. Social engineering attacks like phishing can lead to data breaches, malware attacks, and billions of dollars in losses for businesses worldwide. This article will explain the major types of email attacks, provide some data on how common they are, and consider the devastating impact that email attacks can have on your business. Types of email attacks First, we'll walk you through some of the most common types of email attacks. Phishing Phishing can mean one of two things: (1) an umbrella term for any social engineering attack that takes place via email, or (2) a specific type of email attack where the attacker sends a lot of malicious emails in an untargeted way. When we use "phishing" as an umbrella term, it refers to the most common type of email attack. Any malicious email that tries to trick you into clicking a link, opening a file, or taking any other action that causes harm, can be part of a phishing attack. All of the other types of email attacks we'll look at below are forms of phishing, if we use the term in this broad way. When we use "phishing" as a specific term, it means a "bulk" or "spray and pray" email attack, where the malicious email is sent to many unnamed recipients. Here's an example:
What makes this a phishing email? There's no addressee: it says "Hello," not "Hello Rob." The "update account now" button leads to a credential phishing page. Most importantly — Netflix didn't send it! Further reading: ⚡  What is Phishing? ⚡ Spam vs. Phishing: The Difference Between Spam and Phishing ⚡ How Easy is it to Phish? ⚡ How to Avoid Falling For a Phishing Attack | 6 Useful Tips Spear phishing Spear phishing is an email attack targeting a specific individual. So, whereas bulk phishing uses a net — sending emails to as many potential victims as possible — spear phishing uses a spear to target one specific victim. Again, spear phishing can also be an umbrella term, in that there are lots of different types of spear phishing attacks. Some of the examples below, including Business Email Compromise (BEC) and CEO fraud, are almost always spear phishing attacks. Why? Because whenever a phishing attack targets a specific individual, it's a spear phishing attack. Here's an example:
What makes this a spear phishing email? It targets a specific person. The “click here” link leads to a credential phishing website. Most importantly — you guessed it — DHL didn’t send it! Further reading: ⚡  What is Spear Phishing? ⚡ What’s the Difference Between Phishing and Spear Phishing? ⚡ Spear Phishing: Screenshots of Real Email Attacks Business Email Compromise (BEC) Business Email Compromise (BEC) is any phishing attack where the attacker uses a hacked, spoofed, or impersonated corporate email address. In the sense that the attacker is impersonating a business, the Netflix and DHL examples above are both BEC attacks. But we normally use “BEC” to refer to a more sophisticated form of email attack. For example, one of the biggest cyberattacks of all time is an example of BEC. Between 2013 and 2015, a Latvian cybercrime gang headed by Evaldas Rimasauskas scammed Facebook and Google out of around $121 million by impersonating their suppliers and sending fake invoices via email. Further reading: ⚡ What is Business Email Compromise (BEC)? ⚡  5 Real Examples of Business Email Compromise
CEO fraud In a CEO fraud attack, the attacker impersonates a company executive and targets a less senior employee. Here’s an example:
What makes this a CEO fraud attack? The sender's email address impersonates a real company executive (note the method here is email impersonation — "microsott.com" — but other methods such as email spoofing are also common). The sender ("Leon") puts a lot of pressure on the recipient (Tess). Stressed people make poor decisions. The attack involves wire transfer fraud. While not all CEO fraud attacks involve wire transfer fraud, this is a very common tactic. Further reading: ⚡  What is CEO Fraud? ⚡ CEO Fraud Prevention: 3 Effective Solutions How common are email attacks? Email attacks are on the rise, and are now extremely common. According to the FBI's Internet Crime Complaint Center (IC3), phishing incidents more than doubled from 2019 to 2020, costing victims over $54 million in direct losses. Verizon says 22% of breaches in 2019 involved phishing. And around 75% of organizations worldwide experienced some kind of phishing attack in 2020. Want more data on phishing and other email attacks? See our article Phishing Statistics (Updated 2021). Consequences of email attacks What are the main consequences of email attacks on businesses and their customers? Data breaches: Attackers use techniques such as credential phishing to exfiltrate your customers' personal information. Data breaches can attract investigations, regulatory fines, and class-action lawsuits. IBM estimates that the average data breach costs a business $3.86 million. Malware: Some email attacks aim to deposit a malicious payload on the recipient's device. This payload is normally some form of malware, for example: a virus, which can infect other devices on your network; spyware, which can log your keystrokes and online activity; or ransomware, which encrypts your valuable data and demands you pay a ransom to get it back. Wire transfer fraud: Spear phishing attacks—particularly if they involve BEC or CEO fraud—often attempt to persuade the target to transfer funds into a bank account controlled by the attacker. And it really works—that's why the FBI calls BEC "the $26 billion scam."
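The "microsott.com" address above illustrates one simple signal defenders can compute automatically: a sender domain that is only a character or two away from a domain the recipient trusts. Below is a minimal, hypothetical sketch of that check using plain edit distance. The trusted-domain list and threshold are invented for illustration, and real detection systems combine many more signals than this.

```python
# Illustrative sketch only (not any vendor's detection logic): flag sender
# domains that are suspiciously close to, but not the same as, domains the
# recipient trusts. The trusted list and threshold below are invented.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                     # deletion
                            curr[j - 1] + 1,                 # insertion
                            prev[j - 1] + (ca != cb)))       # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"microsoft.com", "dhl.com", "netflix.com"}

def looks_like_impersonation(sender_domain: str, max_distance: int = 2) -> bool:
    if sender_domain in TRUSTED_DOMAINS:
        return False                                         # exact match: genuinely trusted
    return any(edit_distance(sender_domain, d) <= max_distance
               for d in TRUSTED_DOMAINS)

print(looks_like_impersonation("microsott.com"))             # True: one character from microsoft.com
print(looks_like_impersonation("example.org"))               # False
```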
Human Layer Security DLP Data Exfiltration
11 Examples of Data Breaches Caused By Misdirected Emails
17 March 2021
While phishing, ransomware, and brute force attacks tend to make headlines, misdirected emails (emails sent to the wrong person) are actually a much bigger problem. In fact, in organizations with 1,000 employees, at least 800 emails are sent to the wrong person every year. That's two a day. You can find more insights in The Psychology of Human Error and The State of Data Loss Prevention 2020. Are you surprised? Most people are. That's why we've rounded up this list of 11 real-world (recent) examples of data breaches caused by misdirected emails. And, if you skip down to the bottom, you'll see how you can prevent misdirected emails (and breaches!) in your organization. If you're looking for a bit more background, check out these two articles: What is a Misdirected Email? Consequences of Sending an Email to the Wrong Person 11 examples of data breaches caused by misdirected emails 1. University support service mass emails sensitive student information University and college wellbeing services deal with sensitive personal information, including details of the health, beliefs, and disabilities of students and their families. Most privacy laws impose stricter obligations on organizations handling such sensitive personal information—and there are harsher penalties for losing control of such data. So imagine how awful the Wellbeing Adviser at the University of Liverpool must have felt when they emailed an entire school's worth of undergraduates with details about a student's recent wellbeing appointment. The email revealed that the student had visited the Adviser earlier that day, that he had been experiencing ongoing personal difficulties, and that the Adviser had advised the student to attend therapy. A follow-up email urged all the recipients to delete the message "immediately" and appeared to blame the student for providing the wrong email address. One recipient of the email reportedly said: "How much harder are people going to find it actually going to get help when something so personal could wind up in the inbox of a few hundred people?" 2. Trump White House emails Ukraine 'talking points' to Democrats Remember in 2019, when then-President Donald Trump faced accusations of pressuring Ukraine into investigating corruption allegations against now-President Joe Biden? Once this story hit the press, the White House wrote an email—intended for Trump's political allies—setting out some "talking points" to be used when answering questions about the incident (including blaming the "Deep State media"). Unfortunately for the White House, they sent the email directly to political opponents in the Democratic Party. White House staff then attempted to "recall" the email. If you've ever tried recalling an email, you'll know that it doesn't normally work. Recalling an email only works if the recipient is on the same Exchange server as you—and only if they haven't read the email. Looking for information on this? Check out this article: You Sent an Email to the Wrong Person. Now What? Unsurprisingly, this was not the case for the Democrats who received the White House email, who subsequently leaked it on Twitter. I would like to thank @WhiteHouse for sending me their talking points on how best to spin the disastrous Trump/Zelensky call in Trump's favor. However, I will not be using their spin and will instead stick with the truth. But thanks though. — US Rep Brendan Boyle (@RepBrendanBoyle) September 25, 2019
3. Australia's Department of Foreign Affairs and Trade leaked 1,000 citizens' email addresses On September 30, 2020, Australia's Department of Foreign Affairs and Trade (DFAT) announced that the personal details of over 1,000 citizens were exposed after an employee failed to use BCC. So, who were the citizens? Australians who had been stuck in other countries after inbound flights were limited (even rationed) following the outbreak of COVID-19. The plan was to increase entry quotas and start an emergency loans scheme for those in dire need. Those who had their email addresses exposed were among the potential recipients of the loan. Immediately after the email was sent, employees at DFAT tried to recall it, and even requested that recipients delete the email from their IT systems and "refrain from any further forwarding of the email to protect the privacy of the individuals concerned." 4. Serco exposes contact tracers' data in email error In May 2020, an employee at Serco, a business services and outsourcing company, accidentally cc'd instead of bcc'ing almost 300 email addresses. Harmless, right? Unfortunately not. The email addresses – which are considered personal data – belonged to newly recruited COVID-19 contact tracers. While a Serco spokesperson apologized and announced that they would review and update their processes, the incident nonetheless put confidentiality at risk and could leave the firm under investigation by the ICO. 5. Sonos accidentally exposes the email addresses of hundreds of customers in email blunder In January 2020, 450+ email addresses were exposed after they were (similar to the example above) cc'd rather than bcc'd. Here's what happened: a Sonos employee was replying to customers' complaints. Instead of putting all the email addresses in BCC, they were CC'd, meaning that every customer who received the email could see the personal email addresses of everyone else on the list. The incident was reported to the ICO and is subject to potential fines.
6. Gender identity clinic leaks patient email addresses In September 2019, a gender identity clinic in London exposed the details of close to 2,000 people on its email list after an employee cc'd recipients instead of bcc'ing them. Two separate emails were sent, with about 900 people cc'd on each. While email addresses on their own are considered personal information, it's important to bear in mind the nature of the clinic. As one patient pointed out, "It could out someone, especially as this place treats people who are transgender." The incident was reported to the ICO, which is currently assessing the information provided. But a similar incident may offer a glimpse of what's to come. In 2016, the email addresses of 800 patients who attended HIV clinics were leaked because they were – again – cc'd instead of bcc'd. An NHS Trust was fined £180,000. Bear in mind, this fine was issued before the introduction of the GDPR. 7. University mistakenly emails 430 acceptance letters, blames "human error" In January 2019, the University of South Florida St. Petersburg sent nearly 700 acceptance emails to applicants. The problem? Only 250 of those students had actually been accepted. The other 400+ hadn't. While this isn't considered a breach (because no personal data was exposed), it does go to show that fat-fingering an email can have a number of consequences. In this case, the university's reputation was damaged, hundreds of students were left confused and disappointed, and the employees responsible for the mistake likely suffered red-faced embarrassment on top of other, more formal ramifications. The investigation and remediation of the incident will also have taken up plenty of time and resources. 8. Union watchdog accidentally leaked secret emails from confidential whistleblower In January 2019, an official at Australia's Registered Organisations Commission (ROC) accidentally leaked confidential information, including the identity of a whistleblower. How? The employee entered an incorrect character when sending an email, and it was delivered to someone with the same last name – but a different first initial – as the intended recipient. The next day, the ROC notified the whistleblower whose identity was compromised and disclosed the mistake to the Office of the Australian Information Commissioner as a potential privacy breach. 9. Major Health System Accidentally Shares Patient Information Due to Third-Party Software for the Second Time This Year In May 2018, Dignity Health – a major health system headquartered in San Francisco that operates 39 hospitals and 400 care centers across the West Coast – reported a breach that affected 55,947 patients to the U.S. Department of Health and Human Services. So, how did it happen? Dignity says the problem originated from a sorting error in an email list that had been formatted by one of its vendors. The error resulted in Dignity sending emails to the wrong patients, with the wrong names. Because Dignity is a health system, these emails also often contained the patient's doctor's name. That means PII and protected health information (PHI) were exposed. 10. Inquiry reveals the identity of child sexual abuse victims This 2017 email blunder earned an organization a £200,000 ($278,552) fine from the ICO. The penalty would have been even higher if the GDPR had been in force at the time. When you look at the detail of this incident, it's easy to see why the ICO wanted to impose a more severe fine.
The Independent Inquiry into Child Sexual Abuse (IICSA) sent a Bcc email to 90 recipients, all of whom were involved in a public hearing about child abuse. Sending an email via Bcc means none of the recipients can see each other's details. But the sender then sent a follow-up email to correct an error—using the "To" field by mistake. The organization made things even worse by sending three follow-up emails asking recipients to delete the original message—one of which generated 39 subsequent "Reply all" emails in response. The error revealed the email addresses of all 90 recipients and 54 people's full names. But is simply revealing someone's name that big of a deal? Actually, a person's name can be very sensitive data—depending on the context. In this case, IICSA's error revealed that each of these 54 people might have been victims of child sexual abuse. 11. Boris Johnson's dad's email blunder nearly causes diplomatic incident Many of us know what it's like to be embarrassed by our dad. Remember when he interrogated your first love interest? Or that moment your friends overheard him singing in the shower? Or when he accidentally emailed confidential information about the Chinese ambassador to the BBC? OK, maybe not that last one. That happened to the father of U.K. Prime Minister Boris Johnson in February 2020. Johnson's dad, Stanley Johnson, was emailing British officials following a meeting with Chinese ambassador Liu Xiaoming. He wrote that Liu was "concerned" about a lack of contact from the Prime Minister to the Chinese state regarding the coronavirus outbreak. The Prime Minister's dad inexplicably copied the BBC into his email, providing some lucky journalists with a free scoop about the state of U.K.-China relations. It appears the incident didn't cause any big diplomatic issues—but we can imagine how much worse it could have been if Johnson had revealed more sensitive details of the meeting.
Prevent misdirected emails (and breaches) with Tessian Guardian Regardless of your region or industry, protecting customer, client, and company information is essential. But, to err is human. So how do you prevent misdirected emails? With machine learning.  Tessian turns an organization’s email data into its best defense against human error on email. Our Human Layer Security technology understands human behavior and relationships and automatically detects and prevents emails from being sent to the wrong person. Yep, this includes typos, accidental “reply alls” and cc’ing instead of bcc’ing. Tessian Guardian can also detect when you’ve attached the wrong file. Interested in learning more about how Tessian can help prevent accidental data loss and data exfiltration in your organization? You can read some of our customer stories here or book a demo.
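For intuition only, here is a toy sketch of one signal a relationship-aware system might use: a recipient whose domain the sender has never emailed before, sitting alongside a list of familiar colleagues, is worth a warning before the email leaves the outbox. The function, data shapes, and example addresses below are invented and far simpler than any production system, Tessian's included.

```python
# A toy illustration (not Tessian's algorithm): flag a recipient whose domain
# the sender has never emailed before when it sits alongside otherwise
# familiar recipients. All names, thresholds, and data shapes are invented.

from collections import Counter

def domain(address: str) -> str:
    return address.rsplit("@", 1)[-1].lower()

def unusual_recipients(recipients, sent_history):
    """Return recipients whose domain is unseen in the sender's history and
    doesn't match the majority domain on this particular email."""
    seen = Counter(domain(r) for msg in sent_history for r in msg)
    majority = Counter(domain(r) for r in recipients).most_common(1)[0][0]
    return [r for r in recipients
            if seen[domain(r)] == 0 and domain(r) != majority]

history = [["alice@acme.com", "bob@acme.com"], ["bob@acme.com"]]   # past sent emails
to = ["alice@acme.com", "bob@acme.com", "bob@acmme.co"]            # note the typo'd domain
print(unusual_recipients(to, history))                             # ['bob@acmme.co'] -> worth a warning
```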
Human Layer Security
Email is the #1 Threat Vector. Here’s Why.
11 March 2021
Billions of people use email every day — it's the backbone of online collaboration, administration, and customer service. But businesses lose billions to email-based cyberattacks every year. Workers use email to exfiltrate sensitive company data. And simple human errors, like sending an email to the wrong person, can be highly problematic. The bottom line: for all its benefits, email communication is risky and, according to research, it's the threat vector security leaders are most concerned about. This article will look at the main threats associated with using email — and consider what you can do to mitigate them. The scope of the problem Before we look at some of the risks of email communication, let's consider the scope of the problem. After all, around 4 billion people worldwide use email regularly. 2020 estimates showed that people send and receive around 306.4 billion emails per day — up 4% from 2019. The Digital Marketing Association suggests that 90% of people check their email at least once per day. Adobe data shows that email is the preferred contact method for marketing communications — by a long shot. So, with alternative platforms like Slack and Teams rising in popularity, why does email remain the world's main artery of communication? Email is platform-independent, simple, and accessible. No company would consider cutting email out of its communication channels. But for every "pro" involved in using email, there's a "con." If you're relying on email communication, you need to mitigate the risks. Security risks involved in using email The biggest risks of email communication relate to security. Because it's so flexible and easy to use, email carries a unique set of security risks. Phishing attacks Phishing is a type of online "social engineering" attack. The attacker impersonates somebody that their target is likely to trust and manipulates them into providing sensitive information, transferring money, or revealing login credentials. Around 90% of phishing occurs via email. Here are the main types: Spear phishing: The attacker targets a specific individual (instead of sending bulk phishing emails indiscriminately). Whaling: The attacker targets a CEO or other executive-level employee. Business Email Compromise (BEC): A phishing attack in which the attacker appears to be using a legitimate corporate email address. CEO fraud: The attacker impersonates a company's CEO and targets a junior employee. Wire transfer phishing: The attacker persuades a company employee to transfer money to a fraudulent bank account. Credential phishing: The attacker steals login details, such as usernames or passwords. While most people today are attuned to the problem of phishing, it's only getting worse. Don't believe us? Check out these 50+ must-know phishing statistics. That means phishing protection is an essential part of using email. Looking for more information on inbound email protection? Click here. Insider threats As well as inbound email threats, like phishing, you must also consider the threats that can arise from inside your business. Tessian survey data suggests that 45% of employees download, save, send, or otherwise exfiltrate work-related documents before leaving their job. The most competitive industries — like tech, management consultancy, and finance — see the highest rates of this phenomenon.
Email is a quick and convenient way to send large amounts of data to external contacts — and can be a pipeline for disgruntled or corrupt employees to siphon off company assets. If you want to learn more about insider threats, including real-world examples, check out these articles: What is an Insider Threat? Insider Threat Types and Real-World Examples Insider Threat Statistics You Should Know Insider Threat Indicators: 11 Ways to Recognize an Insider Threat Remote working Phishing is a booming criminal industry — and there's evidence that the new patterns of remote working are making phishing more common than ever. Tessian research shows that 65% of US and UK employees received a phishing email when working remotely in 2020 due to the COVID-19 pandemic, and 82% of IT leaders think their company is at greater risk of phishing attacks when employees are working from home. If your company operates a hybrid or remote working model, email security is even more crucial. Human error on email Innocent mistakes can be just as harmful as cyberattacks. In fact, 88% of data breaches are caused by human error. Misdirected emails Research shows that most people have sent at least one email to the wrong person, with nearly one-fifth admitting to sending an email to someone outside of their organization. Our platform data also shows that there are, on average, 800 misdirected emails per year in companies with more than 1,000 employees. That's two a day. Sending an email to the wrong recipient is so common, you might not think it's a big deal. But data from the UK's Information Commissioner's Office (ICO) consistently shows that misdirected emails are the number one cause of reportable data breaches. Misspelling, autocorrect, reply-all — these are all reasons you might send an email to the wrong recipient. It's a serious risk of email communication — but you can prevent it. Misattached files Along with misdirected emails, "misattached files" are a major cause of data loss. New data shows some very worrying trends related to people sending emails with incorrect attachments. First, here's what's inside the documents people are sending in error: 42% contained company research or data; 39% contained security information, such as login credentials; 38% contained financial information and client information; and 36% contained employee data. The survey also shows that – as a result of sending misattached files – one-third lost a customer or client, and 31% faced legal action. Email communication: how to mitigate the risks The risks we've described all depend on human vulnerabilities. Cyberattackers prey on people's trust and deference to authority — and anyone can make a mistake when sending an email. That's why email security is a must. Looking for help choosing a solution? We've put together this handy guide: 9 Questions That Will Help You Choose the Right Email Security Solution. If you want more tips, how-to guides, and checklists related to email security specifically and cybersecurity more broadly, sign up for our newsletter!
While you’re here… Tessian software mitigates all types of risks associated with email communication: Tessian Defender: Automatically prevents spear phishing, account takeover, business email compromise, and other targeted email attacks. Tessian Enforcer: Automatically prevents data exfiltration over email. Tessian Guardian: Automatically prevents accidental data loss caused by misdirected emails and misattached files.
Human Layer Security
5 Cybersecurity Stats You Didn’t Know (But Should)
By Maddie Rosenthal
08 March 2021
When it comes to cybersecurity – specifically careers in cybersecurity – there are a few things (most) people know. There's a skills gap, with 3.12 million unfilled positions. There's also a gender gap, with a workforce that's almost twice as likely to be male. But, we have good news. We surveyed 200 women working in cybersecurity and 1,000 recent grads (18-25 years old) for our latest research report, Opportunity in Cybersecurity Report 2021, and the findings suggest that both the skills gap and the gender gap are closing, and that women working in the field are happier than ever, despite a tumultuous year. Here are five cybersecurity stats you didn't know (but should). P.S. There are even more stats in the full report, and plenty of first-hand insights from women currently working in the field and recent grads considering a career in cybersecurity.
1. 94% of cybersecurity teams hired in 2020 As we all know, COVID-19 has had a profound impact on unemployment rates. But, as the global job market has contracted, cybersecurity appears to have expanded. According to our research, a whopping 94% of cybersecurity teams hired in 2020. Better still, this hiring trend isn’t isolated; it’s consistent across industries, from Healthcare to Finance. Want to know which industries were the most likely to hire in 2020? Download the full report. 2. Nearly half of women say COVID-19 POSITIVELY affected their career
This is one figure that we're especially proud to report: 49% of women say COVID-19 positively affected their career in cybersecurity. In the midst of a global recession, this is truly incredible. Is it increased investment in IT that's driving this contentment? The flexibility of working from home? An overwhelming sense of job security? We asked female cybersecurity professionals, and they answered. See what they had to say. 3. 76% of 18-25 year olds say cybersecurity is "interesting" Last year, we asked women working in cybersecurity why others might not consider a job in the field. 42% said it's because the industry isn't considered "cool" or "exciting". We went directly to the source and asked recent grads (18-25 years old), and our data tells a different story: 76% of them said that cybersecurity is interesting. This is encouraging, especially since… 4. ⅓ of recent grads would consider a job in cybersecurity While we don't have any data to compare and contrast this number to, we feel confident saying that interest in the field is growing. Perhaps fueled by the fact that it is – actually – interesting? 31% of recent grads say they would consider a job in cybersecurity. But men are almost twice as likely as women to float the idea. Want to know why? We pulled together dozens of open-ended responses from our survey respondents. Click here to see what they said. 5. There's $43.1 billion up for grabs…
Today, the total value of the cybersecurity industry in the US is $107.7 billion. But, if the gender gap were closed, and the number of women working in the field equaled the number of men, the total value would jump to $138.1 billion. And, if women and men earned equal salaries, it’d increase even more.  The total (potential) value of the industry? $150.8 billion.