
Why Confidence Matters: How We Improved Defender’s Confidence Scores to Fight Phishing Attacks

Tuesday, January 4th 2022

‘Why Confidence Matters’ is a weekly three-part series. In this first article, we’ll explore why a reliable confidence score is important for our users. In part two, we’ll explain more about how we measured improvements in our scores using responses from our users. And finally, in part three, we’ll go over the pipeline we used to test different approaches and the resulting impact in production.

 

Part One: Why Confidence Matters

 

Across many applications of machine learning (ML), being able to quantify the uncertainty associated with the prediction of a model is almost as important as the prediction itself. 

 

Take, for example, chatbots designed to resolve customer support queries. A bot that provides an answer when it is very uncertain about it will likely cause confusion and leave users dissatisfied.


In contrast, a bot that can quantify its own uncertainty, admit it doesn’t understand a question, and ask for clarification is much less likely to generate nonsense messages and cause frustration amongst its users.

The importance of quantifying uncertainty

 

Almost no ML model gets every prediction right every time – there’s always some uncertainty associated with a prediction. For many product features, the cost of errors can be quite high.



For example, mislabelling an important email as phishing and quarantining it could result in a customer missing a crucial invoice, or mislabelling a bank transaction as fraudulent could result in an abandoned purchase for an online merchant.

 

 

Hence, ML models that make critical decisions need to predict two key pieces of information:

1. the best answer to provide to a user, and
2. a confidence score to quantify the uncertainty about that answer.

Quantifying the uncertainty associated with a prediction can help us decide whether any action should be taken, and if so, which one.
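
As a minimal sketch of these two outputs, the snippet below uses a generic scikit-learn classifier on made-up features; it is purely illustrative and is not Tessian's actual model or feature set.

```python
# Minimal sketch (not Tessian's actual models or features): a classifier that
# returns both a label and a confidence score for each prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.random.rand(1000, 20)        # placeholder email features
y_train = np.random.randint(0, 2, 1000)   # placeholder labels: 1 = phishing, 0 = safe

model = LogisticRegression().fit(X_train, y_train)

def predict_with_confidence(features: np.ndarray) -> tuple[int, float]:
    """Return (label, confidence), where confidence is the predicted
    probability of the positive (phishing) class."""
    phishing_proba = model.predict_proba(features.reshape(1, -1))[0, 1]
    return int(phishing_proba >= 0.5), float(phishing_proba)

label, confidence = predict_with_confidence(np.random.rand(20))
```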

How does Tessian Defender work?


 

Every day, Tessian Defender checks millions of emails to prevent phishing and spear phishing attacks. In order to maximise coverage, Defender is made up of multiple machine learning models, each contributing to the detection of a particular type of email threat (see our other posts on phishing, spear phishing, and account takeover).

 

 

Each model identifies phishing emails based on signals relevant to the specific type of attack it targets. Then, beyond this primary binary classification task, Defender also generates two key outputs for any email that is identified as potentially malicious across any of the models:

 

  1. A confidence score, which is related to the probability that the email flagged is actually a phishing attack. This score is a value between 0 (most likely safe) and 1 (most certainly phishing), which is then broken down into 4 categories of Priority (from Low to Very High). This score is important for various reasons, which we further expand on in the next section.
  2. An explanation of why Defender flagged the email. This is an integral part of Tessian’s approach to Human Layer Security: we aim not only to detect phishy emails, but also to educate users in-the-moment so they can continually get better at spotting future phishing emails.

In the warning banner, we aim to concisely explain the type of email attack, as well as why Defender thinks it is suspicious. Users who see these warnings can then provide feedback about whether they think the email is indeed malicious or not. Developing explainable AI is a super interesting challenge which probably deserves its own content, so we won't focus on it in this particular series. Watch this space!
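
To make the priority bucketing from point 1 concrete, here's a small illustrative sketch; the threshold values are assumptions chosen for the example, not Tessian's actual cut-offs.

```python
# Illustrative only: the score between 0 and 1 is bucketed into four
# priority categories (Low to Very High). These threshold values are
# assumptions for the sketch, not Tessian's actual cut-offs.
def score_to_priority(score: float) -> str:
    if score >= 0.9:
        return "Very High"
    if score >= 0.7:
        return "High"
    if score >= 0.4:
        return "Medium"
    return "Low"

print(score_to_priority(0.83))  # -> High
```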

 

Why Confidence Scores Matter

 

Beyond Defender’s capability to warn on suspicious emails, there were several key product features we wanted to unlock for our customers that could only be done with a robust confidence score. These were:

Email quarantine

Based on the score, Defender first aims to quarantine the highest-priority emails so that malicious emails never reach employees' mailboxes. This not only reduces the company's risk exposure from an employee potentially interacting with a malicious email; it also removes the burden and responsibility of making a decision from the user, and reduces interruption to their work.

 

Therefore, for malicious emails that we’re most confident about, quarantining is extremely useful. In order for quarantine to work effectively, we must:

 

  1. Identify malicious emails with very high precision (i.e. very few false positives). We understand how much our customers rely on email to conduct their business, so we needed to make sure that important communications still reach their inboxes unimpeded. This was very important so that Tessian Defender can secure the human layer without security getting in our users' way.
  2. Identify a large enough subset of high-confidence emails to quarantine. It would be easy to achieve very high precision by quarantining only the very few emails with the highest scores (a low recall), but this would greatly limit how many threats quarantine can prevent. To be a useful tool, Defender needs to quarantine a sizable volume of malicious emails.

 

Both these objectives directly depend on the quality of the confidence score. A good score would allow for a large proportion of flags to be quarantined with high precision.
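
As a rough sketch of how such a threshold could be chosen to balance these two objectives, the snippet below uses scikit-learn's precision-recall curve on synthetic data; the data and the target precision are assumptions, not Defender's production values.

```python
# Rough sketch of picking a quarantine threshold (synthetic data and target
# precision are assumptions, not Defender's production values): choose the
# lowest threshold that still meets the precision target, so as many threats
# as possible are quarantined.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.random.randint(0, 2, 5000)                               # placeholder labels
scores = np.clip(0.55 * y_true + 0.6 * np.random.rand(5000), 0, 1)   # placeholder confidence scores

precision, recall, thresholds = precision_recall_curve(y_true, scores)
TARGET_PRECISION = 0.99  # assumed precision target for quarantine

# precision/recall have one more entry than thresholds, so align before masking.
meets_target = precision[:-1] >= TARGET_PRECISION
quarantine_threshold = thresholds[meets_target].min() if meets_target.any() else None
print(quarantine_threshold)
```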

Prioritizing phishy emails

In today’s threat landscape, suspicious emails arrive in inboxes in large volumes and with varying levels of importance. That means it’s critical to give the security admins who review these flagged emails a meaningful way to order and prioritize the ones they need to act on. A good score provides a useful ranking of these emails, from most to least likely to be malicious. This ensures that an admin’s limited time is focused on mitigating the most likely threats, with the assurance that Defender continues to warn and educate users on other emails that contain suspicious elements.
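
As a simple illustration, ordering a review queue by the confidence score might look like the sketch below; the email records and scores are made up.

```python
# Hypothetical admin review queue: flagged emails (made-up records) ordered
# from most to least likely malicious, so reviewers triage the riskiest first.
flagged_emails = [
    {"subject": "Urgent wire transfer", "confidence": 0.97},
    {"subject": "Updated invoice attached", "confidence": 0.55},
    {"subject": "Password reset required", "confidence": 0.81},
]

review_queue = sorted(flagged_emails, key=lambda email: email["confidence"], reverse=True)
for email in review_queue:
    print(f"{email['confidence']:.2f}  {email['subject']}")
```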

 

The bottom line: Being able to prioritize emails makes Defender a much more intelligent tool that is effective at improving workflows and saving our customers time, by drawing their attention to where it is most needed.

 

Removing false positives

We want to make sure that all warnings Tessian Defender shows employees are relevant and help prevent real attacks. 

 

False positives occur when Defender warns on a safe email. If this happens too often, warnings could become a distraction, which could have a big impact on productivity for both security admins and email users. Beyond a certain point, a high false positive rate could mean that warnings lose their effectiveness altogether, as users may start to ignore them completely. Being aware of these risks, we take extra care to minimize the number of false positives flagged by Defender.

 

Similarly to quarantine, a good confidence score can be used to filter out false positives without impacting the number of malicious emails detected. For example, emails with a confidence score below a given threshold could be removed to avoid showing employees unnecessary warnings.
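
A minimal sketch of that kind of filter is shown below; the cut-off value is an assumption for illustration only.

```python
# Sketch of the thresholding described above. The cut-off value is an
# assumption for illustration, not Tessian's actual setting.
WARNING_THRESHOLD = 0.3  # assumed minimum confidence required to show a warning

def should_warn(confidence: float) -> bool:
    """Suppress warnings for flags below the threshold to reduce false positives."""
    return confidence >= WARNING_THRESHOLD

flag_scores = [0.12, 0.45, 0.88, 0.05]
print([score for score in flag_scores if should_warn(score)])  # [0.45, 0.88]
```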

What’s next?

 

Overall, you can see there were plenty of important use cases for improving Tessian Defender’s confidence score. The next thing we had to do was look at how we could measure any improvements to the score. You can find a link to part two in the series below.

(Co-authored by Gabriel Goulet-Langlois and Cassie Quek)