Human Layer Security
8 Reasons To Register Now For Tessian Human Layer Security Summit
By Maddie Rosenthal
Monday, August 17th, 2020
If your calendar is filling up with virtual events, make sure you leave space for Tessian Human Layer Security Summit on September 9. What is it? A (virtual) event featuring industry leaders from the world’s top organizations that was designed to help business, security, compliance, and IT professionals prepare for what’s next…whatever that may be. Keep reading to find out what you’ll learn, who the speakers are, and why you have to register now.

1. You’ll get an FBI agent’s perspective on election hacking

With the US election coming up in November, people and media around the world are talking about election hacking. That’s why we’re bringing Elvis Chan from the FBI to the Human Layer Security Summit “stage”. Elvis will review key events from the 2016 elections, highlight the tactics nation-state hackers are most likely to use this year, and offer advice on how to protect yourself and your organization from being hacked.

2. You’ll hear from Howard Schultz and other industry leaders from AWS, Salesforce, and PwC about how they’re leading their organizations through change

If you’re struggling to keep up with the pace of new cyber threats while also supporting stressed employees as they continue working remotely, you’re not alone. So, why not lean on your peers and learn from their experiences? At this event, experts from AWS, Salesforce, PwC, TrustedSec, MSCI, Euromoney Institutional Investor, and more will be sharing their anecdotes and advice to help you create future-proof security strategies. You’ll also hear from business titan and the former CEO of Starbucks, Howard Schultz. But, adapting to the ‘new normal’ isn’t the only thing we’ll be talking about…
3. A Stanford psychology professor will explain why people make mistakes that lead to breaches (and what you can do about it)

Tessian’s latest research report, The Psychology of Human Error, shows that nearly half (43%) of people have made mistakes at work that compromised cybersecurity. But why do people make mistakes? Register now and you’ll find out on September 9. Jeff Hancock, Professor at Stanford University, will identify the factors that make people – just like you and me – more likely to fall for phishing scams and fire off emails to the wrong people. Spoiler alert: burnout and distraction are two of the top contributors.

4. You can be a part of the conversation

Just because the event is virtual doesn’t mean you can’t get involved… Throughout the three-hour event, you’ll be able to submit questions to be answered live. Whether you want to ask Rachel Beard, Principal Security Technical Architect at Salesforce, how she’s combatting hackers’ increasingly sophisticated phishing tactics, or want to probe David Kennedy about penetration testing post-pandemic, this is your opportunity. Don’t miss out!

5. You’ll walk away with truly actionable advice

As we’ve said, Tessian Human Layer Security Summit was designed to help business and security leaders prepare for what’s next. The key, then, is to make sure all attendees walk away (er, log off) with advice they can actually put into action. You should expect to learn how to stop your employees from falling for social engineering attacks, ways you can tailor training for better results, why people-centric security strategies are more essential now than ever, and more. Click here for a full agenda.

6. You’ll learn what the future holds, according to a Forrester security analyst

Forrester’s insights, reports, and analysis are trusted by business and security leaders around the world and across industries. We’re delighted, then, to be welcoming Joseph Blankenship, Forrester’s VP, Research Director serving Security & Risk Professionals. He’ll be offering his expert opinion on where the industry is heading next and best practices to help you implement strategies in emerging areas of security. Remember: you can ask questions! What do you want to ask Joseph?

7. It’s the last HLS Summit of the year

In March, Tessian hosted the world’s first Human Layer Security Summit. In June, we hosted the second. In September, we’re hosting the third, and it’s the last big HLS event of 2020. And, because we’ve taken feedback from the more than two thousand people who have attended previously, this will be the best one yet. Want to know what to expect? Check out these videos, featuring Stephane Kasriel, the former CEO of Upwork, Bobby Ford, Global CISO of Unilever, and more.

8. It’s free!

That’s right. The event is completely free. All you have to do is sign up. You’ll be in good company! Register now to save your spot and we’ll “see” you on September 9. Can’t make it on September 9? Don’t worry: by registering, you’ll have on-demand access to watch the full series of keynotes, panel discussions, and more after the live session. Do you know anyone else who should attend? Whether it’s your CEO or your sister, just send them this link.
Spear Phishing
What is Social Engineering? 4 Types of Attacks
Thursday, August 13th, 2020
You may have heard of social engineering, but do you know what it is?
Social engineering basics

The key difference between social engineering attacks and brute force attacks is the techniques that hackers employ. Instead of trying to exploit weaknesses in security software, a social engineer will use coercive language, a sense of urgency, and even details about the target’s personal or work life to influence them to hand over information or access to other accounts or systems.
How does social engineering work?

There is no set (or foolproof) ‘method’ that cybercriminals use to carry out social engineering attacks. But the goal is generally the same: they want to take advantage of people in order to obtain personal information or get access to other systems or accounts. Why? Personal data and intellectual property are incredibly valuable. While you can read more about the “types” of data that are compromised in this blog: Phishing Statistics 2020, you can learn more about the different types of social engineering attacks below.

Types of social engineering attacks

When we say “social engineering”, we’re talking about the exploitation of human psychology. But hackers can trick people in a few different ways and are always working hard to evade security solutions.

Phishing and spear phishing scams

Phishing is one of the most common types of social engineering attacks and is generally delivered via email. But, more and more often, we’re seeing attacks delivered via SMS, phone, and even social media. Here are three hallmarks of phishing attacks:

An attempt to obtain personal information such as names, dates of birth, addresses, passwords, etc.
Wording that evokes fear, conveys a sense of urgency, or makes threats in an attempt to persuade the recipient to respond quickly.
The use of shortened or misleading domains, links, buttons, or attachments.

Spear phishing attacks are similar, but much more targeted. Whereas phishing attacks are sent in bulk, spear phishing attacks are sent to a single person or small group of people and require a lot more forethought. For example, hackers will research targets on LinkedIn to find out who they work with and who they report to. This way, they can craft a more believable email. Want to learn more? We’ve covered phishing and spear phishing in more detail in these blogs:

How to Identify and Prevent Phishing Attacks
How to Catch a Phish: A Closer Look at Email Impersonation
Phishing vs. Spear Phishing: Differences and Defense Strategies
COVID-19: Real-Life Examples of Opportunistic Phishing Emails

Pretexting

While pretexting and phishing are categorized separately, they actually go hand-in-hand. In fact, pretexting is a tactic used in many phishing, spear phishing, vishing, and smishing attacks. Here’s how it works: hackers create a strong, fabricated rapport with the victim. After establishing legitimacy and building trust, the hacker will either blatantly ask for or trick the victim into handing over personal information.

While there are countless examples we could give, ranging from BEC scams to CEO Fraud, we’ll use a consumer-focused example. Imagine you receive a call from someone who says they work at your bank. The person on the other end of the phone (the scammer) tells you they’ve seen unusual transactions on your account and that, in order to review the transactions and pause activity, you need to confirm your full name, address, and credit or debit card number. If you do share the information, the scammer will have everything they need to access your bank account and even carry out secondary attacks with the information they’ve learned. Together with phishing, pretexting represents 98% of social engineering incidents and 93% of breaches, according to Verizon’s 2018 Data Breach Investigations Report.

Physical and virtual baiting

Like all other types of social engineering, baiting takes advantage of human nature. In particular: curiosity.
Scammers will lure the target in (examples below) before stealing their personal data, usually by infecting their computer with some type of malware. The most common type of baiting attack involves the use of physical media – like a USB drive – to disperse malware. These malware-infected USB drives are left in conspicuous areas (like a bathroom, for example) where they are likely to be seen by potential victims. To really drive interest, hackers will sometimes even label the device with curious notes like “confidential” or logos from the target’s organization to make it seem more legitimate. In an effort to identify the USB drive’s owner (or simply because they can’t help themselves), employees often plug the USB into their computer. Harmless, right? Unfortunately not. Once inserted, the USB deploys malware.

Baiting doesn’t necessarily have to take place in the physical world, though. After the outbreak of COVID-19, several new bait sites were set up. These sites feature fraudulent offers for special COVID-19 discounts, lure people into signing up for free testing, or claim to sell face masks and hand sanitizer.

Whaling attack

‘Whaling’ is a more sophisticated evolution of the phishing attack. In these attacks, hackers use very refined social engineering techniques to steal confidential information, trade secrets, personal data, and access credentials to restricted services, resources, or anything with economic or commercial value. While this sounds similar to phishing and spear phishing, it’s different. How? Whaling tends to target business managers and executives (the ‘bigger fish’) who are likely to have access to higher-level data. But it’s not just their access to data. Whaling is also seen as an effective attack vector because senior leaders themselves are perceived to be “easy targets”. Leaders tend to be extremely busy and are therefore more likely to make mistakes and fall for scams. Perhaps that’s why senior executives are 12x more likely to be the target of social engineering attacks compared to other employees.

How to defend against social engineering attacks

According to Verizon’s 2020 Data Breach Investigations Report (DBIR), 22% of breaches in 2019 involved phishing and other types of social engineering attacks. And, when you consider the average cost of a breach ($3.92 million), it’s absolutely essential that IT and security teams do everything they can to protect their employees. Here’s how:

1. Put strict policies in place

The best place to start is by ensuring that you’ve got strong policies in place that govern the use of company IT systems, including work phones, email accounts, and cloud storage. For example, you could ban the use of IT systems for personal reasons like accessing personal email accounts, social media, and non-work-related websites. You can learn more about why accessing personal email accounts and social media on work devices is dangerous in this blog: Remote Worker’s Guide to: Preventing Data Loss.

2. Educate your workforce

Awareness training is key to helping employees understand social engineering risks, learn how to spot these types of attacks, and know what to do if and when they are targeted. In addition to quarterly training sessions, either online or in-person, organizations can also invest in phishing simulations. This way, employees get some “real-world” experience without the risk of compromising data. But it’s important to note that training alone isn’t enough.
We explore this in detail in this blog: Pros and Cons of Phishing Awareness Training.

3. Filter inbound emails

90% of all data breaches begin with email. It’s one of the most common attack vectors hackers use for social engineering and other attacks. But, with the right threat management tools, IT and security teams can mitigate the risk associated with social engineering attacks by monitoring and filtering inbound emails (see the illustrative sketch at the end of this article). It’s important that solutions don’t impede employee productivity, though. For example, if a solution issues too many false positives, employees may become desensitized to warnings and end up ignoring them instead of heeding the advice. Tessian protects employees from inbound email threats without getting in the way.

How does Tessian detect and prevent social engineering?

Powered by machine learning, Tessian Defender analyzes and learns from an organization’s current and historical email data and protects employees against inbound email security threats, including whaling, CEO Fraud, BEC, spear phishing, and other targeted social engineering attacks. Best of all, it does all of this silently in the background and in real time, and in-the-moment warnings help bolster training and reinforce policies. That means employee productivity isn’t affected and security reflexes improve over time. To learn more about how Tessian can protect your people and data against social engineering attacks on email, book a demo today.
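To make point 3 above a little more concrete, here is a minimal, purely illustrative sketch of the kind of rule-style checks a basic inbound filter might run: lookalike sender domains, a mismatched Reply-To header, urgent wording, and shortened links. The domains, phrases, and thresholds are hypothetical, and this is not how Tessian Defender works; it is only a sketch of the general idea.

```python
import re

# Hypothetical list of domains this organization trusts; in practice this
# would come from configuration or directory data.
TRUSTED_DOMAINS = {"examplecorp.com", "examplebank.com"}

URGENCY_PHRASES = ("urgent", "immediately", "verify your account", "password expires")


def levenshtein(a: str, b: str) -> int:
    """Classic edit distance, used here to spot lookalike domains."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]


def score_email(sender: str, reply_to: str, subject: str, body: str) -> list:
    """Return a list of human-readable reasons an email looks suspicious."""
    reasons = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()

    # 1. Lookalike domain: close to, but not exactly, a trusted domain.
    for trusted in TRUSTED_DOMAINS:
        if sender_domain != trusted and levenshtein(sender_domain, trusted) <= 2:
            reasons.append(f"sender domain '{sender_domain}' resembles '{trusted}'")

    # 2. Reply-To pointing somewhere other than the sender's domain.
    if reply_to and reply_to.rsplit("@", 1)[-1].lower() != sender_domain:
        reasons.append("Reply-To domain differs from sender domain")

    # 3. Urgent or threatening wording in the subject or body.
    text = f"{subject} {body}".lower()
    if any(p in text for p in URGENCY_PHRASES):
        reasons.append("urgent or threatening language")

    # 4. Shortened links that hide the real destination.
    if re.search(r"https?://(bit\.ly|tinyurl\.com|t\.co)/", body, re.I):
        reasons.append("shortened link")

    return reasons


if __name__ == "__main__":
    print(score_email(
        sender="it-support@examp1ecorp.com",
        reply_to="attacker@freemail.example",
        subject="URGENT: verify your account",
        body="Your password expires today, click https://bit.ly/xyz",
    ))
```

Static rules like these are easy to reason about, but they also show why rule-based filtering alone produces false positives and misses novel attacks, which is the gap that behavioral, machine-learning approaches aim to close.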
Data Exfiltration, DLP, Human Layer Security, Spear Phishing
Research Shows Employee Burnout Could Cause Your Next Data Breach
By Maddie Rosenthal
Wednesday, August 12th, 2020
Understanding how stress impacts your employees’ cybersecurity behaviors could significantly reduce the chances of people’s mistakes compromising your company’s security, our latest research reveals.   Consider this. A shocking 93% of US and UK employees told us they feel tired and stressed at some point during their working week, with one in 10 feeling tired every day. And perhaps more worryingly, nearly half (46%) said they have experienced burnout in their career.  Then consider that nearly two-thirds of employees feel chained to their desks, as 61% of respondents in our report said there is a culture of presenteeism in their organization that makes them work longer hours than they need to. Nearly 70% of employees also agreed that there is an expectation within their company to respond to emails quickly.  Employees are overwhelmed, overworked and are feeling the pressure to keep pace with their organization’s demands. 
The effects of the pandemic

The events of 2020 haven’t helped matters either. In the wake of the global pandemic, people have experienced extremely stressful situations that affected their health and finances, against a backdrop of political uncertainty and social unrest, while simultaneously juggling the demands of their jobs. The sudden shift to remote working also meant that people were surrounded by new distractions, and over half of respondents (57%) told us they felt more distracted when working from home. According to Jeff Hancock, a professor at Stanford University who collaborated with us on this report, people tend to make mistakes or decisions they later regret when they are stressed and distracted. This is because when our cognitive load is overwhelmed, and when our attention is split between multiple tasks, we aren’t able to fully concentrate on the task in front of us.

What does this mean for security?

Not only are these findings incredibly concerning for employees’ health and wellbeing, these factors could also explain why mistakes that compromise cybersecurity are happening more than ever. The majority of employees (52%) we surveyed said they make more mistakes at work when they are stressed. Younger employees seem to be more affected by stress than their older co-workers, though. Nearly two-thirds of workers aged 18-30 years old (62%) said they make more mistakes when they are stressed, compared to 45% of workers over 51 years old. Our research also revealed that 43% and 41% of employees believe they are more error-prone when tired and distracted, respectively. In fact, people cited distraction as the top reason for falling for a phishing scam at work, while 44% said they had accidentally sent an email to the wrong person because they were tired. While these mistakes may seem trivial on the surface, phishing is the number one threat vector used by hackers today, and one in five companies told us they have lost customers as a result of an employee sending an email to the wrong person. Far from red-faced embarrassment, these mistakes are compromising businesses’ cybersecurity.
The other problem is that hackers are preying on our vulnerable states and using them to their advantage. Cybercriminals know people are stressed and looking for information about the pandemic and remote working. They know that some individuals are struggling financially and others have lost their jobs. The lure of a ‘too-good-to-be-true’ deal or ‘get a new job fast’ offer may suddenly look very appealing, especially if the email appears to have come from a trusted source, causing people to click.

So what can businesses do to protect employees from mistakes caused by burnout?

Business and security leaders need to realize that it’s unrealistic for employees to act as the company’s first line of defense. You cannot expect every employee to spot every scam or make the right cybersecurity decision 100% of the time, particularly when they’re dealing with stressful situations and working in environments filled with distractions. When faced with never-ending to-do lists and back-to-back Zoom calls, cybersecurity is the last thing on people’s minds. In fact, a third of respondents told us they “rarely” or “never” think about security when at work. Businesses, therefore, need to create a culture that doesn’t blame people for their mistakes and, instead, empowers them to do great work without security getting in the way. Understand how stress impacts people’s cybersecurity behaviors and tailor security policies and training so that they truly resonate with every employee.
Educating people on how hackers might take advantage of their stress, and explaining the types of scams they could be susceptible to, is an important first step. For example, a hacker could impersonate a senior IT director, supposedly communicating the implementation of new software to accommodate the move back into the office, and ask employees to share their account credentials. Or a hacker may pose as a trusted government agency requesting personal information in relation to a new financial relief scheme.

Businesses should also implement solutions that can help employees make good cybersecurity decisions and reduce risk over time. Security solutions like Tessian use machine learning to understand employee behavior and alert people to risks on email as and when they arise. By warning people in real time, we can explain why the email they were about to send, or have just received, is a threat to company security. It helps to make people think twice before they do something they might regret.

With remote working here to stay, and with hackers continually finding ways to capitalize on people’s stress in order to manipulate them, businesses must prioritize cybersecurity at the human layer. Only by understanding why people make mistakes that compromise cybersecurity can you begin to prevent burnout from causing your next data breach.
Spear Phishing
Smishing and Vishing: What You Need to Know About These Phishing Attacks
Monday, August 10th, 2020
Whether or not you’re familiar with the terms “smishing” and “vishing,” you may have been targeted by these attacks. This article will:

Explain what smishing and vishing attacks are, and how they relate to phishing
Provide examples of each type of attack alongside tips on how to identify them
Discuss what you should do if you’re targeted by a smishing or vishing attack

Smishing, Vishing, and Phishing

Smishing and vishing are two types of phishing attacks, sometimes called “social engineering attacks.” While 96% of phishing attacks arrive via email, hackers can also deliver them through other channels, including text messages, phone calls, and social media. Regardless of how the attack is delivered, the message will appear to come from a trusted sender and may ask the recipient to:

Follow a link, either to download a file or to submit personal information
Reply to the message with personal or sensitive information
Carry out an action such as purchasing vouchers or transferring funds

Types of phishing include “spear phishing,” where specific individuals are targeted by name, and “whaling,” where high-profile individuals such as CEOs or public officials are targeted. All these hallmarks of phishing can also be present in smishing and vishing attacks.

What Is Smishing?

Smishing — or “SMS phishing” — is phishing via SMS (text messages). The victim of a smishing attack receives a text message, supposedly from a trusted source, that aims to solicit their personal information. These messages often contain a link (generally a shortened URL) and, like other phishing attacks, they’ll encourage the recipient to take some “urgent” action, for example:

Claiming a prize
Claiming a tax refund
Locking their online banking account

Example of a Smishing Attack

Just like phishing via email, the rates of smishing continue to rise year-on-year. According to Consumer Reports, the Federal Trade Commission (FTC) received 93,331 complaints about spam or fraudulent text messages in 2018 — an increase of 30% from 2017. Here’s an example of a smishing message:
The message above appears to be from the Driver and Vehicle Licensing Agency (DVLA) and invites the recipient to visit a link. Note that the link appears to lead to a legitimate website — gov.uk is a UK government-owned domain. The use of a legitimate-looking URL is an excellent example of the increasingly sophisticated methods that smishing attackers use to trick unsuspecting people into falling for their scams.

How to Identify a Smishing Attack

As we’ve said, cybercriminals are using increasingly sophisticated methods to make their messages as believable as possible. That’s why many thousands of people fall for smishing scams every year. In fact, in a study carried out by Lloyds TSB, participants were shown 20 emails and texts, half of which were inauthentic. Only 18% of participants correctly identified all of the fakes. So, what should you look for? Just like a phishing attack via email, a smishing message will generally:

Convey a sense of urgency
Contain a link (even if the link appears legitimate, like in the example above)
Contain a request for personal information

Other clues that a message might be from a hacker include the phone number it comes from (large institutions like banks will generally send text messages from short-code numbers, while smishing texts often come from “regular” 11-digit mobile numbers) and the presence of typos. A small illustrative sketch of these checks appears at the end of this article. If you’re looking for more examples of phishing attacks (which might help you spot attacks delivered via text message), check out these articles:

How to Identify and Prevent Phishing Attacks
How to Catch a Phish: A Closer Look at Email Impersonation
Phishing vs. Spear Phishing: Differences and Defense Strategies
COVID-19: Real-Life Examples of Opportunistic Phishing Emails

What Is Vishing?

Vishing — or “voice phishing” — is phishing via phone call. Vishing scams commonly use Voice over IP (VoIP) technology. Like targets of other types of phishing attacks, the victim of a vishing attack will receive a phone call (or a voicemail) from a scammer, pretending to be a trusted person who’s attempting to elicit personal information such as credit card or login details. So, how do hackers pull this off? They use a range of advanced techniques, including:

Faking caller ID, so it appears that the call is coming from a trusted number
Utilizing “war dialers” to call large numbers of people en masse
Using synthetic speech and automated call processes

A vishing scam often starts with an automated message, telling the recipient that they are the victim of identity fraud. The message requests that the recipient call a specific number. When doing so, they are asked to disclose personal information. Hackers may then use the information themselves to gain access to other accounts or sell the information on the Dark Web.

The Latest Vishing News: Updated August 2020

On August 20, 2020, the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) issued a joint statement warning businesses about an ongoing vishing campaign. The agencies warn that cybercriminals have been exploiting remote-working arrangements throughout the COVID-19 pandemic. The scam involves spoofing login pages for corporate Virtual Private Networks (VPNs) in order to steal employees’ credentials. These credentials can be used to obtain additional personal information about the employee. The attackers then use unattributed VoIP numbers to call employees on their personal mobile phones.
The attackers pose as IT helpdesk agents and use the stolen credentials in a fake verification process to earn the employee’s trust. The FBI and CISA recommend several steps to help avoid falling victim to this scam, including restricting VPN connections to managed devices, improving 2-Step Authentication processes, and using an authentication process for employee-to-employee phone communications.

Example of a Vishing Attack

Again, just like phishing via email and smishing, the rates of vishing attacks are continually rising. According to one report, 49% of organizations surveyed were victims of a vishing attack in 2018. Vishing made headlines most recently in July 2020 with the Twitter scam. Following a vishing attack, high-profile users had their accounts hacked and used to send out tweets encouraging their followers to donate Bitcoin to a specific cryptocurrency wallet, supposedly in the name of charitable giving or COVID-19 relief. This vishing attack involved Twitter employees being manipulated, via phone, into providing access to internal tools that allowed the attackers to gain control over Twitter accounts, including those of Bill Gates, Joe Biden, and Kanye West. This is an example of spear phishing, conducted using vishing as an entry point. It’s believed that the perpetrators earned at least $100,000 in Bitcoin before Twitter could contain the attack. You can read more cybersecurity headlines from the last month here.

How to Identify a Vishing Attack

Vishing attacks share many of the same hallmarks as smishing attacks. In addition to these indicators, we can categorize vishing attacks according to the person the attacker is impersonating:

Businesses or charities — Such scam calls may inform you that you have won a prize, present you with an investment opportunity, or attempt to elicit a charitable donation. If it sounds too good to be true, it probably is.
Banks — Banking phone scams will usually incite alarm by informing you about suspicious activity on your account. Always remember that banks will never ask you to confirm your full card number over the phone.
Government institutions — These calls may claim that you are owed a tax refund or required to pay a fine. They may even threaten legal action if you do not respond.
Tech support — Posing as an IT technician, an attacker may claim your computer is infected with a virus. You may be asked to download software (which will usually be some form of malware or spyware) or allow the attacker to take remote control of your computer.

How to Prevent Smishing and Vishing Attacks

The key to preventing smishing and vishing attacks is security training. While individuals can find resources online, employers should be providing all employees with IT security training. It’s actually a requirement of data security laws, such as the General Data Protection Regulation (GDPR) and the New York SHIELD Act. You can read more about how compliance standards affect cybersecurity on our compliance hub. Training can help ensure all employees are familiar with the common signs of smishing and vishing attacks, which could reduce the possibility that they will fall victim to such an attack. But what do you do if you receive a suspicious message? The first rule is: don’t respond. If you receive a text requesting that you follow a link, or a phone message requesting that you call a number or divulge personal information — ignore it, at least until you’ve confirmed whether or not it’s legitimate. The message itself can’t hurt you, but acting on it can.
If the message appears to be from a trusted institution, search for their phone number and call the institution directly. For example, if a message appears to be from your phone provider, search for your phone provider’s customer service number and discuss the request directly with the operator.   If you receive a vishing or smishing message at work or on a work device, make sure you report it to your IT or security team. If you’re on a personal device, you should report significant smishing and vishing attacks to the relevant authorities in your country, such as the Federal Communications Commission (FCC) or Information Commissioner’s Office (ICO).  For more tips on how to identify and prevent phishing attacks, including vishing and smishing, follow Tessian on LinkedIn or subscribe to our monthly newsletter. 
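As a recap of the smishing indicators covered above (urgency, links, requests for personal information, and a full-length sender number rather than a short code), here is a minimal, hypothetical sketch of how those checks could be expressed in code. The keyword lists and thresholds are invented for illustration, and no tool this simple should be relied on in place of training or a real security product.

```python
import re

URGENT = ("act now", "urgent", "within 24 hours", "suspended", "final notice")
PERSONAL_INFO = ("password", "pin", "date of birth", "card number", "ssn")


def smishing_indicators(sender_number: str, message: str) -> list:
    """Flag the textbook warning signs described above. Purely illustrative."""
    flags = []
    digits = re.sub(r"\D", "", sender_number)

    # Banks and other large institutions usually text from short codes;
    # a full-length mobile number claiming to be your bank is a red flag.
    if len(digits) >= 10:
        flags.append("sent from a regular mobile number rather than a short code")

    if re.search(r"https?://\S+", message):
        flags.append("contains a link")

    lowered = message.lower()
    if any(word in lowered for word in URGENT):
        flags.append("conveys a sense of urgency")

    if any(term in lowered for term in PERSONAL_INFO):
        flags.append("asks for personal information")

    return flags


print(smishing_indicators(
    "+44 7911 123456",
    "URGENT: your account is suspended. Confirm your card number at https://bit.ly/fix",
))
```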
Compliance, Data Exfiltration, DLP, Human Layer Security
You Sent an Email to the Wrong Person. Now What?
By Maddie Rosenthal
Tuesday, August 4th, 2020
So, you’ve sent an email to the wrong person. Don’t worry, you’re not alone. According to Tessian research, over half (58%) of employees say they’ve sent an email to the wrong person.  We call this a misdirected email and it’s really, really easy to do. It could be a simple spelling mistake, it could be the fault of Autocomplete, or it could be an accidental “Reply All”. But, what are the consequences of firing off an email to the wrong person and what can you do to prevent it from happening?  We’ll get to that shortly. But first, let’s answer one of the internet’s most popular (and pressing) questions: Can I stop or “un-send” an email?
Can I un-send an email?

The short (and probably disappointing) answer is no. Once an email has been sent, it can’t be “un-sent”. But, with some email clients, you can recall unread messages that are sent to people within your organization. Below, we’ll cover Outlook/Office 365 and Gmail.

Recalling messages in Outlook & Office 365

Before reading any further, please note: these instructions will only work on the desktop client, not the web-based version. They also only apply if both you (the sender) and the recipient use a Microsoft Exchange account in the same organization, or if you both use Microsoft 365. In layman’s terms: you’ll only be able to recall unread emails sent to people you work with, not customers or clients. But here’s how to do it.

Step 1: Open your “Sent Items” folder
Step 2: Double-click on the email you want to recall
Step 3: Click the “Message” tab in the upper left-hand corner of the navigation bar (next to “File”) → click “Move” → click “More Move Actions” → click “Recall This Message” in the dropdown menu
Step 4: A pop-up will appear, asking if you’d like to “Delete unread copies of the message” or “Delete unread copies and replace with a new message”
Step 5: If you opt to draft a new message, a second window will open and you’ll be able to edit your original message

While this is easy enough to do, it’s not foolproof. The recipient may still receive the message. They may also receive a notification that a message has been deleted from their inbox. That means that, even if they aren’t able to view the botched message, they’ll still know it was sent. More information about recalling emails in Outlook here.

Recalling messages in Gmail

Again, we have to caveat our step-by-step instructions with an important disclaimer: this option to recall messages in Gmail only works if you’ve enabled the “Delay” function prior to fat fingering an email. The “Delay” function gives you a maximum of 30 seconds to “change your mind” and claw back the email. Here’s how to enable it.

Step 1: Navigate to the “Settings” icon → click “See All Settings”
Step 2: In the “General” tab, find “Undo Send” and choose 5, 10, 20, or 30 seconds
Step 3: Now, whenever you send a message, you’ll see “Undo” or “View Message” in the bottom left corner of your screen. You’ll have 5, 10, 20, or 30 seconds to click “Undo” to prevent it from being sent.

Note: If you haven’t set up the “Delay” function, you will not be able to “Undo” or “Recall” the message. More information about delaying and recalling emails in Gmail here.

So, what happens if you can’t recall the email? We’ve outlined the top six consequences of sending an email to the wrong person below.
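Before we get to those consequences, here is a minimal, hypothetical sketch of the idea behind a “Delay” or “Undo Send” setting: the client simply holds the message for a short window before actually handing it to the mail server, and clicking “Undo” cancels the pending delivery. This is not how Gmail is implemented internally; it is just an illustration of the concept using Python’s standard library, with a print statement standing in for the real delivery step.

```python
import threading


class DelayedSender:
    """Hold a message for `delay_seconds` before 'sending' it.

    A toy illustration of the undo-send idea: the message only goes out
    once the timer fires, and cancel() claws it back inside that window.
    """

    def __init__(self, delay_seconds: int = 30):
        self.delay_seconds = delay_seconds
        self._timer = None

    def send(self, recipient: str, body: str) -> None:
        # Schedule delivery instead of sending immediately.
        self._timer = threading.Timer(
            self.delay_seconds, self._deliver, args=(recipient, body)
        )
        self._timer.start()
        print(f"Queued for {recipient}; you have {self.delay_seconds}s to undo.")

    def cancel(self) -> bool:
        """Return True if the message was stopped before delivery."""
        if self._timer and self._timer.is_alive():
            self._timer.cancel()
            print("Send cancelled.")
            return True
        return False

    def _deliver(self, recipient: str, body: str) -> None:
        # In a real client, this is where the message would hit the mail server.
        print(f"Delivered to {recipient}: {body!r}")


sender = DelayedSender(delay_seconds=10)
sender.send("jim.morris@example.com", "Q3 proposal attached")
sender.cancel()  # caught it in time, nothing was delivered
```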
What are the consequences of sending a misdirected email?

We asked employees in the US and UK what they considered the biggest consequences of sending a misdirected email. Here’s what they had to say. Importantly, though, the consequences of sending a misdirected email depend on who the email was sent to and what information was contained within the email. For example, if you accidentally sent a snarky email about your boss to your boss, you’ll have to suffer red-faced embarrassment (which 36% of employees were worried about). If, on the other hand, the email contained sensitive customer, client, or company information and was sent to someone outside of the relevant team or outside of the organization entirely, the incident would be considered a data loss incident or data breach. That means your organization could be in violation of data privacy and compliance standards and may be fined. But incidents and breaches don’t just impact an organization’s bottom line. They can result in lost customer trust, a damaged reputation, and more. Let’s take a closer look at each of these consequences.

Fines under compliance standards. Both regional and industry-specific data protection laws outline fines and penalties for the failure to implement effective security controls that prevent data loss incidents. Yep, that includes sending misdirected emails. Under GDPR, for example, organizations could face fines of up to 4% of annual global turnover, or €20 million, whichever is greater. And these incidents are happening more often than you might think. Misdirected emails are the number one security incident reported to the Information Commissioner’s Office (ICO). They’re reported 20% more often than phishing attacks. You can read more about the biggest fines under GDPR so far in 2020 on our blog.

Lost customer trust and increased churn. Today, data privacy is taken seriously… and not just by regulatory bodies. Don’t believe us? Research shows that organizations see 2-7% customer churn after a data breach, and 20% of employees say that their company lost a customer after they sent a misdirected email. A data breach can (and does) undermine the confidence that clients, shareholders, and partners have in an organization. Whether it’s via a formal report, word-of-mouth, negative press coverage, or social media, news of lost – or even misplaced – data can drive customers to jump ship.

Revenue loss. Naturally, customer churn + hefty fines = revenue loss. But organizations will also have to pay out for investigation and remediation, and for future security costs. How much? According to IBM’s latest Cost of a Data Breach report, the average cost of a data breach today is $3.86 million.

Damaged reputation. As an offshoot of lost customer trust and increased customer churn, organizations will – in the long term – also suffer from a damaged reputation. Like we’ve said: people take data privacy seriously. That’s why, today, strong cybersecurity actually enables businesses and has become a unique selling point in and of itself. It’s a competitive differentiator.
Of course, that means that a cybersecurity strategy that’s proven ineffective will detract from your business. But individuals may also suffer from a damaged reputation or, at the very least, will be embarrassed. For example, the person who sent the misdirected email may be labeled careless, and security leaders might be criticized for their lack of controls. This could lead to…

Job loss. Unfortunately, data breaches – even those caused by a simple mistake – often lead to job losses. It could be the Chief Information Security Officer, a line manager, or even the person who sent the misdirected email. It goes to show that security really is about people. That’s why, at Tessian, we take a human-centric approach and, across three solutions, we prevent human error on email, including accidental data loss via misdirected emails.
How does Tessian prevent misdirected emails?

Tessian turns an organization’s email data into its best defense against human error on email. Powered by machine learning, our Human Layer Security technology understands human behavior and relationships, enabling Tessian Guardian to automatically detect and prevent anomalous and dangerous activity like emails being sent to the wrong person. Importantly, Tessian’s technology automatically updates its understanding of human behavior and evolving relationships through continuous analysis and learning of the organization’s email network. That means that if, for example, you frequently worked with “Jim Morris” on one project but then stopped interacting with him over email, Tessian would understand that he probably isn’t the person you meant to send your most recent (highly confidential) project proposal to. Crisis averted. Interested in learning more about how Tessian can help prevent accidental data loss and data exfiltration in your organization? You can read some of our customer stories here or book a demo.
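To illustrate the general idea behind relationship-aware warnings (and only the general idea; this is not Tessian’s model, which learns far richer behavioral signals), here is a tiny, hypothetical sketch: if a recipient has never been emailed before, or hasn’t been emailed in months, the draft gets flagged before it leaves the outbox. The contact history and the 90-day staleness threshold are made up for the example.

```python
from datetime import date, timedelta

# Hypothetical sent-mail history: recipient -> date last emailed.
last_contact = {
    "jim.morris@partner.example": date(2020, 2, 3),   # stopped emailing months ago
    "anna.lee@client.example": date(2020, 7, 30),
}

STALE_AFTER = timedelta(days=90)


def recipient_warnings(recipients: list, today: date) -> list:
    """Warn on recipients the sender has never emailed or hasn't emailed recently."""
    warnings = []
    for addr in recipients:
        last = last_contact.get(addr)
        if last is None:
            warnings.append(f"{addr}: no previous email history with this person")
        elif today - last > STALE_AFTER:
            warnings.append(f"{addr}: last contacted {last.isoformat()}; is this still the right person?")
    return warnings


print(recipient_warnings(["jim.morris@partner.example"], today=date(2020, 8, 4)))
```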
Customer Stories, DLP, Human Layer Security
Data Leakage and Exfiltration: 7 Problems Tessian Helps Solve
Monday, August 3rd, 2020
On Wednesday, July 29, Tessian hosted a webinar with two customers: Euromoney Institutional Investor and ERT. The topic? Data exfiltration and reduced visibility while workforces are remote. Martyn Booth, Chief Information Security Officer (CISO) at Euromoney Institutional Investor, and Ted Crawford, Chief Information Officer (CIO) at ERT, both offered incredible insights about how things have changed from a security perspective over the last four months and how Tessian has helped them lock down email, even before their employees started working from home. And, because Martyn and Ted are two security leaders in different industries (Financial Services and Tech/Healthcare respectively) and are based in different regions (England and the United States), they were able to share diverse opinions and experiences. Keep reading to learn more about how Tessian has helped them solve some of their biggest pain points.

7 Problems Tessian Helps Solve

1. Tessian prevents accidental data loss on email

When you hear data exfiltration, what do you think of? Many of you probably thought immediately about Insider Threats and other malicious activity. But, as our customers pointed out, most incidents involving data loss are accidental. Or, as Martyn put it, are the result of “naive email usage”. It could be an employee sending an email to the wrong person (we call this a misdirected email), it could be someone hitting “reply all”, or it could be someone emailing a spreadsheet to their personal email account to work on over the weekend. Harmless, right? Not exactly. If these “accidents” involve sensitive information related to employees, customers, clients, or the company itself, it’s considered a breach. Organizations can prevent all of the above with Tessian Guardian. This is especially important now that employees are working remotely. Why? Because the lines between peoples’ personal and professional lives are blurred. Beyond that, people are distracted, stressed, and tired which, as we’ve shown in our latest research report The Psychology of Human Error, increases the likelihood that a mistake will happen.

2. Tessian prevents malicious data exfiltration on email

While many data loss incidents are accidental, some employees do intentionally exfiltrate data. There are a number of reasons why, but financial gain and a competitive edge are the most likely motivators. Unfortunately, with so many people being laid off, made redundant, or furloughed, many organizations have seen a spike in this type of malicious activity. But, with Tessian Enforcer, organizations’ most sensitive data is kept safe. Employees attempting to email sensitive information to themselves or a suspicious third party will receive a warning message, explaining why the email has been flagged and asking if they’re sure they want to proceed. At the same time, security teams will get a notification.
Note: Instead of warning the employee and asking if they’d like to send the email anyway, security teams can easily configure Tessian to automatically quarantine emails that look like data exfiltration. Book a demo to see Tessian in action.

3. Tessian makes it easy to report security risks and communicate ROI

Communicating cybersecurity ROI has historically been a real challenge for security leaders. Not with Tessian. Martyn explained how Tessian enables him to share key results with executives and demonstrate the effectiveness of not just the solution, but his overall strategy. “One of the pillars of our infrastructure strategy was to build transparency across the organization. This comes from sharing metrics. With Tessian, we can show how many alerts were picked up and, each month, we can show the risk committee that we’re reducing the number of alerts. Now, are they actually interested in our preventative controls? I don’t think so. But the whole point of the metrics program is to show how well (or badly) our strategy is performing. Before, they would make their decision based on cost or how much risk they thought we were going to be mitigating. It was quite subjective. We’ve moved that now into something more data-based. We can actually say ‘Well, actually, we pay x per year and, as a result of that, we’re going in the right direction in terms of our risk mitigations.’”

4. Tessian helps organizations stay compliant

Both Healthcare and Financial Services are highly regulated industries that are bound by several compliance standards beyond GDPR. That’s why, for Ted, protecting sensitive clinical data and ensuring “privacy and security by design” are both paramount. “There’s a lot of data that we need to protect and prevent from getting outside of the four walls of ERT,” he said. “As an offshoot of GDPR in 2018, we had to classify all of the data, determine from a privacy perspective how to treat it from a sensitivity perspective, and then decide how to treat it from a security perspective. Because it’s very easy to pull sensitive data and incur data loss on email, we needed a solution that would help us ensure data isn’t distributed where it shouldn’t go. That’s why we approached Tessian.” For more information about compliance in Financial Services, check out this article: Ultimate Guide to Data Protection and Compliance in Financial Services.
5. Tessian saves security teams time

While essential for compliance, classifying (and re-classifying) data, monitoring movement, investigating incidents, and generating reports all take a lot of time. That’s why 85% of IT leaders say rule-based DLP is admin-intensive. With Tessian, security teams don’t have to do any of the above manually. This is a big selling point for Martyn, who said, “That’s where we really see the value with Tessian. It takes the burden off of people in my security team.” Tessian is powered by machine learning algorithms that have been trained on billions of data points. That means our solutions automatically understand what is and isn’t normal behavior for individual employees and can, therefore, detect and prevent threats before they turn into incidents or breaches. No rules required. You can read more about our technology here.

6. Tessian gives security teams clear visibility of risks

We’ve talked a lot about how Tessian detects and prevents risks. But for a solution to be really successful, it has to give security teams clear visibility of the risks in their organization. Tessian’s Human Layer Security platform does both. With Tessian Human Layer Security Intelligence, our customers can easily and automatically get detailed insights into employees’ actions. For example, imagine that in a single week, Tessian detects 12 different employees attempting to send sensitive information to their personal email accounts. When warned that sending the email is against company policy, nine of the employees opted not to send the email. The other three went ahead. Knowing this, security leaders can focus their efforts on the three that went ahead and offer additional, targeted training or, if necessary, escalate the incident to a line manager to issue a more formal warning. (A minimal, illustrative sketch of this kind of check appears at the end of this article.) This also helps predict future behavior. For example, if Tessian flags that an employee has sent upwards of 20 attachments – including Intellectual Property that would be valuable to a competitor – to a recipient he or she has no previous email history with, soon after being denied a raise or promotion, security teams could infer that the employee is resigning and taking company data with them. And, to prevent any further data exfiltration attempts, they can create custom filters specifically for that user, including customized warning messages or a filter that automatically blocks future exfiltration attempts. Before Tessian, this wasn’t possible for Martyn. “Even if we suspected that an employee was going to go to a competitor and take data, we couldn’t check. We couldn’t see anything that was going up to the Cloud. It was all encrypted. The only way we would be able to see what people were emailing would be to actually go through individual emails to find ones that were problematic. We didn’t have time for that,” he said.
7. Tessian helps reinforce training and improve employees’ security reflexes with in-the-moment warnings

In the example above, three employees opted to send an email after being warned that doing so would be against company policy. But what about the other nine? The warning message changed their behavior. It also incentivized them to accurately mark emails as confidential or malicious if they were, in fact, confidential or malicious. This is really important. “You can’t take a ‘big bang’ approach to data privacy awareness training. To really see employees empowered, you have to constantly reinforce training,” Ted said. The bottom line: for training to be effective long-term, employees need to apply what they learn to real-world situations and be reminded of policies in the moment. Over time, this will help improve their security reflexes and help build a more positive security culture. Henry Trevelyan Thomas, the host of the webinar and Tessian’s Head of Customer Success, summarized the benefits of this for both employees and security leaders: “This is a really productive way to help employees take accountability for how they handle data. It democratizes security and takes some of the weight off of the Chief Information Security Officer’s shoulders.”

Tessian can help prevent data exfiltration in your organization, too

Tessian turns an organization’s email data into its best defense against inbound and outbound email security threats. Powered by machine learning, our Human Layer Security technology understands human behavior and relationships, enabling it to automatically detect and prevent anomalous and dangerous activity.

Tessian Enforcer detects and prevents data exfiltration attempts
Tessian Guardian detects and prevents misdirected emails

Importantly, Tessian’s technology automatically updates its understanding of human behavior and evolving relationships through continuous analysis and learning of the organization’s email network. Oh, and it works silently in the background, meaning employees can do their jobs without security getting in the way. Interested in learning more about how Tessian can help prevent accidental data loss and data exfiltration in your organization? You can read some of our customer stories here or book a demo.
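As promised in point 6 above, here is a minimal, hypothetical sketch of the kind of check a simple outbound filter might run to spot an employee emailing company data to a personal account. The freemail list, the name-matching heuristic, and the attachment threshold are all invented for illustration; Tessian’s behavioral approach is not rule-based like this.

```python
FREEMAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "icloud.com"}


def looks_like_exfiltration(sender: str, recipient: str, attachment_count: int) -> list:
    """Flag outbound emails that match the pattern described in point 6 above."""
    reasons = []
    sender_local, _, _ = sender.partition("@")
    recip_local, _, recip_domain = recipient.partition("@")

    if recip_domain.lower() in FREEMAIL_DOMAINS:
        reasons.append("recipient is a personal (freemail) address")

        # The same name on both sides suggests the employee is emailing themselves.
        normalized_sender = sender_local.lower().replace(".", "")
        normalized_recip = recip_local.lower().replace(".", "").rstrip("0123456789")
        if normalized_sender == normalized_recip:
            reasons.append("recipient name matches the sender's own name")

    if attachment_count >= 10:
        reasons.append(f"unusually high attachment count ({attachment_count})")

    return reasons


print(looks_like_exfiltration(
    sender="jane.doe@examplecorp.com",
    recipient="jane.doe88@gmail.com",
    attachment_count=20,
))
```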
Compliance
Ultimate Guide to Data Protection and Compliance in Financial Services
Monday, August 3rd, 2020
Over the last few decades – and driven by the digital transformation – compliance has become a core part of the financial services sector. But, today, security, compliance, and legal teams aren’t just ensuring that regulatory obligations are met because they’re legally compelled to. Compliance plays an important role in protecting firms’ reputations. The problem is, compliance is broad and multi-faceted. There are many ways in which a firm can fall out of compliance, especially in sensitive industries such as finance. Why? Because one of the leading causes of non-compliance is data loss and, according to one report, 62% of breached data came from financial services in 2019. You can learn more about the frequency of data loss incidents in financial services here: The State of Data Loss Prevention in the Financial Services Sector.

The regulatory framework

When it comes to privacy and data security, the financial services sector faces a strict regulatory environment compared to other sectors, and in major markets like the United States, the European Union, and the United Kingdom, financial services compliance is governed by intricate regulatory frameworks. That’s why we’ve put this article together. We’ve compiled a list of the three compliance standards most relevant to those working in financial services and have outlined the key requirements of each, as well as exactly which organizations are affected. Looking for something specific? Click the text below to jump down the page.
Gramm-Leach-Bliley Act (GLBA)

The US arguably has the most complex regulatory regime for financial products and services. Why? There’s a long list of reasons, including national politics and the country’s federalist nature. But the federal GLBA is the “big one” that covers all “financial institutions,” a broad definition that includes any business that is “significantly engaged in providing financial products or services.” These include:

Banks and related services
Investment firms
Non-bank lenders (e.g. interest-free finance, payday loans)
Mortgage brokers
Real estate appraisers

What are the main compliance obligations under the GLBA?

The primary compliance obligation for firms under the GLBA is the requirement to develop a written security program that outlines how they safeguard consumer information. It is a fairly flexible obligation that requires firms to:

Designate an employee to manage the program;
Identify risks in operational areas and assess relevant security safeguards; and
Adjust the program as risk factors develop.

Although the GLBA is flexible, financial services firms are expected to implement basic protections against cybersecurity risks. These include encrypting customer information and implementing solutions that prevent inbound and outbound threats. Find out why protecting data on email is especially important.

What are the penalties for non-compliance?

GLBA violations can attract hefty penalties, including fines of up to $100,000 per violation and prison time of up to five years.

Financial Services and Markets Act 2000 (FSMA)

In the UK, the primary piece of legislation that governs the regulated financial services market is the Financial Services and Markets Act 2000. This piece of legislation also establishes regulatory bodies like the Financial Conduct Authority (FCA), which is responsible for the regulation of conduct in wholesale financial markets. The FCA’s objectives include:

Ensuring market confidence and financial stability;
Promoting public awareness;
Protecting consumers (i.e. from instances of data loss); and
Reducing financial crime.

Prior to the FSMA, compliance was viewed as a low priority within firms. The FSMA was introduced to act as a full, accurate, and accessible document that outlines the roles and responsibilities of the financial services and market industries.

Who does the FSMA apply to?

Any authorized firm conducting regulated financial activities such as deposit taking, insurance-related activities, financing activities, and consumer credit activities.

What requirements exist concerning compliance under the FSMA?

Regulated firms must have systems in place to ensure they are compliant with applicable laws. Like many other compliance standards, though, the Act does not specify which systems. But, if we’re talking specifically about firms’ obligation to prevent data loss, DLP solutions are a good place to start. We have plenty of DLP resources, including an overview of what data loss prevention is, how it works, and an overview of current DLP solutions. Controls, systems, and compliance programs can vary depending on the size of the firm and its regulated activities. There are several ways that compliance best practice can be conveyed to firms, including through thematic reviews by the FCA.

General Data Protection Regulation (GDPR)

If you hadn’t heard of the other two compliance standards on this list, you’ve almost certainly heard of this one.
At the time of the GDPR’s introduction in 2018, it was the largest change to data protection legislation in almost 20 years, and it’s where financial services firms around the world can find some of the most thorough guidance on their compliance obligations. It gives regulators the power to impose hefty fines on organizations that are not compliant, and it has shaken up many industries where wide-scale privacy changes are required to achieve compliance. Read more about the biggest fines issued so far in 2020 on our blog.

What is the GDPR for?

The GDPR was established amid growing concerns around the safety of personal data and the need to protect it from hackers, Insider Threats, and unethical use. It effectively puts individuals back in control of their data, giving them the power to control how businesses use it. You must be able to move or dispose of this data if requested. Still scratching your head? We’ve answered 13 FAQs about GDPR.

How does the GDPR impact the financial services industry?

The GDPR impacts the sector in a few distinct ways.

You must have client consent

The GDPR says that you must explicitly gain consent to gather personal data and say why you are collecting it. You must also gain additional consent if you wish to share this information. Personal information refers to anything that could be used to identify an individual, such as:

Names
Email addresses
Social media profiles
IP addresses

You have end-to-end accountability for data

IT systems are at the core of any financial firm and constantly have data passing through them. The GDPR requires firms to understand all the dataflow across their organization and reduce exposure to external vendors and parties. Firms must also ensure vigilance when sharing data, particularly across borders. In layman’s terms: the GDPR holds businesses accountable for safeguarding customer data. Organizations are obligated to take steps to ensure data isn’t disclosed, either intentionally or accidentally, where there isn’t a legitimate reason. Did you know that misdirected emails are the number one data loss incident reported to the Information Commissioner’s Office (ICO)? Learn more about the consequences of “fat fingering” an email here.

Your clients have a right to erasure

GDPR gives your clients the right to ask for their data to be removed without the need for any outside authorization. Financial institutions can keep some data to ensure compliance with other regulations (for example, information relevant to credit records) but in all other circumstances, data must be destroyed when requested.

You are bound by strict protocols in the event of a loss

Before GDPR, firms could adopt their own protocols in the event of a data breach. Now, GDPR compels firms to report any data breach, no matter how big or small, to the relevant regulatory or supervisory authority (such as the ICO) within 72 hours. The notification must contain:

Relevant details regarding the nature of the breach;
The approximate number of people impacted; and
Contact details of the firm’s Data Protection Officer (DPO).

Impacted clients must also be notified of the breach, the potential outcome, and any remediation “without undue delay”. That’s one reason why a data breach can negatively impact reputation and customer trust. But those aren’t the only consequences.

What are the penalties for non-compliance?

Penalties for non-compliance are very harsh and can be as severe as a fine of 4% of annual global turnover or €20 million, whichever is higher.
And they’re being handed out more often now too, with over 36 fines issued in March 2020 alone. That’s a new record.  That means ensuring compliance is essential.  Tessian helps financial services firms stay compliant Financial services firms are under increased pressure to monitor and control their data and restrict the movement of it to prevent both accidental and deliberate loss.  Of all the places where data can be lost, email represents one of the most common. In fact, 90% of data breaches begin with email. Why? Because it’s a threat vector for both inbound and outbound threats like phishing, data exfiltration, and misdirected emails.  Tessian prevents all these threats using machine learning by monitoring and applying human understanding to email behavior. Across three solutions, Tessian analyzes email data to understand and interpret communications and steps in when it detects that something’s “off”. For example, if an employee sends company data to a personal email account or if someone receives an email with a suspicious domain that could be a phish. Best of all, Tessian works quietly in the background, doesn’t disrupt workflow, and helpful, in-the-moment warnings reinforce training and remind employees of existing policies. That means it’s good for everyone. Learn more about how Tessian has been used by financial institutions such as Evercore, Man Group, and Premier Asset Management to proactively protect customer data and achieve full compliance. You can read more customer stories here.
Human Layer Security, Spear Phishing
Pros and Cons of Phishing Awareness Training
By Maddie Rosenthal
Monday, August 3rd, 2020
Over the last several weeks, phishing, spear phishing, and social engineering attacks have dominated headlines. But, phishing isn’t a new problem. These scams have been circulating since the mid-’90s.  So, what can security leaders do to prevent being targeted? Unfortunately, not much. Hackers play the odds and fire off thousands of phishing emails at a time, hoping that at least a few will be successful. The key, then, is to train employees to spot these scams. That’s why phishing awareness training is such an essential part of any cybersecurity strategy. But is phishing awareness training alone enough? Keep reading to find out the pros and cons of phishing awareness training as well as the steps security leaders need to take to level up their inbound threat protection. Still wondering how big of a problem phishing really is? Check out the latest phishing statistics for 2020.
To make this article easy to navigate, here's a quick summary of the pros and cons of phishing awareness training before we look at each point in more detail below.

Pros: it introduces employees to threats they might not be familiar with; it reinforces existing policies and procedures; it helps security leaders identify risky and at-risk employees; it helps satisfy compliance standards; it fosters a strong security culture; and it helps employees spot scams in their personal lives, too.

Cons: it can't prevent human error; it can't evolve as quickly as threats do; it has hidden costs; it isn't targeted (or engaging) enough; it can't force employees to care about cybersecurity; and it can't change quick-to-click company cultures.
Pros of phishing awareness training

Phishing awareness training introduces employees to threats they might not be familiar with

While people working in security, IT, or compliance are all too familiar with phishing, spear phishing, and social engineering, the average employee isn't. The reality is, they might not have even heard of these terms. That means phishing awareness training is an essential first step: to successfully spot a phish, employees have to know such scams exist in the first place. By showing employees examples of attacks – including the subject lines to watch out for, a high-level overview of domain impersonation, and the types of requests hackers will generally make – they'll immediately be better placed to identify what is and isn't a phishing attack. Looking for resources to help train your employees? Check out this blog with a shareable PDF. It includes examples of phishing attacks and reasons why the email is suspicious.

Phishing awareness training can teach employees more about existing policies and procedures

Again, showing employees what phishing attacks look like is step one. But ensuring they know what to do if and when they receive one is an essential next step, and it's your chance to remind employees of existing policies and procedures – for example, who to report attacks to within the security or IT team. Importantly, though, phishing awareness training should also reinforce the importance of other policies, specifically around creating strong passwords, storing them safely, and updating them frequently. After all, credentials are the number one "type" of data hackers harvest in phishing attacks.

Phishing awareness training can help security leaders identify particularly risky and at-risk employees

By getting teams across departments together for training sessions and phishing simulations, security leaders will get a bird's-eye view of employee behavior. Are certain departments or individuals more likely to click a malicious link than others? Are senior executives skipping training sessions? Are new starters struggling to pass post-training assessments? These observations will help security leaders stay ahead of security incidents, can inform subsequent training sessions, and could help pinpoint gaps in the overall security framework.
Phishing awareness training can help satisfy compliance standards While you can read more about various compliance standards – including GDPR, CCPA, HIPAA, and GLBA – on our compliance hub, they all include a clause that outlines the importance of implementing proper data security practices. What are “proper data security practices?” This criterion has – for the most part – not been formally defined. But, phishing awareness training is certainly a step in the right direction and demonstrates a concerted effort to secure data company-wide.   Phishing awareness training can help foster a strong security culture In the last several years (due in part to increased regulation) cybersecurity has become business-critical. But, it takes a village to keep systems and data safe, which means accountability is required from everyone to make policies, procedures, and tech solutions truly effective.  That’s why creating and maintaining a strong security culture is so important. While this is easier said than done, training sessions can help encourage employees – whether in finance or sales – to become less passive in their roles as they relate to cybersecurity, especially when gamification is used to drive engagement. You can read more about creating a positive security culture on our blog. Phishing awareness training can enable employees to spot scams in their personal lives, too The point of phishing awareness training is to prevent successful attacks in the workplace. But, it’s important to remember that phishing attacks are targeted at consumers, too. That’s why the most frequently impersonated brands are household names like Netflix and Facebook. Why does this matter? Because phishing attacks have serious consequences, and not just for larger organizations. If an employee was scammed in a consumer attack, they could lose thousands of dollars or even have their identity stolen. It’s hard to imagine a world in which this wouldn’t affect their work. The bottom line: prevention is better than cure and knowledge is power. Phishing awareness training won’t just protect your organization’s data and assets, it’ll empower your people to protect themselves outside of the office, too. 
Cons of phishing awareness training

Phishing awareness training can't prevent human error

While phishing awareness training will help employees spot phishing scams and make them think twice before clicking a link or downloading an attachment, it's not a silver bullet. Even the most security-conscious and tech-savvy employees can – and do – fall for phishing attacks. Case in point: employees working in the tech industry are the most likely to click on links in phishing emails, with nearly half (47%) admitting to having done it. This is 22% higher than the average across all industries. As the saying goes, "to err is human".

Phishing awareness training can't evolve as quickly as threats do

Hackers think and move quickly and are constantly crafting more sophisticated attacks to evade detection. That means that training that was relevant three months ago may not be today. We only have to look at the spike in COVID-19 themed phishing attacks starting in March for proof. Prior to the outbreak of the pandemic, very few phishing awareness programs would have trained employees to look for impersonations of the World Health Organization, for example. Likewise, impersonations of collaboration tools like Zoom took off as soon as workforces shifted to remote-working. (Click here for more real-life examples of COVID-19 phishing emails.) What could be next?

Phishing awareness training has hidden costs

According to Mark Logsdon, Head of Cyber Assurance and Oversight at Prudential, there are three fundamental flaws in training: it's boring, often irrelevant, and expensive. We'll cover the first two below but, for now, let's focus on the cost. Needless to say, the cost of training and simulation software varies from vendor to vendor. But the solution itself is far from the only cost to consider. What about lost productivity? Imagine you have a 1,000-person organization and, as part of an aggressive inbound strategy, you've opted to hold training every quarter. Training lasts, on average, three hours. That's 12,000 lost hours a year. While – yes – a successful attack would cost more, we can't forget that phishing awareness training alone doesn't work. (See point 1: Phishing awareness training can't prevent human error.)
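To sanity-check that lost-hours figure, here's the arithmetic spelled out; the hourly cost at the end is an assumed rate, added purely to illustrate how quickly the numbers add up.

```python
# Back-of-the-envelope estimate of productivity lost to quarterly training.
employees = 1_000        # headcount from the example above
sessions_per_year = 4    # quarterly training
hours_per_session = 3    # average session length

lost_hours = employees * sessions_per_year * hours_per_session
print(lost_hours)  # 12000 hours a year

# Assumed fully-loaded cost per employee-hour, for illustration only.
hourly_rate_usd = 50
print(f"${lost_hours * hourly_rate_usd:,}")  # $600,000
```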
Phishing awareness training isn’t targeted (or engaging) enough Going back to what Mark Logsdon said: Training is boring and often irrelevant. It’s easy to see why. You can’t apply one lesson to an entire organization – whether it’s 20 people or 20,0000 – and expect it to stick. It has to be targeted based on age, department, and tech-literacy. Age is especially important.  According to Tessian’s latest research, nearly three-quarters of respondents who admitted to clicking a phishing email were aged between 18-40 years old. In comparison, just 8% of people over 51 said they had done the same. However, the older generation was also the least likely to know what a phishing email was. !function(e,t,s,i){var n="InfogramEmbeds",o=e.getElementsByTagName("script"),d=o[0],r=/^http:/.test(e.location)?"http:":"https:";if(/^\/{2}/.test(i)&&(i=r+i),window[n]&&window[n].initialized)window[n].process&&window[n].process();else if(!e.getElementById(s)){var a=e.createElement("script");a.async=1,a.id=s,a.src=i,d.parentNode.insertBefore(a,d)}}(document,0,"infogram-async","//e.infogram.com/js/dist/embed-loader-min.js"); Jeff Hancock, the Harry and Norman Chandler Professor of Communication at Stanford University and expert in trust and deception, explained how tailored training programs could help. “A one-size-fits-all approach won’t work. Different generations have grown up with tech in different ways, and security training needs to reflect this. That’s not to say that we should think that people over 50 are tech-illiterate, though. Businesses need to consider what motivates each age group and tailor training accordingly.”  “Being respected at work is incredibly important to an older generation, so telling them that they don’t understand something isn’t an effective way to educate them on the threats. Instead, businesses should engage them in a conversation, helping them to identify how their strengths and weaknesses could be used against them in an attack.”  “Many younger employees, on the other hand, have never known a time without the internet and they don’t want to be told how to use it. This generation has a thirst for knowledge, so teach them the techniques that hackers will use to target them. That way, when they see a scam, they’ll be able to unpick it and recognize the tactics being used on them.”   Phishing awareness training can’t force employees to care about cybersecurity Unfortunately, the average employee is less focused on cybersecurity and more focused on getting their jobs done. That’s why one-third (33%) rarely or never think about security and work and over half (54%) of employees say they’ll find a workaround if security software or policies prevent them from doing their job.  While – yes – security leaders can certainly reinforce the importance of software and policies, training alone won’t help control employee’s behavior or inspire every single person to become champions of cybersecurity. Phishing awareness can’t change quick-to-click company cultures It’s widely accepted that time pressure negatively impacts decision accuracy. But did you know that individuals who are expected to respond to emails quickly are also the most likely to click on phishing emails?  
It makes sense. If you're rushing to read and fire off emails – especially when you're working off of laptops, phones, and even watches – you're more likely to make mistakes.
Should I create a phishing awareness training program?

The short answer: absolutely. Phishing awareness training programs can help teach employees what phishing is, how to spot phishing emails, what to do if they're targeted, and the implications of falling for an attack. But, as we've said, training isn't a silver bullet. It will curb the problem, but it won't prevent mistakes from happening. That's why security leaders need to bolster training with technology that detects and prevents inbound threats. That way, employees aren't the last line of defense. But, given the frequency of attacks year-on-year, it's clear that spam filters, antivirus software, and other legacy security solutions aren't enough. That's where Tessian comes in.

How does Tessian detect and prevent targeted phishing attacks?

Tessian fills a critical gap in security strategies that SEGs, spam filters, and training alone can't. By learning from historical email data, Tessian's machine learning algorithms can understand specific user relationships and the context behind each email. This allows Tessian Defender to detect a wide range of impersonations, spanning everything from more obvious, payload-based attacks to difficult-to-spot, socially engineered ones like CEO Fraud and Business Email Compromise. Once detected, real-time warnings are triggered that explain exactly why the email was flagged, including specific information from the email. (See below.) This is an important function. Why? Because, according to Jeff, "People learn best when they get fast feedback and when that feedback is in context."
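To make the idea of relationship- and context-based detection a little more concrete, here's a deliberately simplified sketch of how an unusual sender might trigger a warning. To be clear, this is not Tessian's actual algorithm: the sender history, the lookalike check, and the warning wording are all assumptions made purely for illustration.

```python
from typing import Optional

# Simplified, illustrative sketch only. "known_senders" stands in for the
# relationships a real system would learn from historical email data.
known_senders = {"colleague@acme.com", "billing@vendor.co.uk"}

def looks_similar(a: str, b: str) -> bool:
    # Crude lookalike check: same length, exactly one character differs.
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def check_inbound(sender: str, display_name: str) -> Optional[str]:
    """Return an in-the-moment warning if the sender looks unusual, else None."""
    domain = sender.split("@")[-1].lower()
    known_domains = {s.split("@")[-1] for s in known_senders}
    for known in known_domains:
        if domain != known and looks_similar(domain, known):
            return (f"Warning: '{domain}' looks like '{known}', but you don't "
                    "normally receive email from it.")
    if sender.lower() not in known_senders:
        return f"Warning: you've never received an email from {display_name} <{sender}> before."
    return None

# "acne.com" differs from the known "acme.com" by a single character.
print(check_inbound("ceo@acne.com", "The CEO"))
```

A production system would, of course, weigh far richer signals than this toy check: message content, tone, payloads, sending patterns, and so on. The point is simply that warnings grounded in a user's own email history provide the fast, contextual feedback described above.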
These in-the-moment warnings reinforce training and policies and help employees improve their security reflexes over time.  To learn more about how tools like Tessian Defender can prevent spear phishing attacks, speak to one of our experts and request a demo today.
July Cybersecurity News Roundup
By Maddie Rosenthal
Friday, July 24th, 2020
If you keep up with cybersecurity news, you’ll know it’s been a busy month. We’ve seen headlines around social engineering attacks, the CCPA, coronavirus vaccine data, critical patches, and banned social media applications.  We’ve rounded up some of the top stories from July, including must-know information and links to various supporting resources.  Coronavirus: Russian Spies Target COVID-19 Vaccine Research After pharmaceutical companies and research centers in Great Britain were hacked, four agencies in the US, UK and Canada issued a joint warning, saying that Cozy Bear – a group that “almost certainly operate as a part of Russian intelligence services” – was responsible and that they were targeting organizations trying to develop a coronavirus vaccine. While the UK’s National Cyber Security Centre (NCSC) hasn’t revealed which organizations were targeted or whether any information had been stolen, they have made it clear that vaccine research wasn’t compromised.  In their warning, the US, UK, and Canadian agencies said that hackers not only exploited software flaws to gain access to computer systems, but they also used malware, and tricked employees into handing over login credentials with phishing and spear phishing attacks. Check out our guide: Coronavirus and Cybersecurity: How to Stay Safe From Phishing Attacks. Twitter Accounts Hacked in Bitcoin Scam On July 15, the official accounts of Barack Obama, Joe Biden, Elon Musk, Bill Gates and other celebrities and politicians were hacked in an apparent Bitcoin scam. 
According to Twitter, it was a coordinated social engineering attack involving an Insider that targeted employees who had access to internal systems and tools. This access was then used to take control of various accounts. And, in an update from the social media giant on Wednesday, July 22, it was announced that cybercriminals didn’t just tweet from hacked accounts, they also accessed the direct messages of around 36 people, including a Dutch politician.  The Federal Bureau of Investigation (FBI) is now involved and other lawmakers (on both sides of the political spectrum) are asking Twitter for transparency into what happened and how it can be prevented in the future. Emotet Spam Trojan Surges Back to Life After 5 Months of Silence After going dark five months ago, 2019’s most active malware – Emotet botnet – is back. The latest campaign (the first attack was spotted on July 17) is firing off spam emails, trying to infect users in the US and the UK with its malware. According to one researcher, the campaign is “ongoing” and reached 250,000 messages in just one day.  Here’s how it works: malicious Word attachments or URLs are contained within emails and, if clicked by targets, Emotet will be downloaded and installed. This initial foothold is then used to deploy other malware. What do you do if you’re infected? Isolate the infected system and take the entire network offline.  15 Billion Usernames and Passwords are For Sale on The Dark Web We often say that data is valuable currency but, after a report was released in early July, we can see just how much our personal information is worth. The report, From Exposure to Takeover, found that 100,000 data breaches over a two-year period have yielded a 300% increase in stolen credentials. That means that, today, there are fifteen billion usernames and passwords for sale on the dark web. These compromised credentials are being sold for an average price of $15.43. But, hackers can “rent” an identity for as little as $10. So, how are hackers getting their hands on this data? Phishing, credential-stealing malware, and credit-card skimmers are three of the most popular ways. Research Shows How to Prevent Mistakes Before They Become Breaches  The Psychology of Human Error, the latest report from Tessian, examines not only the mistakes people make at work, but why they make those mistakes. These are important questions to answer, especially when the research shows that nearly half (43%) of employees say they’ve made a mistake at work that had security repercussions for themselves or their company. The findings reveal that younger employees are more likely to make mistakes, that men are more likely than women to fall for phishing scams, and that fast-paced company cultures are driving employees to make more mistakes. The research also outlines that those employees who are distracted (which many people are when working from home) or tired are more likely to fall for phishing scams.  Read the full report to learn more, including what security leaders can do to combat the problem. In a rush? You can read an overview of the key findings here. Microsoft Patches Critical 17-year-old DNS Bug in Windows Server As a part of Microsoft’s monthly security update – called Patch Tuesday – 123 security flaws across 13 products were fixed. The most severe? 
The flaw is known as CVE-2020-1350 Windows DNS Server Remote Code Execution Vulnerability, and points to a problem with Microsoft's implementation of DNS that can result in a server improperly handling domain name resolution requests. Researchers say hackers can exploit this vulnerability and weaponize it to create wormable malware that would allow them to gain Domain Administrator rights and take control of an entire network. Patches are available for several versions of Windows Server, going back as far as 2003, and Microsoft has advised that organizations install the patch as soon as possible. Note: The vulnerability is limited to Microsoft's Windows DNS Server implementation, so Windows DNS clients are not affected. Biden Ups the Cybersecurity Game Ahead of Elections The 2016 election made it clear how important cybersecurity is in politics. As a preventive measure, some (although very few) candidates in this year's election have brought on Chief Information Security Officers (CISOs). The latest announcement came from Joe Biden, who announced that Chris DeRusha – former CISO for the State of Michigan who has also served as a cybersecurity advisor in the White House and Department of Homeland Security – would fill the position for his presidential campaign. Learn more about why political campaigns need CISOs on our blog. India Has Banned TikTok. US May be Next TikTok – the popular social media application – has generated a lot of buzz throughout July. Why? According to a press release from India's Ministry of Electronics & IT, it's because the app and 58 other Chinese-owned apps are "hostile to national security" and "pose a threat to sovereignty". These concerns arose after a military stand-off between China and India in mid-June. Other countries are following suit. Both US and Australian authorities banned the use of the app for military personnel as more and more questions are being asked about the security of data and potential breaches of privacy. Most recently, the House of Representatives voted 336-71 in favor of the National Defense Authorization Act, which includes an amendment banning TikTok from all federal devices. Meanwhile, TikTok – which has recently hired an American CEO – has maintained that it doesn't share data from its app with the Chinese government. Walmart Accused of Mishandling Data in CCPA Lawsuit July 1 was the official enforcement date of the CCPA and, less than two weeks later, Walmart was sued in a class-action lawsuit. Why? A San Francisco man claims that his personal information – including his credit card – was sold on the dark web after the superstore was hacked. Under the CCPA, companies can be fined up to $750 "per consumer per incident" and, because the man alleges that hundreds more customers were affected, Walmart could be hit with a big fine. For now, Walmart says it wasn't hacked, maintaining that "Protecting our customers' data is a top priority and something we take very seriously. We dispute the plaintiff's allegations that the failure of our systems played any role in the public disclosure of his personally identifiable information (PII)". That's all for this month! Did we miss anything? Email [email protected] You can also keep up with us on social media and check our blog for more updates.
Data Exfiltration, DLP, Human Layer Security, Spear Phishing
Research Shows How To Prevent Mistakes Before They Become Breaches
By Maddie Rosenthal
Wednesday, July 22nd, 2020
We all make mistakes. But with over two-fifths of employees saying they've made mistakes at work that have had security repercussions, businesses need to find a way to stop mistakes from happening before they compromise cybersecurity. That's why we developed our report The Psychology of Human Error, with the help of Jeff Hancock, a professor at Stanford University and expert in social dynamics online. We wanted to understand why these mistakes are happening, rather than simply dismissing incidents of human error as people acting carelessly or labeling people the 'weakest link' when it comes to security. By doing so, we hope businesses can better understand how to protect their people, and the data they control.

Key findings:

43% of employees have made mistakes that have compromised cybersecurity
A third of workers (33%) rarely or never think about cybersecurity when at work
52% of employees make more mistakes when they're stressed, while 43% are more error-prone when tired
58% have sent an email to the wrong person at work, and 1 in 5 companies lost customers after an employee sent a misdirected email

Read on to learn why this matters. You can also register for our webinar on August 19 here. We'll be exploring key findings from the report with Jeff Hancock. You'll walk away with a better understanding of how hackers are manipulating employees and what you can do to stop them.

What mistakes are people making?

The majority of our survey respondents said they had sent an email to the wrong person, with nearly one-fifth of these misdirected emails ending up in the wrong external person's inbox. Far from causing just red-faced embarrassment, this simple mistake can have devastating consequences. Not only do companies face the wrath of data protection regulators for flouting the rules of regulations like GDPR, but our research also reveals that one in five companies lost customers as a result of a misdirected email, because the trust they once had with their clients was broken. What's more, one in 10 workers said they lost their job.

Another mistake was clicking on links in phishing emails, something a quarter of respondents (25%) said they had done at work. This figure was significantly higher in the Technology industry, however, with 47% of workers in this sector saying they'd fallen for phishing scams. It goes to show that even the most cybersecurity savvy people can make mistakes. Interestingly, men were twice as likely as women to fall for phishing scams. While researchers aren't 100% sure why gender plays a factor in phishing susceptibility, our report does show that demographics play a role in people's cybersecurity behaviors at work.

What's causing these mistakes to happen?

1. Younger employees are 5x more likely to make mistakes

Half (50%) of 18-30 year olds said they had made such mistakes with security repercussions for themselves or their organization. Just 10% of workers over 51 said the same. This disparity, our report suggests, is not because younger workers are more careless. Rather, it may be because younger workers are actually more aware that they have made a mistake and are also more willing to admit their errors.
For older generations, Professor Hancock explains, self-presentation and respect in the workplace are hugely important. They may be more reluctant to admit they've made a mistake because they feel ashamed due to preconceived notions about their generations and technology. Businesses, therefore, need to not only acknowledge how age affects cybersecurity behaviors but also find ways to remove the stigma around reporting mistakes in their organization.

2. 93% of employees are stressed and tired

Employees told us they make more mistakes at work when they are stressed (52%), tired (43%), distracted (41%), and working quickly (36%). This is concerning when you consider that an overwhelming 93% of employees surveyed said they were either tired or stressed at some point during the working week. This isn't helped by the fact that nearly two-thirds of employees feel chained to their desks, with 61% saying there is a culture of presenteeism in their organization that makes them work longer hours than they need to. The Covid-19 pandemic has put people under huge amounts of stress and through significant change. In light of the events of 2020, our findings call for businesses to empathize with people's positions and understand the impact stress and working cultures have on cybersecurity.
3. 57% of employees are being driven to distraction

47% of employees surveyed cited distraction as a top reason for falling for a phishing scam, while two-fifths said they sent an email to the wrong person because they were distracted. With over half of workers (57%) admitting they're more distracted when working from home, the sudden shift to remote-working could open businesses up to even more risks caused by human error. It's hardly surprising. We suddenly had to set up offices in the homes we share with our young children, pets, and housemates. There's a lot going on, and mistakes are likely to happen.
4. 41% thought phishing emails were from someone they trusted

Over two-fifths of people (43%) mistakenly clicked on phishing emails because they thought the request was legitimate, while 41% said the email appeared to have come from either a senior executive or a well-known brand. Over the past few months, we've seen hackers impersonating well-known brands and trusted authorities in their phishing scams, taking advantage of people's desire to seek guidance and information on the pandemic. Impersonating someone in a position of trust or authority is a common and effective tactic used by hackers in phishing campaigns. Why? Because they know how difficult or unlikely it is to ignore a request from someone you like, respect, or report to. Businesses need to protect their people from these phishing scams. Educate staff on the ways hackers could take advantage of their circumstances and invest in solutions that can detect the impersonations when your distracted and overworked employees can't.

But how can businesses prevent these mistakes from happening in the first place?

To successfully prevent mistakes from turning into serious security incidents, businesses have to take a more human approach. It's all too easy to place the blame for data breaches on people's mistakes. But businesses have to remember that not every employee is an expert in cybersecurity. In fact, a third of our survey respondents (33%) said they rarely or never think about cybersecurity when at work. They are focused on getting the jobs they were hired to do, done.

Training and policies help. However, combining this with machine intelligent security solutions – like Tessian – that automatically alert individuals to potential threats in real-time is a much more powerful tool in preventing mistakes before they turn into breaches. Alerting employees to the threat in-the-moment helps override impulsive and dangerous decision-making that could compromise cybersecurity. By using explainable machine learning, we arm employees with the information they need to apply conscious reasoning to their actions over email, making them think twice before doing something they might regret.
And with greater visibility into the behaviors of your riskiest and most at-risk employees, your teams can tailor security training and policies to influence and improve staff’s cybersecurity behaviors. Only by protecting people and preventing their mistakes can you ensure data and systems remain secure, and help your people do their best work. Read the full Psychology of Human Error report here.
Tessian Culture
Our First Growth Framework – How Did We Get Here?
By Jade Jarvis
Tuesday, July 21st, 2020
Tessian has just finished building our first-ever Growth Framework for our Engineering team. At the same time, we’ve also introduced Internal Levels to represent different stages of progression and identify key milestones as our Engineers develop and grow.  We see this framework as a guiding North Star for Tessians to trail blaze their own career. Tessian’s values ensure we achieve against the expectations outlined in the framework in the right way, and they are embedded throughout.  Why did we do it?   We’ve had feedback in the past about not having clear progression paths for our current team and – at the same time – it’s critical we understand what “good” looks like at each level for new team members. So, what problems is this new framework going to solve for us?  Our Engineers know what their career at Tessian could look like.  Introducing levels means we can celebrate promotion more formally.  It will support future hiring so we can find the best people to join us.  What does it look like?  There are levels which show the milestones of development at Tessian. As our engineers develop and grow, they progress through these levels.  How do they know what it takes to get to the next level? Well, that’s where our Growth Framework comes in! The framework defines competency clusters which are then broken down into further competencies that outline the  key behaviors expected at each level.  Let us introduce you to our Competency Clusters (and Competencies)
How did we do it?

Our Engineering and People team partnered on this and garnered insights not only from Tessians, but also by exploring best practice in our industry. We have a culture of collaboration, so it was important for us to hear directly from our Engineering team to learn more about their perspectives. We spoke to as many engineers as possible to find out what makes a great engineer and a great framework. We also reviewed successful frameworks in other tech companies to get a strong sense of what has and hasn't worked for others. Here's a step-by-step of our actions:

Information-gathering

We researched open sources and, by comparing and coding, we found that there are similar core competencies that appear in the majority of frameworks. We whittled it down to seven possible competencies: Craft, Impact, Execution, Leadership, Advocacy, Continuously Improves, and Communication.

Explore workshops

We ran a series of workshops with our Eng leaders to really dig into what's important for them. In the first workshop, we shared the seven possible competencies and, in breakout groups, we discussed the competencies. We asked ourselves two key questions:

For Tessian: How important is this in helping us achieve our business goals?
For our Tessians: How meaningful is this for individuals in their careers?

We then brought everyone back for a group discussion and shared our thoughts from the breakout groups. As expected, we found there was a lot of cross-over amongst the competencies and what they mean to us. From this first session, we had two main takeaways:

We needed to ensure that we created clear competency cluster definitions that describe exactly what that competency means to Tessian.
We could chop and merge some of the seven possible competencies to identify our core competencies.

In our second workshop, we shared our findings and introduced the rescoped and refined competencies. This was where the idea of Competency Clusters with Competencies within them was born. Our four clusters became: Craft, Impact, Delivery, and Communication. From this session, we gained a better understanding of our competencies, but had feedback that perhaps the names of the clusters didn't feel quite right. We wanted our clusters to be action statements, so:

Craft became "Master your Craft" – broken down into Technical Skills, Continuous Improvement and Security.
Impact became "Make an Impact" – broken down into Teamwork, Influence, Accountability and Customer-Centricity.
Delivery became "Get Stuff Done" – broken down into Delivery and Autonomy.
Communication became "Communicate" – broken down into Information and Feedback.

Building the behaviors

At this point, we were in a good place with our competency clusters, competencies, and definitions. It was time to build the specific behavioral indicators that sit under each competency and each level. We knew from our previous workshops that – while we were getting some really useful comments – it was sometimes quite difficult to capture them all verbally. So, we tried out a live commenting activity. It worked brilliantly and captured the diversity of thought amongst all of our engineering leaders. We shared the draft framework with our managers before the session. Then, during the session, we asked managers to spend the first 20 minutes making live comments on our Google sheet. Afterwards, and as a group, we went through each of the comments, discussed further, and made notes for action.

Iterations

But, we didn't stop there.
We had a few more manager workshops where we had cycles of gaining feedback > making amendments > sharing back. When we got to what we'd call a 'final draft', we asked the wider Engineering team for volunteers to join a focus group to get their feedback. This was an energizing session; it was the first time the wider team was able to see the framework, and overall the response was really positive. People really cared about understanding how this tool could be used to support their growth at Tessian. One nugget of feedback that came from this group was that our Competency Framework sounded very formal and not very inspiring, and so it became our Growth Framework. This really felt like it was more representative of what we want to use it for.

Defining our levels

While we were finalizing our Growth Framework, we were also identifying what levels and job titles would make sense for Tessian. Again, we looked at what our peers were doing (using progression.fyi), so that whatever we landed on spoke the same language as the rest of tech. This also ensures that it's on par with appropriate expectations in the market so it's easily comparable with other companies.

Testing, testing!

This is arguably the most important part. We had to see if the framework actually worked in practice. We had meetings with individual managers to work through what level each of their team members was currently delivering to. And, to ensure a fair and transparent process, we had a group calibration session to discuss the larger team. In an effort to ensure the same approach was being applied to every assessment, we asked managers to use a traffic light system to review their direct reports against each competency. The traffic light colors indicate:

"They are consistently demonstrating this behavior" (green)
"They demonstrate this behavior from time to time, but not all the time" (yellow)
"They are currently not demonstrating this behavior" (red)

Go Live

The final tweaks were done, wordsmithing complete, and design decisions made. Finally, our framework was published on our internal Wiki for everyone to view. All managers had discussions with their direct reports to understand where they think they currently sit against the framework, and we will finalize trial levels over the coming weeks. The word "trial" here is important. We'll explain more below.

What comes next?

Collaboration and shared accountability are critical to our engineering culture at Tessian. So, for the next few months, our team will have what we're calling "trial levels". This means that we won't confirm final levels until we've used the framework in practice for a couple of months, and our team has really had the opportunity to see how this works for them before providing feedback. It's a process! We're beyond excited to see how this framework will continue to support Tessian to create a world-class engineering organization that not only builds amazing products, but that enables engineers to thrive and grow in their careers. Watch this space as we share more news about how this has worked for us in practice and key insights gained! And, if you're interested in joining our team, see our open roles here.
DLP, Spear Phishing
Why Political Campaigns Need Chief Information Security Officers
Monday, July 20th, 2020
On July 10th, Joe Biden’s US presidential campaign announced it was hiring a Chief Information Security Officer (CISO) and a Chief Technology Officer (CTO). Biden’s campaign team told The Hill that these security professionals would help “mitigate cyber threats, bolster… voter protection efforts, and enhance the overall efficiency and security of the entire campaign.” This development confirms what cybersecurity experts have long understood — that, just like businesses, political campaigns require a CISO. We’ll tell you why. Are political campaigns likely targets of cybercrime? Rates of cybercrime — and the sophistication of cybercriminals — continue to increase across all sectors. Whether it’s phishing attacks, malware, ransomware, or brute force attacks, incidents are on the rise.  And, when you consider which industries are the most targeted (Healthcare, Financial Services, Manufacturing) It’s easy to understand why political campaigns are also targets of hackers and scammers: Political campaigns are a cornerstone of the democratic process They process the personal information of thousands of voters  They handle confidential and security-sensitive information These aren’t anecdotal reasons. Political campaigns have been targeted by cybercriminals before. For example, in 2016, Hillary Clinton’s campaign manager, John Podesta, received a spear phishing email disguised as a Google security alert. Podesta followed a link, entered his login credentials, and exposed over 50,000 emails to malicious actors. This is a great example of how human error can lead to data breaches and goes to show that anyone can make a mistake.  That’s why cybersecurity is so important. Learn how Tessian prevents spear phishing attacks.  How can a CISO help a political campaign? Hiring a CISO — and thus improving the cybersecurity of political campaigns — has three main benefits: Safeguarding the democratic process Protecting voter privacy Maintaining national security Let’s explore each of these in a bit more detail. You can also check out our CISO Spotlight Series to get a better idea of what role a CISO plays across different sectors.  Safeguarding the Democratic Process Whatever your political persuasion, it’s hard to ignore headlines that detail the role cybercriminals played in the 2016 US election, including: Cyberattacks occurred against politicians Electoral meddling undermined voters’ faith in the democratic process Better cybersecurity could have mitigated the impact of electoral cyberattacks A CISO ensures better coordination of a political campaign’s IT security program. This can involve: Mandating security software on all campaign devices  Setting up DMARC records for domains used in campaigning Assessing risk and responding to threats Increasing staff awareness of good cybersecurity practices Of course, these functions aren’t specific to political campaigns. A CISO’s job, whether at a big bank or a law firm, is to safeguard systems, data, and devices by implementing policies, procedures, and technology and to help build a positive security culture. The difference, though, is that while a CISO at your “average” organization helps prevent data breaches and other security incidents, the CISO of a political campaign does all of this while also helping maintain faith in the process among voters.  Keep reading to find out how. Protecting voter privacy Political campaigns must communicate directly with individual voters which means those working on the campaign have access to highly sensitive information. 
And, we’re not just talking about names and addresses. Even a person’s intention to vote is highly sensitive personal information.  While – yes – many people publicly proclaim their ideology and voting intention via social media, those people don’t expect their information to be mined by data-harvesting software, combined with other personal information, and shared with unauthorized third parties. They simply want to share their views with friends, family, and followers.
Like hacking, data mining operations can affect the outcome of elections. They also represent a gross invasion of individual privacy.  How valuable an asset is voter data? A few recent high-profile examples will give you an idea. (Click the links to learn more about each individual incident.) The UK pro-Brexit Vote Leave campaign’s involvement in the Cambridge Analytica scandal Rand Paul and Ted Cruz’s campaigns allegedly selling their voters’ contact information to the Trump campaign Rick Santorum’s campaign selling voters’ data to a “doomsday prepper” firm These examples prove that voter data can be used to raise funds or create a political advantage. But what are the consequences? To start, voter trust is lost which – as we’ve discussed – can impact the democratic process. Beyond that, there are also legal ramifications. Under state and federal privacy laws, selling personal information is a legally-regulated activity. Any allegation that a campaign has violated privacy law would be extremely damaging not just reputationally, but financially.  A CISO can help ensure that a political campaign is less likely to engage in risky behavior with voters’ personal information and assist the campaign to comply with privacy law.  But it’s not just personal information that political campaigns handle. Maintaining National Security Political campaigns also handle security-sensitive information which must be carefully safeguarded. Robert Deitz, former senior counselor to the CIA, told Washington Post that a Russian cyberattack on the Trump campaign could reveal information about Trump’s foreign investments and negotiating style. Having access to this data could help Russia understand “where it can get away with foreign adventurism.” A CISO has overall responsibility for information safeguarding within an organization. They understand:  What types of data exist about the candidate  How and where the information is processed, stored, and transferred Who can access the data All of this information helps CISOs implement data loss prevention (DLP) strategies in order to keep sensitive information out of the hands of bad actors.  Why does this matter?  Data privacy – and therefore cybersecurity – is essential for the modern world.  In fact, in business, a strong security posture fosters trust with customers and prospects and is therefore considered a competitive edge. Why? Because data is valuable currency. Customers and prospects expect the organizations they interact with to safeguard the information shared with them. Shouldn’t politicians foster trust with voters in the same way? 