
DLP

Read our latest articles, tips and industry-specific news around Data Loss Prevention (DLP). Learn about the implications of data loss on email.

Spear Phishing DLP Remote Working Data Exfiltration
How to Keep Your Data Safe in The Great Resignation
28 July 2021
The pandemic has changed people and society in ways we wouldn’t have thought imaginable just 24 months ago. Lockdown restrictions and remote working allowed many employees to reflect on what they want to do with their lives and the sort of companies they want to work for, as well as those they don’t. Consequently, in April 2021 four million US workers quit their jobs, and according to recent research by Microsoft, over 40% of employees are considering leaving their employer this year. It’s being called ‘#TheGreatResignation’, and it presents a whole pile of problems for CISOs and other security leaders. Here are some of the common problems you might face in keeping data secure when staff move on.

Staff burnout

Let’s face it, everyone’s a little frazzled round the edges right now. Our 2020 report, The Psychology of Human Error, revealed that a shocking 93% of US and UK employees feel tired and stressed at some point during their working week. Staff burnout was real before the pandemic, and it’s only got worse as the months have turned into years. Over half the employees we surveyed (52%) said they make more mistakes at work when they’re stressed. And we know that as some employees move on, others are left to pick up the slack, adding to their stress and further increasing the potential for human error. This isn’t just a cybersecurity issue, it’s a people issue, so get your COO and HR team involved and start exploring ways to improve company well-being.

Mentally, they’ve already left

Staff who are leaving will have ‘mentally uncoupled’ from your organization and its processes well before they actually make their exit. They’re distracted – perhaps even excited – about their new future and where they’re going. Our survey found that 47% of employees cited distraction as a top reason for falling for a phishing scam, while two-fifths said they sent an email to the wrong person because they were distracted. This is made worse by the next problem…

“Hi, it’s Mark from HR, we haven’t met…”

Changing jobs can bring staff into contact with people they might not have had much contact with before. In a big multinational, we doubt many staff can name every member of the payroll team – they might even be in another country! Our How to Hack a Human report found that an overwhelming 93% of workers update their job status on social media, while 36% share information about their job. If an employee has announced their imminent departure on social media, they can become a target for spear phishing emails from hackers impersonating HR or operations staff. These emails could contain seemingly innocuous requests for key card returns, contract documents, and even IT hardware. We’ve seen it before! Check out our Threat Catalogue to see real examples of phishing attacks targeting (and impersonating!) new starters.

Notice period exfiltration

Unless they’re leaving for a complete lifestyle change, like being a warden on a deserted Scottish island, many people tend to stay in the same sector or industry. This means there’s a high probability of staff going to one of your competitors. Our research reveals an increase in data exfiltration during an employee’s notice period. In fact, 45% of employees admit to “stealing” data before leaving or after being dismissed from a job. You can see the temptation – what better way to make a great impression on your first day than by bringing a juicy file of customer data, source code, or other highly valuable IP?
People will often extract these assets by emailing them to their personal accounts. This is a particular problem in sectors such as legal, financial services, and entertainment, where a client base and extensive networks are crucial.

New staff

So far all these problems have focused on leaving staff or those that remain, but another potential weak spot is the new hire who will replace them. They’ve yet to undertake security awareness training on your systems and processes. They may also have announced their new role on social media (which means they could fall victim to the same problem we explained above). It all comes back to one crucial point: 85% of data breaches are caused by human error.

How Tessian helps

Security leaders have a big job; they have to secure networks, endpoints, and platforms like Slack and Microsoft Teams. But email remains the #1 threat vector. So how do you lock down email and prevent data exfiltration and successful phishing attacks? By empowering your people to do their best work, without security getting in the way. We believe employees should be experts in their respective fields, not in cybersecurity. Tessian’s suite of products secures the human layer, so that staff can concentrate on their roles and be empowered to do their best work.

Tessian Defender: Automatically prevents spear phishing, account takeover, business email compromise, and other targeted email attacks.
Tessian Enforcer: Automatically prevents data exfiltration over email.
Tessian Guardian: Automatically prevents accidental data loss caused by misdirected emails and misattached files.
DLP Data Exfiltration
Insider Threats Examples: 17 Real Examples of Insider Threats
By Maddie Rosenthal
21 July 2021
Insider threats are a big problem for organizations across industries. Why? Because they’re so hard to detect. After all, insiders have legitimate access to systems and data, unlike the external bad actors many security policies and tools help defend against. It could be anyone, from a careless employee to a rogue business partner. That’s why we’ve put together this list of Insider Threat types and examples. By exploring different methods and motives, security, compliance, and IT leaders (and their employees) will be better equipped to spot Insider Threats before a data breach happens.
Types of Insider Threats

First things first, let’s define what exactly an Insider Threat is. Insider threats are people – whether employees, former employees, contractors, business partners, or vendors – with legitimate access to an organization’s networks and systems who deliberately exfiltrate data for personal gain or accidentally leak sensitive information. The key here is that there are two distinct types of Insider Threats:

The Malicious Insider: Malicious Insiders knowingly and intentionally steal data. For example, an employee or contractor may exfiltrate valuable information (like Intellectual Property (IP), Personally Identifiable Information (PII), or financial information) for some kind of financial incentive, a competitive edge, or simply because they’re holding a grudge after being let go or furloughed.

The Negligent Insider: Negligent Insiders are just your average employees who have made a mistake. For example, an employee could send an email containing sensitive information to the wrong person, email company data to personal accounts to do some work over the weekend, fall victim to a phishing or spear phishing attack, or lose their work device.

We cover these different types of Insider Threats in detail in this article: What is an Insider Threat? Insider Threat Definition, Examples, and Solutions.

17 Examples of Insider Threats
1. The employee who exfiltrated data after being fired or furloughed Since the outbreak of COVID-19, 81% of the global workforce have had their workplace fully or partially closed. And, with the economy grinding to a halt, employees across industries have been laid off or furloughed. This has caused widespread distress. When you combine this distress with the reduced visibility of IT and security teams while their teams work from home, you’re bound to see more incidents of Malicious Insiders. One such case involves a former employee of a medical device packaging company, named Dobbins, who was let go in early March 2020. By the end of March – and after he was given his final paycheck – Dobbins hacked into the company’s computer network, granted himself administrator access, and then edited and deleted nearly 120,000 records. This caused significant delays in the delivery of medical equipment to healthcare providers.
2. The employee who sold company data for financial gain In 2017, an employee at Bupa accessed customer information via an in-house customer relationship management system, copied the information, deleted it from the database, and then tried to sell it on the Dark Web. The breach affected 547,000 customers and, in 2018, after an investigation by the ICO, Bupa was fined £175,000.
3. The employee who stole trade secrets In July 2020, further details emerged of a long-running insider job at General Electric (GE) that saw an employee steal valuable proprietary data and trade secrets. The employee, named Jean Patrice Delia, gradually exfiltrated over 8,000 sensitive files from GE’s systems over eight years — intending to leverage his professional advantage to start a rival company. The FBI investigation into Delia’s scam revealed that he persuaded an IT administrator to grant him access to files and that he emailed commercially-sensitive calculations to a co-conspirator. Having pleaded guilty to the charges, Delia faces up to 87 months in jail. What can we learn from this extraordinary inside job? Ensure you have watertight access controls and that you can monitor employee email accounts for suspicious activity.
4. The employees who exposed 250 million customer records Here’s an example of a “negligent insider” threat. In December 2019, a researcher from Comparitech noticed that around 250 million Microsoft customer records were exposed on the open web. This vulnerability meant that the personal information of up to 250 million people—including email addresses, IP addresses, and location—was accessible to anyone with a web browser. This incident represents a potentially serious breach of privacy and data protection law and could have left Microsoft customers open to scams and phishing attacks—all because the relevant employees failed to secure the databases properly. Microsoft reportedly secured the information within 24 hours of being notified about the breach.
5. The nuclear scientists who hijacked a supercomputer to mine Bitcoin Russian secret services reported in 2018 that they had arrested employees of the country’s leading nuclear research lab on suspicion of using a powerful supercomputer for bitcoin mining. Authorities discovered that scientists had abused their access to some of Russia’s most powerful supercomputers by rigging up a secret bitcoin-mining data center. Bitcoin mining is extremely resource-intensive and some miners are always seeking new ways to outsource the expense onto other people’s infrastructure. This case is an example of how insiders can misuse company equipment.
6. The employee who fell for a phishing attack While we’ve seen a spike in phishing and spear phishing attacks since the outbreak of COVID-19, these aren’t new threats. One example involves an email that was sent to a senior staff member at Australian National University. The result? 700 Megabytes of data were stolen. This data was related to both staff and students and included details like names, addresses, phone numbers, dates of birth, emergency contact numbers, tax file numbers, payroll information, bank account details, and student academic records.
7. The work-from-home employees duped by a vishing scam Cybercriminals saw an opportunity when many of Twitter’s staff started working from home. One cybercrime group conducted one of the most high-profile hacks of 2020 — knocking 4% off Twitter’s share price in the process. In July 2020, after gathering information on key home-working employees, the hackers called them up and impersonated Twitter IT administrators. During these calls, they successfully persuaded some employees to disclose their account credentials. Using this information, the cybercriminals logged into Twitter’s admin tools, changed the passwords of around 130 high-profile accounts — including those belonging to Barack Obama, Joe Biden, and Kanye West — and used them to conduct a Bitcoin scam. This incident put “vishing” (voice phishing) on the map, and it reinforces what all cybersecurity leaders know — your company must apply the same level of cybersecurity protection to all its employees, whether they’re working on your premises or in their own homes. Want to learn more about vishing? We cover it in detail in this article: Smishing and Vishing: What You Need to Know About These Phishing Attacks.
8. The ex-employee who got two years for sabotaging data The case of San Jose resident Sudhish Kasaba Ramesh serves as a reminder that it’s not just your current employees that pose a potential internal threat—but your ex-employees, too. Ramesh received two years imprisonment in December 2020 after a court found that he had accessed Cisco’s systems without authorization, deploying malware that deleted over 16,000 user accounts and caused $2.4 million in damage. The incident emphasizes the importance of properly restricting access controls—and locking employees out of your systems as soon as they leave your organization.
9. The employee who took company data to a new employer for a competitive edge This incident involves two of the biggest tech players: Google and Uber. In 2015, a lead engineer at Waymo, Google’s self-driving car project, left the company to start his own self-driving truck venture, Otto. But, before departing, he exfiltrated several trade secrets including diagrams and drawings related to simulations, radar technology, source code snippets, PDFs marked as confidential, and videos of test drives.  How? By downloading 14,000 files onto his laptop directly from Google servers. Otto was acquired by Uber after a few months, at which point Google executives discovered the breach. In the end, Waymo was awarded $245 million worth of Uber shares and, in March, the employee pleaded guilty.
10. The employee who stole a hard drive containing HR data Coca-Cola was forced to issue data breach notification letters to around 8,000 employees after a worker stole a hard drive containing human resources records. Why did this employee steal so much data about his colleagues? Coca-Cola didn’t say. But we do know that the employee had recently left his job—so he may have seen an opportunity to sell or misuse the data once outside of the company. Remember—network and cybersecurity are crucial, but you need to consider whether insiders have physical access to data or assets, too.
11. The employees leaking customer data  Toward the end of October 2020, an unknown number of Amazon customers received an email stating that their email address had been “disclosed by an Amazon employee to a third-party.” Amazon said that the “employee” had been fired — but the story changed slightly later on, according to a statement shared by Motherboard which referred to multiple “individuals” and “bad actors.” So how many customers were affected? What motivated the leakers? We still don’t know. But this isn’t the first time that the tech giant’s own employees have leaked customer data. Amazon sent out a near-identical batch of emails in January 2020 and November 2018. If there’s evidence of systemic insider exfiltration of customer data at Amazon, this must be tackled via internal security controls.
12. The employee offered a bribe by a Russian national In September 2020, a Nevada court charged Russian national Egor Igorevich Kriuchkov with conspiracy to intentionally cause damage to a protected computer. The court alleges that Kriuchkov attempted to recruit an employee of Tesla’s Nevada Gigafactory. Kriuchkov and his associates reportedly offered a Tesla employee $1 million to “transmit malware” onto Tesla’s network via email or USB drive to “exfiltrate data from the network.” The Kriuchkov conspiracy was disrupted before any damage could be done. But it wasn’t the first time Tesla had faced an insider threat. In June 2018, CEO Elon Musk emailed all Tesla staff to report that one of the company’s employees had “conducted quite extensive and damaging sabotage to [Tesla’s] operations.” With state-sponsored cybercrime syndicates wreaking havoc worldwide, we could soon see further attempts to infiltrate companies. That’s why it’s crucial to run background checks on new hires and ensure an adequate level of internal security.
13. The ex-employee who offered 100 GB of company data for $4,000 Police in Ukraine reported in 2018 that a man had attempted to sell 100 GB of customer data to his ex-employer’s competitors—for the bargain price of $4,000. The man allegedly used his insider knowledge of the company’s security vulnerabilities to gain unauthorized access to the data. This scenario presents another challenge to consider when preventing insider threats—you can revoke ex-employees’ access privileges, but they might still be able to leverage their knowledge of your systems’ vulnerabilities and weak points.
14. The employee who accidentally sent an email to the wrong person Misdirected emails happen more often than most people think. In fact, Tessian platform data shows that at least 800 misdirected emails are sent every year in organizations with 1,000 employees. But, what are the implications? It depends on what data has been exposed. In one incident in mid-2019, the private details of 24 NHS employees were exposed after someone in the HR department accidentally sent an email to a team of senior executives. The exposed details included mental health information and surgery information. While the employee apologized, the exposure of PII like this can lead to medical identity theft and even physical harm to the patients. We outline even more consequences of misdirected emails in this article.
15. The employee who accidentally misconfigured access privileges NHS coronavirus contact-tracing app details were leaked after documents hosted in Google Drive were left open for anyone with a link to view. Worse still, links to the documents were included in several others published by the NHS. These documents – marked “SENSITIVE” and “OFFICIAL” – contained information about the app’s future development roadmap and revealed that officials within the NHS and Department of Health and Social Care were worried about the app’s reliability, and that it could be open to abuse leading to public panic.
16. The security officer who was fined $316,000 for stealing data (and more!) In 2017, a California court found ex-security officer Yovan Garcia guilty of hacking his ex-employer’s systems to steal its data, destroy its servers, deface its website, and copy its proprietary software to set up a rival company. The cybercrime spree was reportedly sparked after Garcia was fired for manipulating his timesheet. Garcia received a fine of over $316,000 for his various offenses. The sheer amount of damage caused by this one disgruntled employee is pretty shocking. Garcia stole employee files, client data, and confidential business information; destroyed backups; and even uploaded embarrassing photos of his one-time boss to the company website.
17. The employee who sent company data to a personal email account We mentioned earlier that employees oftentimes email company data to themselves to work over the weekend. But, in this incident, an employee at Boeing shared a spreadsheet with his wife in hopes that she could help solve formatting issues. While this sounds harmless, it wasn’t. The personal information of 36,000 employees was exposed, including employee ID data, places of birth, and accounting department codes.
How common are Insider Threats?

Incidents involving Insider Threats are on the rise, with a marked 47% increase over the last two years. This isn’t trivial, especially considering the global average cost of an Insider Threat is $11.45 million, up from $8.76 million in 2018. Who’s more culpable, Negligent Insiders or Malicious Insiders?

Negligent Insiders (like those who send emails to the wrong person) are responsible for 62% of all incidents
Negligent Insiders who have their credentials stolen (via a phishing attack or physical theft) are responsible for 25% of all incidents
Malicious Insiders are responsible for 14% of all incidents

It’s worth noting, though, that credential theft is the most detrimental to an organization’s bottom line, costing an average of $2.79 million.

Which industries suffer the most?

The “what, who, and why” behind incidents involving Insider Threats varies greatly by industry. For example, customer data is most likely to be compromised by an Insider in the Healthcare industry, while money is the most common target in the Finance and Insurance sector. But who exfiltrated the data is just as important as what data was exfiltrated. The sectors most likely to experience incidents perpetrated by trusted business partners are:

Finance and Insurance
Federal Government
Entertainment
Information Technology
Healthcare
State and Local Government

Overall, though, when it comes to employees misusing their access privileges, the Healthcare and Manufacturing industries experience the most incidents. On the other hand, the Public Sector suffers the most from lost or stolen assets and also ranks in the top three for miscellaneous errors (for example, misdirected emails) alongside Healthcare and Finance. You can find even more stats about Insider Threats (including a downloadable infographic) here.

The bottom line: Insider Threats are a growing problem. We have a solution.
Human Layer Security DLP Data Exfiltration
What is an Insider Threat? Insider Threat Definition, Examples, and Solutions
By Tessian
29 June 2021
Organizations often focus their security efforts on threats from outside. But increasingly, it’s people inside the organization who cause data breaches. There was a 47% increase in Insider Threat incidents between 2018 and 2020, including via malicious data exfiltration and accidental data loss. And the comprehensive Verizon 2021 Data Breach Investigations Report suggests that Insiders are directly responsible for around 22% of security incidents. So, what is an insider threat and how can organizations protect themselves from their own people?
Importantly, there are two distinct types of insider threats, and understanding different motives and methods of exfiltration is key for detection and prevention.

Types of Insider Threats

The Malicious Insider
Malicious Insiders knowingly and intentionally steal data, money, or other assets. For example, an employee or contractor might exfiltrate intellectual property, personal information, or financial information for personal gain. What’s in it for the insider? It depends.

Financial Incentives

Data is extremely valuable. Malicious Insiders can sell customers’ information on the dark web. There’s a huge market for personal information—research suggests you can steal a person’s identity for around $1,010. Malicious Insiders can steal leads, intellectual property, or other confidential information for their own financial gain—causing serious damage to an organization in the process.

Competitive Edge

Malicious Insiders can steal company data to get a competitive edge in a new venture. This is more common than you might think. For example, a General Electric employee was imprisoned in 2020 for stealing thousands of proprietary files for use in a rival business. Unsurprisingly, stealing data to gain a competitive edge is most common in competitive industries, like finance and entertainment.

The Negligent (or Unaware) Insider
Negligent Insiders are just “average” employees doing their jobs. Unfortunately, “to err is human”… which means people can—and do—make mistakes.

Sending a misdirected email

Sending an email to the wrong person is one of the most common ways a Negligent Insider can lose control of company data. Indeed, the UK’s Information Commissioner’s Office reports misdirected emails as the number one cause of data breaches. And according to Tessian platform data, organizations with over 1,000 employees send around 800 misdirected emails every year. We’ve put together 11 Examples of Data Breaches Caused By Misdirected Emails if you want to see how bad this type of Insider Threat can get.

Phishing attacks

Last year, 66% of organizations worldwide experienced spear phishing attacks. Like all social engineering attacks, phishing involves tricking a person into clicking a link, downloading malware, or taking some other action that compromises a company’s security. A successful phishing attack requires an employee to fall for it. And practically any of your employees could fall for a sophisticated spear phishing attack. Want to know more about this type of Negligent Insider threat? Read Who Are the Most Likely Targets of Spear Phishing Attacks?

Physical data loss

Whether it’s a phone, laptop, or a paper file, losing devices or hard-copy data can constitute a data breach. Indeed, in June 2021, a member of the public found top-secret British military documents in a “soggy heap” behind a bus stop. Looking for more examples of Insider Threats (both malicious and negligent)? Check out this article: 17 Real-World Examples of Insider Threats

How can I protect against Insider Threats?

As we’ve seen, Insider Threats are common. So why is it so hard to prevent them? Detecting and preventing Insider Threats is such a challenge because it requires full visibility over your data—including who has access to it. This means fully mapping your company’s data, finding all entry and exit points, and identifying all the employees, contractors, and third parties who have access to it. From there, it comes down to training, monitoring, and security.

Training

While security awareness training isn’t the only measure you need to take to improve security, it is important. Security awareness training can help you work towards legal compliance, build threat awareness, and foster a security culture among your employees. Looking for resources to help train your employees? Check out this blog with a shareable PDF.

Monitoring

Insider Threats can be difficult to detect because insiders normally leverage their legitimate access to data. That’s why it’s important to monitor data for signs of potentially suspicious activity. Telltale signs of an insider threat include:

Large data or file transfers
Multiple failed logins (or other unusual login activity)
Incorrect software access requests
Machine takeovers
Abuse by service accounts

(A minimal sketch of this kind of rule-based monitoring appears at the end of this article.)

Email Security

The vast majority of data exfiltration attempts, accidental data loss incidents, and phishing attacks take place via email. Therefore, the best action you can take to prevent insider threats is to implement an email security solution. Tessian is a machine learning-powered email security solution that uses anomaly detection, behavioral analysis, and natural language processing to detect data loss.
Tessian Enforcer detects data exfiltration attempts and non-compliant emails
Tessian Guardian detects misdirected emails and misattached files
Tessian Defender detects and prevents spear phishing attacks

How does Tessian detect and prevent Insider Threats?

Tessian’s machine learning algorithms analyze your company’s email data. The software learns every employee’s normal communication patterns and maps their trusted email relationships — both inside and outside your organization. Tessian inspects the content and metadata of inbound emails for any signals suggestive of phishing—like suspicious payloads, geophysical locations, IP addresses, and email clients—or data exfiltration—like anomalous attachments, content, or sending patterns. Once it detects a threat, Tessian alerts employees and administrators with clear, concise, contextual warnings that reinforce security awareness training.
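To make the “telltale signs” above concrete, here is a minimal, rule-based sketch in Python. The event fields, freemail domain list, and size threshold are illustrative assumptions; this is not how Tessian’s machine learning works, just the kind of simple starting-point check a security team might script against its own email logs.

```python
# Minimal sketch of rule-based monitoring for two telltale signs of data exfiltration:
# large outbound transfers and sends to personal ("freemail") addresses.
# Field names, domains, and thresholds are assumptions for illustration only.

from dataclasses import dataclass

FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}  # assumed list
LARGE_TRANSFER_BYTES = 25 * 1024 * 1024                       # assumed threshold: 25 MB

@dataclass
class OutboundEmail:
    sender: str
    recipient: str
    attachment_bytes: int

def flag_email(event: OutboundEmail) -> list[str]:
    """Return a list of reasons this outbound email looks suspicious (empty if none)."""
    reasons = []
    recipient_domain = event.recipient.rsplit("@", 1)[-1].lower()
    if recipient_domain in FREEMAIL_DOMAINS:
        reasons.append("recipient is a personal/freemail address")
    if event.attachment_bytes > LARGE_TRANSFER_BYTES:
        reasons.append("unusually large attachment")
    return reasons

# Example: an employee emailing a 40 MB file to their personal account triggers both rules.
print(flag_email(OutboundEmail("jane@corp.example", "jane.doe@gmail.com", 40 * 1024 * 1024)))
```

Rules like these are easy to write but blunt; they miss unusual behavior that falls under the thresholds and flag plenty of legitimate activity, which is the gap behavioral analysis aims to close.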
Human Layer Security DLP Data Exfiltration
Insider Threat Statistics You Should Know: Updated 2021
By Maddie Rosenthal
01 June 2021
Between 2018 and 2020, there was a 47% increase in the frequency of incidents involving Insider Threats. This includes malicious data exfiltration and accidental data loss. The latest research, from the Verizon 2021 Data Breach Investigations Report, suggests that Insiders are responsible for around 22% of security incidents. Why does this matter? Because these incidents cost organizations millions, lead to breaches that expose sensitive customer, client, and company data, and are notoriously hard to prevent. In this article, we’ll explore:

How often these incidents are happening
What motivates Insider Threats to act
The financial impact Insider Threats have on larger organizations
The effectiveness of different preventive measures

You can also download this infographic with the key statistics from this article. If you know what an Insider Threat is, click here to jump down the page. If not, you can check out some of these articles for a bit more background:

What is an Insider Threat? Insider Threat Definition, Examples, and Solutions
Insider Threat Indicators: 11 Ways to Recognize an Insider Threat
Insider Threats: Types and Real-World Examples
How frequently are Insider Threat incidents happening?

As we’ve said, incidents involving Insider Threats have increased by 47% between 2018 and 2020. A 2021 report from Cybersecurity Insiders also suggests that 57% of organizations feel insider incidents have become more frequent over the past 12 months. But the frequency of incidents varies industry by industry. The Verizon 2021 Data Breach Investigations Report offers a comprehensive overview of different incidents in different industries, with a focus on patterns, actions, and assets. Verizon found that:

The Healthcare and Finance industries experience the most incidents involving employees misusing their access privileges
The Healthcare and Finance industries also suffer the most from lost or stolen assets
The Finance and Public Administration sectors experience the most “miscellaneous errors” (including misdirected emails)—with Healthcare in a close third place
There are also several different types of Insider Threats, and the “who and why” behind these incidents can vary. According to one study:

Negligent Insiders are the most common and account for 62% of all incidents
Negligent Insiders who have their credentials stolen account for 25% of all incidents
Malicious Insiders are responsible for 14% of all incidents

Looking at Tessian’s own platform data, Negligent Insiders may be responsible for even more incidents than most expect. On average, 800 emails are sent to the wrong person every year in companies with 1,000 employees. This is 1.6x more than IT leaders estimate.

Malicious Insiders are likely responsible for more incidents than expected, too. Between March and July 2020, 43% of reported security incidents were caused by malicious insiders. We should expect this number to increase. Around 98% of organizations say they feel some degree of vulnerability to Insider Threats. Over three-quarters of IT leaders (78%) think their organization is at greater risk of Insider Threats if their company adopts a permanent hybrid working structure. Which, by the way, the majority of employees would prefer.

What motivates Insider Threats to act?

When it comes to the “why”, Insiders – specifically Malicious Insiders – are often motivated by money, a competitive edge, or revenge. But, according to one report, there is a range of reasons malicious Insiders act. Some just do it for fun.

But, we don’t always know exactly “why”. For example, Tessian’s own survey data shows that 45% of employees download, save, send, or otherwise exfiltrate work-related documents before leaving a job or after being dismissed. While we may be able to infer that they’re taking spreadsheets, contracts, or other documents to impress a future or potential employer, we can’t know for certain. Note: Incidents like this happen most frequently in competitive industries like Financial Services and Business, Consulting, & Management. This supports our theory.
How much do incidents involving Insider Threats cost?

The cost of Insider Threat incidents varies based on the type of incident, with incidents involving stolen credentials causing the most financial damage. But, across the board, the cost has been steadily rising. Likewise, there are regional differences in the cost of Insider Threats, with incidents in North America costing the most – almost twice as much as those in Asia-Pacific. Overall, the average global cost has increased 31% over the last two years, from $8.76 million in 2018 to $11.45 million in 2020, and the largest chunk goes towards containment, remediation, incident response, and investigation. But, what about prevention?

How effective are preventative measures?

As the frequency of Insider Threat incidents continues to increase, so does investment in cybersecurity. But, what solutions are available, and which solutions do security, IT, and compliance leaders trust to detect and prevent data loss within their organizations? A 2021 report from Cybersecurity Insiders suggests that a shortfall in security monitoring might be contributing to the prevalence of Insider Threat incidents. Asked whether they monitor user behavior to detect anomalous activity:

Just 28% of firms responded that they used automation to monitor user behavior
14% of firms don’t monitor user behavior at all
28% of firms said they only monitor access logs
17% of firms only monitor specific user activity under specific circumstances
10% of firms only monitor user behavior after an incident has occurred

And, according to Tessian’s research report, The State of Data Loss Prevention, most rely on security awareness training, followed by company policies/procedures, and machine learning/intelligent automation.
But, incidents actually happen more frequently in organizations that offer training the most often and, while the majority of employees say they understand company policies and procedures, comprehension doesn’t help prevent malicious behavior. That’s why many organizations rely on rule-based solutions. But, those often fall short. Not only are they admin-intensive for security teams, but they’re blunt instruments that often prevent employees from doing their jobs while also failing to prevent data loss from Insiders. So, how can you detect incidents involving Insiders in order to prevent data loss and eliminate the cost of remediation? Machine learning.

How does Tessian detect and prevent Insider Threats?

Tessian turns an organization’s email data into its best defense against inbound and outbound email security threats. Powered by machine learning, our Human Layer Security technology understands human behavior and relationships, enabling it to automatically detect and prevent anomalous and dangerous activity.

Tessian Enforcer detects and prevents data exfiltration attempts
Tessian Guardian detects and prevents misdirected emails
Tessian Defender detects and prevents spear phishing attacks

Importantly, Tessian’s technology automatically updates its understanding of human behavior and evolving relationships through continuous analysis and learning of the organization’s email network. Oh, and it works silently in the background, meaning employees can do their jobs without security getting in the way. Interested in learning more about how Tessian can help prevent Insider Threats in your organization? You can read some of our customer stories here or book a demo.
Human Layer Security DLP Compliance
At a Glance: Data Loss Prevention in Healthcare
By Maddie Rosenthal
30 May 2021
Data Loss Prevention (DLP) is a priority for organizations across all sectors, but especially for those in Healthcare. Why? To start, they process and hold incredible amounts of personal and medical data, and they must comply with strict data privacy laws like HIPAA and HITECH. Healthcare also has the highest costs associated with data breaches – 65% higher than the average across all industries – and has for nine years running. But, in order to remain compliant and, more importantly, to prevent data loss incidents and breaches, security leaders must have visibility over data movement. The question is: Do they? According to our latest research report, Data Loss Prevention in Healthcare, not yet.

How frequently are data loss incidents happening in Healthcare?

Data loss incidents are happening up to 38x more frequently than IT leaders currently estimate. Tessian platform data shows that in organizations with 1,000 employees, 800 emails are sent to the wrong person every year. Likewise, in organizations of the same size, 27,500 emails containing company data are sent to personal accounts. These numbers are significantly higher than IT leaders expected.
But, what about in Healthcare specifically? We found that:

Over half (51%) of employees working in Healthcare admit to sending company data to personal email accounts
46% of employees working in Healthcare say they’ve sent an email to the wrong person
35% of employees working in Healthcare have downloaded, saved, or sent work-related documents to personal accounts before leaving or after being dismissed from a job

This only covers outbound email security. Hospitals are also frequently targeted by ransomware and phishing attacks, and Healthcare is the industry most likely to experience an incident involving employee misuse of access privileges. Worse still, new remote-working structures are only making DLP more challenging.
Healthcare professionals feel less secure outside of the office

While workforces around the world have suddenly transitioned from office to home over the last several months, this isn’t a fleeting change. In fact, bolstered by digital solutions and streamlined virtual services, we can expect to see the global healthcare market grow exponentially over the next several years. While this is great news in terms of general welfare, we can’t ignore the impact this might have on information security. Half of employees working in Healthcare feel less secure outside of their normal office environment, and 42% say they’re less likely to follow safe data practices when working remotely. Why? Most employees surveyed said it was because IT isn’t watching, they’re distracted, and they’re not working on their normal devices. But, we can’t blame employees. After all, they’re just trying to do their jobs, and cybersecurity isn’t top-of-mind, especially during a global pandemic. Perhaps that’s why over half (57%) say they’ll find a workaround if security software or policies make it difficult or prevent them from doing their job. That’s why it’s so important that security leaders make the most secure path the path of least resistance.

How can security leaders in Healthcare help protect employees and data?

There are thousands of products on the market designed to detect and prevent data incidents and breaches, and organizations are spending more than ever (up from $1.4 million to $13 million) to protect their systems and data. But something’s wrong. We’ve seen a 67% increase in the volume of breaches over the last five years and, as we’ve explored already, security leaders still don’t have visibility over risky and at-risk employees. So, what solutions are security, IT, and compliance leaders relying on? According to our research, most are relying on security training. And, it makes sense. Security awareness training confronts the crux of data loss by educating employees on best practice, company policies, and industry regulation. But, how effective is training, and can it influence and actually change human behavior for the long term? Not on its own. Despite having training more frequently than most industries, Healthcare remains among the most likely to suffer a breach. The fact is, people break the rules and make mistakes. To err is human! That’s why security leaders have to bolster training and reinforce policies with tech that understands human behavior.

How does Tessian prevent data loss on email?

Tessian uses machine learning to address the problem of accidental or deliberate data loss. How? By analyzing email data to understand how people work and communicate. This enables Tessian Guardian to look at email communications and determine in real time if a particular email looks like it’s about to be sent to the wrong person. Tessian Enforcer, meanwhile, can identify when sensitive data is about to be sent to an unsafe place outside an organization’s email network. Finally, Tessian Defender detects and prevents inbound attacks like spear phishing, account takeover (ATO), and CEO Fraud.
Human Layer Security DLP Compliance Data Exfiltration
The State of Data Loss Prevention in the Financial Services Sector
By Maddie Rosenthal
10 May 2021
In our latest research report, we took a deep dive into Data Loss Prevention in Financial Services and revealed that data loss incidents are happening up to 38x more frequently than IT leaders currently estimate. And, while data loss is a big problem across all industries, it’s especially problematic in those that handle highly sensitive data. One of those industries is Financial Services. Before we dive into how frequently data loss incidents are happening and why, let’s define what exactly a data loss incident is in the context of this report. We focused on outbound data loss on email. This could be either intentional data exfiltration by a disgruntled or financially motivated employee, or it could be accidental data loss. Here’s what we found out.

The majority of employees have accidentally or intentionally exfiltrated data

Tessian platform data shows that in organizations with 1,000 employees, 800 emails are sent to the wrong person every year. This is 1.6x more than IT leaders estimated. Likewise, in organizations of the same size, 27,500 emails containing company data are sent to personal accounts. We call these unauthorized emails, and IT leaders estimated just 720 are sent annually. That’s a big difference.
But, what about in this particular sector? Over half (57%) of Financial Services professionals across the US and the UK admit to sending at least one misdirected email, and 67% say they’ve sent unauthorized emails. But, when you isolate the US employees, the percentage almost doubles: 91% of Financial Services professionals in the US say they’ve sent company data to their personal accounts.

And, because Financial Services is highly competitive, professionals working in this industry are among the most likely to download, save, or send company data to personal accounts before leaving or after being dismissed from a job, with 47% of employees saying they’ve done it.

To really understand the consequences of incidents like this, you have to consider the type of data this industry handles and the compliance standards and data privacy regulations they’re obligated to satisfy. Every day, professionals working in Financial Services send and receive:

Bank Account Numbers
Loan Account Numbers
Credit/Debit Card Numbers
Social Security Numbers
M&A Data

In order to protect that data, they must comply with regional and industry-specific laws, including:

GLBA
COPPA
FACTA
FDIC 370
HIPAA
CCPA
GDPR

So, what happens if there’s a breach? The implications are far-reaching, ranging from lost customer trust and a damaged reputation to revenue loss and regulatory fines. For more information on these and other compliance standards, visit our Compliance Hub.

Remote working is making Data Loss Prevention (DLP) more challenging

The sudden transition from office to home has presented a number of challenges to both employees and security, IT, and compliance leaders. To start, 65% of professionals working in Financial Services say they feel less secure working from home than they do in the office. It makes sense. People aren’t working from their normal workstations and likely don’t have the same equipment.

A further 56% say they’re less likely to follow safe data practices when working remotely. Why? The most common reason was that IT isn’t watching, followed by being distracted. Most of us can relate.
When working remotely – especially from home – people have other responsibilities and distractions, like childcare and roommates. And, the truth is, the average employee is just trying to do their job, not be a champion of cybersecurity. That’s why it’s so important that security and IT teams equip employees with the solutions they need to work securely, wherever they are.

Current solutions aren’t empowering employees to work securely

Training, policies, and rule-based technology all have a place in security strategies. But, based on our research, these solutions alone aren’t working. In fact, 64% of professionals working in Financial Services say they’ll find a workaround to security software or policies if they impede productivity. This is 10% higher than the average across all industries.
How does Tessian prevent data loss on email?

Tessian uses machine learning to address the problem of accidental or deliberate data loss by applying human understanding to email behavior. Our machine learning models analyze email data to understand how people work and communicate. They have been trained on more than two billion emails, and they continue to adapt and learn from your own data as human relationships evolve over time. This enables Tessian Guardian to look at email communications and determine in real time if particular emails look like they’re about to be sent to the wrong person. Tessian Enforcer, meanwhile, can identify when sensitive data is about to be sent to an unsafe place outside an organization’s email network. Finally, Tessian Defender detects and prevents inbound attacks like spear phishing, account takeover (ATO), and CEO Fraud. Enforcer and Guardian do all of this silently in the background. That means workflows aren’t disrupted and there’s no impact on productivity. Employees can do what they were hired to do without security getting in the way. Tessian bolsters training, complements rule-based solutions, and helps reinforce the policies security teams have worked so hard to create and embed in their organizations. That’s why so many Financial Services firms have adopted Tessian’s technology, including:

Man Group
Evercore
BDO
Affirm
Armstrong Watson
JTC
DC Advisory
Many more
DLP
What is Data Loss Prevention (DLP)? Complete Overview of DLP
06 May 2021
Let’s get straight to it and answer your questions.
How does DLP work?

Put simply, DLP software monitors different entry and exit points (examples below) to “look” for data and keep it safe and sound inside the organization’s network. A properly configured DLP solution can detect when sensitive or important data is leaving a company’s possession, alert the user and, ultimately, stop data loss. A DLP solution has three main jobs. DLP software:

Monitors and analyzes data while at rest, in motion, and in use.
Detects suspicious activity or anomalous network traffic.
Blocks or flags suspicious activity, preventing data loss.

(A minimal sketch of this monitor, detect, act flow appears at the end of this section.)

Those entry and exit points we mentioned earlier include:

Computers
Mobile devices
Email clients
Servers
Mail gateways

Different types of DLP solutions are required to safeguard data in these environments.

What are the different types of DLP?

DLP software can monitor and safeguard data in three states:

Data in motion (or “in transit”): Data that is being sent or received by your network
Data in use: Data that a user is currently interacting with
Data at rest: Data stored in a file or database that is not moving or in use

There are three main types of DLP software designed to protect data in these different states.

Network data loss prevention

Network DLP software monitors network traffic passing through entry and exit points to protect data in motion. Network DLP scans all data passing through a company’s network. If it’s working properly, the software will detect sensitive data exiting the network and flag or block it, while allowing other data to leave the network unimpeded where appropriate. Network administrators can customize network DLP software to block certain types of data from leaving the network by default or—by contrast—whitelist specific file types or URLs.

Endpoint data loss prevention

Endpoint DLP monitors data on devices and workstations, such as computers and mobile devices, to protect data in use. The software can monitor the device and detect a range of potentially malicious actions, including:

Printing a document
Creating or renaming a file
Copying data to removable media (e.g. a USB drive)

Such actions might be completely harmless—or they might be an attempt to exfiltrate confidential data. Effective endpoint DLP software (but not all endpoint DLP software) can distinguish between suspicious and non-suspicious activity.

Email data loss prevention

Email is the primary threat vector for most businesses, and the threat vector most security leaders are concerned about locking down with their DLP strategy. Email represents a potential route straight through your company’s defenses for anyone wishing to deliver a malicious payload. And it’s also a way for insiders to send data out of your company’s network—whether by accident or on purpose. Email DLP can therefore protect against some of the most common and serious causes of data loss, including:

Email-based cyberattacks, such as phishing
Malicious exfiltration of data by employees (also called insider threats)
Accidental data loss (for example, sending an email to the wrong person or attaching the wrong file)

Further reading: ⚡ What is Email DLP? Overview of DLP on Email
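To make that monitor, detect, act flow concrete, here is a minimal sketch in Python applied to outbound email. The regex patterns, domain names, and block/flag policy are assumptions for illustration only, not a description of how any particular DLP product classifies data.

```python
# A minimal sketch of the monitor -> detect -> act loop applied to outbound email.
# Patterns and policy decisions below are illustrative assumptions.

import re

SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # e.g. 123-45-6789
    "confidential_marking": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def detect(message_body: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in the message body."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(message_body)]

def act(message_body: str, recipient_domain: str, internal_domain: str = "example-corp.com") -> str:
    """Decide whether to block, flag, or allow an outbound message."""
    findings = detect(message_body)
    if not findings:
        return "allow"
    if recipient_domain.lower() != internal_domain:
        return f"block: {', '.join(findings)} going to an external domain"
    return f"flag for review: {', '.join(findings)} sent internally"

print(act("Patient SSN is 123-45-6789", "gmail.com"))  # blocked: sensitive data, external recipient
print(act("Lunch on Friday?", "gmail.com"))            # allowed: nothing sensitive detected
```

Real network, endpoint, and email DLP products layer far richer detection (file fingerprinting, classification labels, behavioral signals) on top of this basic loop, but the monitor, detect, act structure is the same.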
Does my company need a data loss prevention solution?

Almost certainly. DLP is a top priority for security leaders across industries, and DLP software is a vital part of any organization’s security program. Broadly, there are two reasons to implement an effective data loss prevention solution:

Protecting your customers’ and employees’ personal information. Your business is responsible for all the personal information it controls. Cyberattacks and employee errors can put this data at risk.
Protecting your company’s non-personal data. DLP can thwart attempts to steal intellectual property, client lists, or financial data.

Want to learn more about how and why other organizations are leveraging DLP? We explore employee behavior, the frequency of data loss incidents, and the best (and worst) solutions in this report: The State of Data Loss Prevention. Now let’s look at the practical ways DLP software can benefit your business.

What are the benefits of DLP?

There are four main benefits of data loss prevention, which we’ll unpack below:

Protecting against external threats (like spear phishing attacks)
Protecting against internal threats (like insider threats)
Protecting against accidental data loss (like accidentally sending an email to the wrong person)
Compliance with laws and regulations

Protecting against external threats

External security threats are often the main driver of a company’s cybersecurity program—although, as we’ll see below, they’re far from the only type of security threat that businesses are concerned about. Here are some of the most significant external threats that can result in data loss:

Phishing: Phishing is the most common online crime—and according to the latest FBI data, phishing rates doubled in 2020. Around 96% of phishing attacks take place via email.
Spear phishing: A phishing attack targeting a specific individual. Spear phishing attacks are more effective than “bulk” phishing attacks and can target high-value individuals (whaling) or use advanced impersonation techniques (CEO fraud).
Ransomware: A malicious actor encrypts company data and forces the company to pay a ransom to obtain the key.

Cybercriminals can use various methods to undertake cyberattacks, including malicious email attachments or links and exploit kits. DLP can prevent these external threats by preventing malicious actors from exfiltrating data from your network, storage, or endpoints.

Protecting against internal threats

Malicious employees can use email to exfiltrate company data. This type of insider threat is more common than you might think. Verizon research shows how employees can misuse their company account privileges for malicious purposes, such as stealing or providing unauthorized access to company data. This problem is most significant in the healthcare and manufacturing industries. Why would an employee misuse their account privileges in this way? In some cases, they’re working with outsiders. In others, they’re stealing data for their own purposes. For more information, read our 11 Real Examples of Insider Threats. The difficulty is that your employees often need to send files and data outside of your company for perfectly legitimate purposes. Thankfully, next-generation DLP can use machine learning to distinguish and block suspicious activity—while permitting data to leave your network where necessary.

Preventing accidental data loss

Human error is a widespread cause of data loss, but security teams sometimes overlook it.
In fact, misdirected emails—where a person sends an email to the wrong recipient—are the most common cause of data breaches, according to the UK’s data protection regulator. Tessian platform data bears this out. In organizations with 1,000 or more employees, people send an average of 800 misdirected emails every year. Misdirected emails take many forms. But any misdirected email can result in data loss—whether through accidentally clicking “reply all”, attaching the wrong file, accepting an erroneous autocomplete, or simply spelling someone’s email address wrong. Compliance with laws and regulations Governments are increasingly concerned about data privacy and security. Data protection and cybersecurity regulations are increasingly demanding—and failing to comply with them can incur severe penalties. Implementing a DLP solution is an excellent way to demonstrate your organization’s compliance efforts with any of the following laws and standards: General Data Protection Regulation (GDPR): Any company doing business in the EU, or working with EU clients or customers, must comply with the GDPR. The regulation requires all organizations to implement security measures to protect the personal data in their control. California Consumer Privacy Act (CCPA): The CCPA is one example of the many state privacy laws emerging across the U.S. The law requires businesses to implement reasonable security measures to guard against the loss or exfiltration of personal information. Sector-specific regulations: Tightly regulated sectors are subject to privacy and security standards, such as the Health Insurance Portability and Accountability Act (HIPAA), which covers healthcare providers and their business associates, and the Gramm-Leach-Bliley Act (GLBA), which covers financial institutions. Cybersecurity frameworks: Compliance with cybersecurity frameworks, such as the NIST Framework, CIS Controls, or ISO 27000 Series, is an important way to demonstrate high standards of data security in your organization. Implementing a DLP solution is one step towards certification with one of these frameworks. Bear in mind that, in certain industries, individual customers and clients will have their own regulatory requirements, too. Further reading: ⚡ The State of Data Loss Prevention in Healthcare ⚡ The State of Data Loss Prevention in Legal ⚡ The State of Data Loss Prevention in Financial Services ⚡ CCPA FAQs ⚡ GDPR FAQs Do DLP solutions work? We’ve looked at the huge benefits that DLP software can bring your organization. But does DLP actually work? Some do, but not all. Effective DLP software works seamlessly in the background, allowing employees to work uninterrupted, but stepping in to prevent data loss whenever necessary. Likewise, it’s easy for SOC teams to manage. Unfortunately, some DLP solutions still rely on legacy features that either fail to prevent data loss effectively, create too much noise for security teams, or are too cumbersome to let employees work unimpeded. Let’s take a look at some DLP methods and weigh up the pros and cons of each approach. Blacklisting domains IT administrators can block certain domains associated with data loss or malicious activity, for example, “freemail” domains such as gmail.com or yahoo.com. Blacklisting entire domains, particularly popular (if problematic) domains, is not ideal. There may be good reasons to communicate with someone using a freemail address—for example, if they are a customer, contractor, or a potential client.
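To make that concrete, here is a minimal sketch of what a domain blacklist check looks like in practice. The domain list and example addresses are assumptions for illustration only:

```python
# A deliberately blunt sketch of domain blacklisting: every recipient on a
# listed domain is blocked, whether or not the message is legitimate.
# The blacklist below is an assumption for illustration only.
BLACKLISTED_DOMAINS = {"gmail.com", "yahoo.com"}

def check_recipients(recipients: list[str]) -> list[str]:
    """Return the recipients a blacklist rule would block."""
    blocked = []
    for address in recipients:
        domain = address.rsplit("@", 1)[-1].lower()
        if domain in BLACKLISTED_DOMAINS:
            blocked.append(address)
    return blocked

# The weakness in one line: a legitimate client on a freemail address is
# blocked in exactly the same way as an exfiltration attempt.
print(check_recipients(["legit.client@gmail.com", "colleague@yourcompany.com"]))
# -> ['legit.client@gmail.com']
```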
Tagging sensitive data Some DLP software allows users to tag certain types of sensitive data. For example, you may wish to block activity involving any file containing a 16-digit number (which might be a credit card number). But this rigid approach doesn’t account for the dynamic nature of sensitive data. In certain contexts, a 16-digit number might not be associated with a credit card. Or an employee may be using credit card data for legitimate purposes. Implementing rules Rule-based DLP uses “if-then” statements to block types of activities, such as “If an employee uploads a file of 10MB or larger, then block the upload and alert IT.” The problem here is that, like the other “data-centric” solutions identified above, rule-based DLP often blocks legitimate activity and allows malicious activity to occur unimpeded. (The short sketch at the end of this section illustrates why.) Machine learning Machine learning DLP software like Tessian’s Human Layer Security platform is a “human-centric” approach to data loss prevention. Here’s how it works: machine learning technology learns how people, teams, and customers communicate and understands the human context behind every interaction with data. By analyzing the evolving patterns of human interactions, machine learning DLP constantly reclassifies email addresses according to the relationship between a business and customers, suppliers, and other third parties. Further reading: ⚡ Learn how Tessian Guardian prevents accidental data loss ⚡ Learn how Tessian Enforcer prevents insider threats ⚡ Learn how Tessian Defender prevents inbound email attacks Was this article helpful? Subscribe for our weekly blog digest to get more insights into DLP, spear phishing, and other cybersecurity-related topics.
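Here is the promised sketch of the rule-based and tagging approaches above: a 10MB if-then rule plus a 16-digit check. The thresholds and patterns are assumptions, not a recommended configuration, and the example shows how such rules both over- and under-trigger:

```python
# A minimal sketch of rule-based, "data-centric" DLP: an if-then size rule plus
# a 16-digit tag. Thresholds and patterns are illustrative assumptions.
import re

MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # "If a file is 10MB or larger, block it"
SIXTEEN_DIGITS = re.compile(r"\b\d{16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: true for real card numbers, false for most random digits."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def evaluate(upload_size: int, text: str) -> str:
    if upload_size >= MAX_UPLOAD_BYTES:
        return "block: file too large"
    for match in SIXTEEN_DIGITS.findall(text):
        # A bare 16-digit rule would block every match; even with a Luhn check,
        # legitimate uses of real card data are still blocked.
        if luhn_valid(match):
            return "block: possible credit card number"
    return "allow"

print(evaluate(2_000_000, "Order ref 1234567890123456"))    # 16 digits, but not a card: allowed
print(evaluate(2_000_000, "Card number 4111111111111111"))  # valid Luhn, blocked even if legitimate
```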
DLP
Unauthorized Emails: The Risks of Sending Data to Your Personal Email Accounts
27 April 2021
Whether it’s done to work from home, to print something, or to get a second opinion from a spouse, most of us have sent “work stuff” to our personal email accounts. And, while we might think it’s harmless…it’s not. At Tessian, we call these emails “unauthorized emails”.
In this article, we’ll explore the reasons why employees might send emails to personal accounts, why sending these emails can be problematic, and how security leaders can solve the problem. Why would an employee send company data to personal accounts? It’s easier than following security policies Most of the time, employees send company data to their personal email addresses because they’re trying to get their job done and – well – it’s easier than the alternative. Easier than accessing files through the corporate VPN, easier than digging out the randomly generated password to their work email for use at home, easier than printing off everything they need and taking it home with them. They send an email, go home, and the documents are ready and waiting. Most of us can relate. 54% of employees say they’ll find a workaround if security policies or software make it difficult for them to do their job.
Unfortunately, there can be more nefarious reasons for sending company data to personal email accounts. They’re maliciously trying to exfiltrate data  45% of employees say they’ve taken data with them before leaving or after being dismissed from a job.  Can you guess what the most common way of exfiltrating data is? Email. Looking for more information about insider threats? Check out these resources: What is an Insider Threat? Real-World Examples of Insider Threats Insider Threat Statistics  Whatever the reason, employees send a lot more unauthorized emails than security leaders currently estimate. How many? At least 27,500 a year in organizations with 1,000 employees.
What consequences are associated with sending company data to personal accounts? Most organizations have policies in place explicitly saying that employees can’t email company data to personal email accounts. That’s not because every single email to a personal account results in a data loss incident or breach. It’s because when it does result in a data loss incident or a breach, the consequences can be far-reaching. Consequences include: Breach of contracts or non-disclosure agreements Loss of IP and proprietary research Breach of data protection regulations Heavy fines imposed by regulators and contractual penalties from clients (the GDPR, in particular, greatly increases potential fines for all manner of data breaches) Lost customer trust, damaged reputation, and revenue loss Check out this real-world example: In early 2017, an airline employee sent a spreadsheet containing approximately 36,000 employee records home so his wife could help with a formatting problem. Based on data from the Ponemon Institute, this single spreadsheet may have cost the company as much as $5.7m. How can security leaders solve the problem? It’s important security leaders take a holistic approach to data loss prevention (DLP). We suggest you… 1. Educate your workforce Make sure your employees know how to follow data security best practices and understand how to secure the data they work with, especially confidential data. Top tip: Host refresher courses if necessary. 2. Ease of access Try as much as possible to ensure that your employees don’t feel the need to send work to their personal emails. Implement secure file storage platforms they can access from home (SharePoint, GSuite, etc) or a corporate VPN so they can securely access the company network from anywhere. You need to strike that happy middle ground between “easy to use but insecure” and “airtight but really disruptive”. 3. Be proactive, not reactive Choose email security platforms that offer complete protection against unauthorized email before it becomes a problem, instead of being left scrambling for a solution in the aftermath. Find a solution that tracks and logs attempts to send data to a personal email address, and use the metrics to open a conversation with employees about data protection.
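As a rough illustration of that last point, here is a minimal sketch of one heuristic for flagging likely sends to a personal account: compare the sender's name with the recipient's local part at a freemail domain. The domain list and similarity threshold are assumptions, and a real solution would rely on far richer signals:

```python
# A minimal sketch of the "track and log" idea above: flag outbound emails that
# look like they're going to the sender's own personal account. The freemail
# list and the name-similarity heuristic are illustrative assumptions.
from difflib import SequenceMatcher

FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def looks_like_personal_account(sender: str, recipient: str, threshold: float = 0.6) -> bool:
    sender_name = sender.split("@", 1)[0].lower()
    local, _, domain = recipient.lower().partition("@")
    if domain not in FREEMAIL_DOMAINS:
        return False
    # Similar local parts (jane.doe vs. janedoe88) suggest a self-send.
    similarity = SequenceMatcher(None, sender_name, local).ratio()
    return similarity >= threshold

print(looks_like_personal_account("jane.doe@acme-corp.com", "janedoe88@gmail.com"))   # True
print(looks_like_personal_account("jane.doe@acme-corp.com", "supplier@partner.com"))  # False
```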
DLP
7 Tips for SOC Teams Using Splunk
By Maddie Rosenthal
22 April 2021
For most security leaders and SOC teams, “visibility” is the holy grail. It makes sense… Why does visibility matter? Clear visibility of threats is the first step in effectively reducing risk. It’s what makes analyzing, correlating, reporting, and proactively preventing security events possible. It’s what allows security teams to find the needle in the haystack. That’s why Splunk is so valuable, and why it’s essential that security solutions integrate easily with SIEM (pronounced “sim”) systems.
Looking for some tips and tricks to help you and your team get the most out of your data in Splunk?  We talked to Imraan Dawood, Information Security Officer at Investec, and Martin Nortje, Information Security Engineer at Investec, about how they use Splunk to level up their security, without over-burdening their SOC teams. We’ve captured the highlights below. 7 tips for SOC teams using Splunk 1. Don’t create too many dashboards For those who have the tool, Splunk is the front door for all analytics for SOC and data security teams. It’s the first thing they log into when they sit down at their desk, and the one place they can see security events pulled from across their security stack. According to Imraan and Martin, it enables SOC teams to pinpoint potential problems in a matter of minutes or seconds versus hours or days. But, too much information or “noise” can be overwhelming and counterproductive.   So, instead of tracking everything, be choosy.  Imraan and Martin suggest that SOC teams work backward. First, consider what would be most valuable for analysts to see. Then, consider what you need to filter out in order for them to see that at a glance. (See point 2….) 2. Create a “hit list” of words and terms to help you zero in on the events that could have the biggest business impact While – yes – SOC teams will want to have visibility of all security events, it’s important to take the time to plan the structure and layout of the information in your dashboards to ensure you have an accurate picture of the security landscape and to help quickly identify high-impact threats – for example, misdirected emails. Not all misdirected emails are created equal, though.  Let’s say Donna, a sales executive, accidentally emailed the wrong Brad to ask “Can you still make the call at 2:00?”. Now, let’s say Elaine, a Finance Director, accidentally emailed the wrong Todd financial projections for Q2 2021. Which requires more immediate attention? Which could have the biggest business impact? The latter. Imraan and Martin suggest that SOC teams create a “hit list” of words and terms – for example, those related to financial data, PII, or R&D – to get a better view of what really matters. Here’s how you do that:  Make a list of the keywords and terms that you would like to report on Perform a search for those terms within Splunk to verify that the search term is only yielding the results that you’d like to alert on. (Doing this will ensure that you aren’t generating unnecessary noise for your SOC teams and will reduce notification fatigue.) Configure an alert to search and identify those specific keywords and terms within the platform.   Looking for more details? Check out this article from Splunk: Save your search as an alert 3. Remember that you can’t automate everything Splunk is great because it automatically integrates data from endpoints, applications, servers, etc. It makes life much easier for data analysts and the rest of the security team. But you can’t automate everything. For example, what happens after Elaine, the Finance Director, accidentally sends that email with financial projections to the wrong Todd? Several teams will have to be involved, from HR, to Customer Success, to Legal. It’s difficult – if not impossible – to automate those processes and workflows completely.  To put it simply, follow-up will still be manual. Top tip from Imraan and Martin: Automate your case management instead.  4. 
Consider the “why” and the “how” just as much as the “what” We all know that employees can make mistakes. Whether it’s cc’ing someone instead of bcc’ing someone, logging onto an unsecured network, or re-using a password. But, some employees aren’t simply acting negligently. They’re acting maliciously. And, it’s essential SOC teams can differentiate between the two. The question is: How? Imraan and Martin suggest relying on historical data. After all, it takes multiple insights to understand what’s business as usual vs. something more malicious. For example, if you’ve had an incident of a “bad leaver” in the past, use that data to compare and “match” the same behavior in real-time. What does a “bad leaver” look like? Are they sending 1 email to a personal account a day over the course of 2 months? Or are they sending 15-20 emails a day for a week? Are they including attachments or not? 5. Be thoughtful in what data you include in reports for specific teams As we’ve said, Splunk makes reporting easy. But, to effectively communicate risks (and wins!) and actually influence change, you have to be thoughtful in what data you include in reports for specific teams. Your Risk Committee and your CEO will care about different things. A few things you should consider when preparing reports: How much do they know about cybersecurity? What’s most relevant to their day-to-day? What metrics and KPIs are they held accountable for? What’s the organization’s risk tolerance? 6. Lean on the vendors in your security stack for queries Most vendors understand the importance of capturing security events in a SIEM and will have advice on best practices and use cases that they’ve seen work well for other customers. Are you a Tessian customer? If you didn’t already know, we’ve created dozens of articles and guides for customers to make sure they get the most out of our products via Splunk. Just log into the Help Center or shoot your Customer Success Manager a message. 7. Use the insights! Last but certainly not least, Imraan and Martin made it clear that viewing the data in Splunk is just step one. Step two is actually implementing processes that help reduce security incidents and improve the organization’s security posture. For example, if you saw a massive spike in the number of employees who were printing sensitive documents or sending attachments to personal devices immediately after the move to remote working, you might want to consider reminding employees of existing policies or – better yet – creating new policies that enable them to do their jobs without breaking the rules. The key is to combine data-centric and human-centric approaches to really effect change. Bonus: Tessian Human Layer Risk Hub makes it even easier to know what to do next. Based on insights into user behavior, Tessian intelligently and automatically identifies actions teams can take within the platform (for example, custom protections for certain user groups) to reinforce policies, improve security awareness, and change behavior to help drive down risk. It also suggests additional processes and controls outside of Tessian to exercise better control over specific risks.
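To make tip 2 more concrete, here is a small sketch that turns a "hit list" into a single keyword search you can test in Splunk and then save as an alert, per the Splunk article linked above. The index and sourcetype names are placeholders, not real values from any environment:

```python
# A small sketch of tip 2: turn a "hit list" of high-impact terms into a single
# keyword search that can be tested in Splunk and then saved as an alert.
# The index and sourcetype below are placeholders; swap in your own.
HIT_LIST = ["payroll", "financial projections", "source code", "passport", "salary"]

def build_hit_list_search(index: str, sourcetype: str, terms: list[str]) -> str:
    """Join the hit list into one OR'd keyword clause for a Splunk search."""
    quoted = " OR ".join(f'"{term}"' for term in terms)
    return f'index={index} sourcetype="{sourcetype}" ({quoted})'

query = build_hit_list_search("email_events", "email:outbound", HIT_LIST)
print(query)
# Run the search manually first to confirm it only returns the events you want
# to alert on (so it doesn't create noise for the SOC), then save it as an alert.
```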
Learn more about Tessian’s integrations Tessian’s Human Layer Security (HLS) platform has vast integration capabilities to help security teams achieve increased visibility and extended protection. Learn more here. Or, if you’re looking for more tips, subscribe to our newsletter below.
DLP Data Exfiltration
What is Data Exfiltration? Tips for Preventing Data Exfiltration
22 April 2021
Data is valuable currency. Don’t believe us? Data brokering is a $200 billion industry…and this doesn’t even include the data that’s sold on the dark web.  This data could include anything from email addresses to financial projections, and the consequences of this data being leaked can be far-reaching. Data can be leaked in a number of ways, but when it’s stolen, we call it data exfiltration. You may also hear it referred to as data theft, data exportation, data extrusion, and data exfil.
This article will explore what data exfiltration is, how it works, and how you can avoid the fines, losses, and reputational damage that can result from it. Types of data exfiltration Data exfiltration can involve the theft of many types of information, including: Usernames, passwords, and other credentials Confidential company data, such as intellectual property or business strategy documents Personal data about your customers, clients, or employees Keys used to decrypt encrypted information Financial data, such as credit card numbers or bank account details Software or proprietary algorithms To understand how data exfiltration works, let’s consider a few different ways it can be exfiltrated. ✉ Email According to IT leaders, email is the number one threat vector. It makes sense. Over 124 billion business emails are sent and received every day and employees spend 40% of their time on email, sharing memos, spreadsheets, invoices, and other sensitive information and unstructured data with people both in and outside of their organization. Needless to say, it’s a treasure trove of information, which is why it’s so often used in data exfiltration attempts. But how? Insider threats can email data to their own personal accounts or third parties External bad actors targeting employees with phishing, spear phishing, or ransomware attacks. Note: 96% of phishing attacks start via email. ⚡ To learn more about insider threats, check out this article: 11 Real Examples of Insider Threats ⚡ For more information about phishing, click here: What is Spear Phishing? Targeted Phishing Attacks Explained 💻 Remote access Gaining remote access to a server, device, or cloud storage platform is another data exfiltration technique. An attacker can gain remote access to a company’s data assets via several methods, including: Hacking to exploit access vulnerabilities Using a “brute force” attack to determine the password Installing malware, whether via phishing or another method Using stolen credentials, whether obtained via a phishing attack or purchased on the dark web According to 2020 Verizon data, over 80% of “hacking” data exfiltration incidents involve brute force techniques or compromised user credentials. That’s why keeping passwords strong and safe is essential. Remote data exfiltration might occur without a company ever noticing. Consider the now infamous 2020 SolarWinds hack: the attackers installed malware on thousands of organizations’ devices, which silently exfiltrated data for months before being detected. 💾 Physical access As well as using remote-access techniques, such as phishing and malware, attackers can simply upload sensitive data onto a laptop, USB drive, or another portable storage device, and walk it out of a company’s premises. Physically stealing data from a business requires physical access to a server or device. That’s why this method of exfiltration is commonly associated with current or former employees. And it happens more frequently than you might think. One report shows that: 15% of all insiders exfiltrate data via USBs and 8% of external bad actors do the same 11% of all insiders exfiltrate data via laptops/tablets and 13% of external bad actors do the same Here’s an example: in 2020, a Russian national tried to persuade a Tesla employee to use a USB drive to exfiltrate insider data from the company’s Nevada premises. ⚡ We’ve rounded up a dozen examples of data exfiltration here: 12 Examples of Data Exfiltration.
How common is data exfiltration? So how significant a problem is data exfiltration, and why should your company take steps to prevent it? It’s hard to say how often data is successfully exfiltrated from a company’s equipment or network. But we know that the cybercrime methods used to carry out data exfiltration are certainly on the increase. For example, phishing was the leading cause of complaints to the FBI’s Internet Crime Complaint Center (IC3) in 2020. The FBI’s data suggests that phishing incidents more than doubled compared to the previous year. The FBI also reported that the number of recorded personal data breaches increased from around 38,000 to over 45,000 in 2020. Verizon’s 2020 data suggests that companies with more than 1,000 employees were more likely to experience data exfiltration attempts—but that attacks against smaller companies were much more likely to succeed. Verizon also noted that “the time required to exfiltrate data has been getting smaller,” but “the time required for an organization to notice that they have been breached is not keeping pace.” In other words, cybercriminals are getting quicker and harder to detect. Consequences of data exfiltration We’ve seen how data exfiltration, and cybercrime more generally, is becoming more common. But even if a company experiences one data exfiltration attack, the consequences can be devastating. There’s a lot at stake when it comes to the data in your company’s control. Here are some stats from IBM about the cost of a data breach: The average data breach costs $3.6 million The cost is highest for U.S. companies, at $8.6 million Healthcare is the hardest-hit sector, with companies facing an average loss of $7.1 million What are the causes of these phenomenal costs? Here are three factors: Containment: Hiring cybersecurity and identity protection companies to contain a data breach is an expensive business—not to mention the thousands of hours that can be lost trying to determine the cause of a breach. Lawsuits: Many companies face enormous lawsuits for losing customer data. Trends suggest a continuing increase in data-breach class action cases through 2021. Penalties: Laws such as the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) enable regulators to impose significant fines for personal data breaches.
How to prevent data exfiltration Understanding the form, causes, and consequences of data exfiltration is important. But what’s the best way to prevent data exfiltration? 🎓 Staff training Business leaders know the importance of helping their employees understand information security. Staff training can help employees spot some of the less sophisticated phishing attacks and learn the protocol for reporting a data breach. However, while staff training is important, it’s not sufficient to prevent data exfiltration. Remember these words from the U.K.’s National Cyber Security Centre (NCSC): “No training package (of any type) can teach users to spot every phish. Spotting phishing emails is hard.” 🚫 Blocking or blacklisting To prevent data exfiltration attempts, some organizations block or blacklist certain domains or activities. This approach involves blocking certain email providers (like Gmail), domains, or software (like Dropbox) that are associated with data loss or cyberattacks. However, this blunt approach impedes employee productivity. Blacklisting fails to account for the dynamic nature of modern work, where employees need to work with many different stakeholders via a broad variety of mediums. 💬 Labeling and tagging sensitive data Another data loss prevention (DLP) strategy is to label and tag sensitive data. When DLP software notices tagged data moving outside of your company’s network, this activity can be flagged or prevented. However, this approach relies entirely on employees tagging data correctly. Given how much data organizations handle, the manual process of tagging isn’t viable—employees may label data incorrectly or not label sensitive data at all. 🔒 Email data loss prevention (DLP) Email is a crucial communication method for almost every business. But, as we’ve seen, it’s also a key way for fraudsters and criminals to gain access to your company’s valuable data. According to Tessian platform data, employees send nearly 400 emails a month. In an organization with 1,000 employees, that’s 400,000 possible data breaches each month. That’s why security-focused organizations seek to lock down this critical vulnerability by investing in email-specific DLP software. ⚡ Want to learn more about email DLP? We cover everything you need to know here: What is Email DLP? Complete Overview of DLP on Email. How does Tessian prevent data exfiltration? Tessian uses stateful machine learning to prevent data exfiltration on email by turning an organization’s own data into its best defense against inbound and outbound email security threats. Our Human Layer Security platform understands human behavior and relationships, enabling it to automatically detect and prevent anomalous and dangerous activity like data exfiltration attempts and targeted phishing attacks. To learn more about how Tessian detects and prevents data exfiltration attempts, check out our customer stories or talk to one of our experts today.
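As a rough illustration of what "understanding behavior" means in practice, here is a toy baseline-versus-spike check on sends to personal accounts. It is a simplified heuristic for illustration only, not how Tessian or any specific product actually models behavior:

```python
# A toy illustration of behavior-based detection: compare each user's recent
# rate of emails to personal/freemail accounts against their own historical
# baseline and flag large spikes. A simplified heuristic, not any vendor's model.
from statistics import mean, pstdev

def is_anomalous(history_daily_counts: list[int], today_count: int, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it sits far above the user's own baseline."""
    baseline = mean(history_daily_counts)
    spread = pstdev(history_daily_counts) or 1.0  # avoid dividing by zero
    z_score = (today_count - baseline) / spread
    return z_score >= z_threshold

# A user who normally sends 0-1 emails a day to personal accounts, then 18 today:
print(is_anomalous([0, 1, 0, 0, 1, 0, 1, 0, 0, 1], 18))  # True
# The same count from a user who routinely sends that many is not flagged:
print(is_anomalous([15, 17, 16, 18, 15, 17, 16, 18, 17, 16], 18))  # False
```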
Human Layer Security DLP Data Exfiltration
11 Examples of Data Breaches Caused By Misdirected Emails
17 March 2021
While phishing, ransomware, and brute force attacks tend to make headlines, misdirected emails (emails sent to the wrong person) are actually a much bigger problem. In fact, in organizations with 1,000 employees, at least 800 emails are sent to the wrong person every year. That’s two a day. You can find more insights in The Psychology of Human Error and The State of Data Loss Prevention 2020.  Are you surprised? Most people are. That’s why we’ve rounded up this list of 11 real-world (recent) examples of data breaches caused by misdirected emails. And, if you skip down to the bottom, you’ll see how you can prevent misdirected emails (and breaches!) in your organization.  If you’re looking for a bit more background, check out these two articles: What is a Misdirected Email? Consequences of Sending an Email to the Wrong Person 11 examples of data breaches caused by misdirected emails  1. University support service mass emails sensitive student information University and college wellbeing services deal with sensitive personal information, including details of the health, beliefs, and disabilities of students and their families.  Most privacy laws impose stricter obligations on organizations handling such sensitive personal information—and there are harsher penalties for losing control of such data. So imagine how awful the Wellbeing Adviser at the University of Liverpool must have felt when they emailed an entire school’s worth of undergraduates with details about a student’s recent wellbeing appointment. The email revealed that the student had visited the Adviser earlier that day, that he had been experiencing ongoing personal difficulties, and that the Adviser had advised the student to attend therapy. A follow-up email urged all the recipients to delete the message “immediately” and appeared to blame the student for providing the wrong email address. One recipient of the email reportedly said: “How much harder are people going to find it actually going to get help when something so personal could wind up in the inbox of a few hundred people?” 2. Trump White House emails Ukraine ‘talking points’ to Democrats Remember in 2019, when then-President Donald Trump faced accusations of pressuring Ukraine into investigating corruption allegations against now-President Joe Biden? Once this story hit the press, the White House wrote an email—intended for Trump’s political allies—setting out some “talking points” to be used when answering questions about the incident (including blaming the “Deep State media”). Unfortunately for the White House, they sent the email directly to political opponents in the Democratic Party. White House staff then attempted to “recall” the email. If you’ve ever tried recalling an email, you’ll notice that it doesn’t normally work.  Recalling an email only works if the recipient is on the same exchange server as you—and only if they haven’t read the email. Looking for information on this? Check out this article: You Sent an Email to the Wrong Person. Now What? Unsurprisingly, this was not the case for the Democrats who received the White House email, who subsequently leaked it on Twitter.  I would like to thank @WhiteHouse for sending me their talking points on how best to spin the disastrous Trump/Zelensky call in Trump’s favor. However, I will not be using their spin and will instead stick with the truth. But thanks though. — US Rep Brendan Boyle (@RepBrendanBoyle) September 25, 2019 3. 
Australia’s Department of Foreign Affairs and Trade leaked 1,000 citizens’ email addresses On September 30, 2020, Australia’s Department of Foreign Affairs and Trade (DFAT) announced that the personal details of over 1,000 citizens were exposed after an employee failed to use BCC. So, who were the citizens? Australians who had been stuck in other countries because inbound flights had been limited (even rationed) since the outbreak of COVID-19. The plan was to increase entry quotas and start an emergency loans scheme for those in dire need. Those who had their email addresses exposed were among the potential recipients of the loan. Immediately after the email was sent, employees at DFAT tried to recall the email, and even requested that recipients delete the email from their IT systems and “refrain from any further forwarding of the email to protect the privacy of the individuals concerned.” 4. Serco exposes contact tracers’ data in email error In May 2020, an employee at Serco, a business services and outsourcing company, accidentally cc’d – instead of bcc’d – almost 300 email addresses. Harmless, right? Unfortunately not. The email addresses – which are considered personal data – belonged to newly recruited COVID-19 contact tracers. While a Serco spokesperson has apologized and announced that they would review and update their processes, the incident nonetheless put confidentiality at risk and could leave the firm under investigation by the ICO. 5. Sonos accidentally exposes the email addresses of hundreds of customers in email blunder In January 2020, 450+ email addresses were exposed after they were (similar to the example above) cc’d rather than bcc’d. Here’s what happened: A Sonos employee was replying to customers’ complaints. Instead of putting all the email addresses in BCC, they were CC’d, meaning that every customer who received the email could see the personal email addresses of everyone else on the list. The incident was reported to the ICO and is subject to potential fines.
6. Gender identity clinic leaks patient email addresses In September 2019, a gender identity clinic in London exposed the details of close to 2,000 people on its email list after an employee cc’d recipients instead of bcc’ing them. Two separate emails were sent, with about 900 people cc’d on each. While email addresses on their own are considered personal information, it’s important to bear in mind the nature of the clinic. As one patient pointed out, “It could out someone, especially as this place treats people who are transgender.” The incident was reported to the ICO, which is currently assessing the information provided. But, a similar incident may offer a glimpse of what’s to come. In 2016, the email addresses of 800 patients who attended HIV clinics were leaked because they were – again – cc’d instead of bcc’d. An NHS Trust was fined £180,000. Bear in mind, this fine was issued before the introduction of GDPR. 7. University mistakenly emails 430 acceptance letters, blames “human error” In January 2019, the University of South Florida St. Petersburg sent nearly 700 acceptance emails to applicants. The problem? Only 250 of those students had actually been accepted. The other 400+ hadn’t. While this isn’t considered a breach (because no personal data was exposed) it does go to show that fat-fingering an email can have a number of consequences. In this case, the university’s reputation was damaged, hundreds of students were left confused and disappointed, and the employees responsible for the mistake likely suffered red-faced embarrassment on top of other, more formal ramifications. The investigation and remediation of the incident also will have taken up plenty of time and resources. 8. Union watchdog accidentally leaked secret emails from confidential whistleblower In January 2019, an official at Australia’s Registered Organisations Commission (ROC) accidentally leaked confidential information, including the identity of a whistleblower. How? The employee entered an incorrect character when sending an email. The email was then delivered to someone with the same last name – but a different first initial – as the intended recipient. The next day, the ROC notified the whistleblower whose identity was compromised and disclosed the mistake to the Office of the Australian Information Commissioner as a potential privacy breach. 9. Major Health System Accidentally Shares Patient Information Due to Third-Party Software for the Second Time This Year In May 2018, Dignity Health – a major health system headquartered in San Francisco that operates 39 hospitals and 400 care centers across the West Coast – reported a breach that affected 55,947 patients to the U.S. Department of Health and Human Services. So, how did it happen? Dignity says the problem originated from a sorting error in an email list that had been formatted by one of its vendors. The error resulted in Dignity sending emails to the wrong patients, with the wrong names. Because Dignity is a health system, these emails also often contained the patient’s doctor’s name. That means PII and protected health information (PHI) were exposed. 10. Inquiry reveals the identity of child sexual abuse victims This 2017 email blunder earned an organization a £200,000 ($278,552) fine from the ICO. The penalty would have been even higher if the GDPR had been in force at the time. When you look at the detail of this incident, it’s easy to see why the ICO wanted to impose a more severe fine.
The Independent Inquiry into Child Sexual Abuse (IICSA) sent a Bcc email to 90 recipients, all of whom were involved in a public hearing about child abuse. Sending a Bcc means none of the recipients can see each other’s details. But the sender then sent a follow-up email to correct an error—using the “To” field by mistake. The organization made things even worse by sending three follow-up emails asking recipients to delete the original message—one of which generated 39 subsequent “Reply all” emails in response. The error revealed the email addresses of all 90 recipients and 54 people’s full names. But is simply revealing someone’s name that big of a deal? Actually, a person’s name can be very sensitive data—depending on the context. In this case, IICSA’s error revealed that each of these 54 people might have been victims of child sexual abuse. 11. Boris Johnson’s dad’s email blunder nearly causes diplomatic incident Many of us know what it’s like to be embarrassed by our dad. Remember when he interrogated your first love interest? Or that moment your friends overheard him singing in the shower? Or when he accidentally emailed confidential information about the Chinese ambassador to the BBC? OK, maybe not that last one. That happened to the father of U.K. Prime Minister Boris Johnson in February 2020. Johnson’s dad, Stanley Johnson, was emailing British officials following a meeting with Chinese ambassador Liu Xiaoming. He wrote that Liu was “concerned” about a lack of contact from the Prime Minister to the Chinese state regarding the coronavirus outbreak. The Prime Minister’s dad inexplicably copied the BBC into his email, providing some lucky journalists with a free scoop about the state of U.K.-China relations. It appears the incident didn’t cause any big diplomatic issues—but we can imagine how much worse it could have been if Johnson had revealed more sensitive details of the meeting.
Prevent misdirected emails (and breaches) with Tessian Guardian Regardless of your region or industry, protecting customer, client, and company information is essential. But, to err is human. So how do you prevent misdirected emails? With machine learning.  Tessian turns an organization’s email data into its best defense against human error on email. Our Human Layer Security technology understands human behavior and relationships and automatically detects and prevents emails from being sent to the wrong person. Yep, this includes typos, accidental “reply alls” and cc’ing instead of bcc’ing. Tessian Guardian can also detect when you’ve attached the wrong file. Interested in learning more about how Tessian can help prevent accidental data loss and data exfiltration in your organization? You can read some of our customer stories here or book a demo.
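As a toy illustration of one signal such a system might use (and only one, far simpler than the behavioral models described above), here is a sketch that flags a typed address that is a near-miss for a known contact. The addresses are made up for the example, and this is not how Tessian's models work:

```python
# A toy illustration of one signal a misdirected-email check might use: is the
# typed address a near-miss for someone the sender actually knows? This
# edit-distance heuristic only shows the shape of the problem.
from difflib import get_close_matches
from typing import Optional

KNOWN_CONTACTS = ["rob.bateman@companya.com", "elaine.wong@companya.com"]

def near_miss(recipient: str, contacts: list[str]) -> Optional[str]:
    """Return a known contact that closely resembles the typed address, if any."""
    recipient = recipient.lower()
    if recipient in contacts:
        return None  # exact match, nothing suspicious
    matches = get_close_matches(recipient, contacts, n=1, cutoff=0.9)
    return matches[0] if matches else None

# "rod" instead of "rob": one wrong letter, flagged before the email is sent.
print(near_miss("rod.bateman@companya.com", KNOWN_CONTACTS))
# -> rob.bateman@companya.com
```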
DLP
What is a Misdirected Email?
05 March 2021
Misdirected emails are common — sending an email to the wrong person is an easy mistake. Who hasn’t done it? But they can also be disastrous, potentially damaging a company’s reputation, revealing its confidential data, and breaching its customers’ privacy. If you’re looking for a solution versus an explanation of the problem, we’ve got you covered. Learn more about how Tessian Guardian prevents misdirected emails. How common are misdirected emails? Many of us have been using email daily for our entire working lives. In fact, around 4 billion people use email regularly, sending around 306.4 billion emails every day. That explains why misdirected emails are such a major problem. According to research, 58% of people have sent an email to the wrong person while at work, with 20% of respondents stating that this action has lost their company business — and 12% stating that it cost them their job. And according to Tessian platform data, organizations with over 1,000 employees send around 800 misdirected emails every year. That’s more than two emails a day. It’s also the most common type of error to cause a breach, according to Verizon’s 2021 DBIR. Indeed, year after year, the UK’s Information Commissioner’s Office reports misdirected emails as the number one cause of data breaches. And the latest breach data from California also shows that email “misdelivery” was the most common type of data breach caused by human error. Looking for some examples? Check out this article: 7 Data Breaches Caused by Misdirected Emails. Why do misdirected emails keep happening? So — why do we keep making this mistake? Well, the problem is partly down to burnout. Around 52% of people say they’re more likely to make mistakes while tired — and 93% said they were tired at some point during the working week. But there are some technical issues that lead to misdirected emails, too. Spelling mistakes Email is “interoperable,” meaning that, for example, Gmail users can email Outlook users without issue. In fact, any two people can email each other, as long as they have internet access. So this communication method is highly flexible — but also open to sending errors. Need to email your payroll data/passport photo/HR file to rob.bateman@companyA.com? Make sure you don’t accidentally type “rod.bateman@companyA.com”, or worse — “rob.bateman@companyB.com”. The “To” field takes us back to a time before spellcheck began correcting our mistakes without us even noticing. One wrong letter can lead to a data breach. Autocomplete When you’re typing an email address into Gmail, Outlook, or any other popular email client, you may notice the “autocomplete” function trying to finish it off for you. Autocomplete can be a very useful feature when you email the same person regularly. But autocomplete can also lead to misdirected emails when: You start typing in the “To” field. You see the autocomplete function completing the recipient’s name. You press “Tab” or “Enter” without checking whether autocomplete has chosen the right recipient from your address book. Productivity guru Cal Newport estimates that we send and receive around 126 email messages per day — so features like autocomplete save businesses significant amounts of time. But the impact of one misdirected email can undo these benefits. Bcc error Bcc (which stands for “blind carbon copy”) lets you hide recipients when sending an email.
There are a few benefits to using Bcc, but its most useful function is when emailing a large group of people. If you don’t want any of the recipients to know who else got the email, you can put them all in the Bcc field. Mailing lists are covered by data protection laws, such as the EU General Data Protection Regulation (GDPR). In most cases, each recipient of an email has the right to keep their email address private from the other recipients.  That’s why accidentally using the “Cc” or “To” field instead of the “Bcc” field can constitute a data breach. Indeed, in January 2020, speaker company Sonos referred itself to the UK’s data regulator after an employee accidentally copied 450 recipients into the Cc field. The dreaded “Reply All” Here’s one almost all of us have done before — hitting “Reply All” on an email to multiple recipients when we only meant to email one person (e.g., the sender). In most cases, accidentally “replying to all” is little more than an embarrassment. But consider Maria Peterson, who, in 2018, accidentally replied to all of Utah’s 22,000 public sector employees. Misattached files Misattached files and misdirected emails aren’t the same things — but misattached files (attaching the wrong file to an email) deserve a dishonorable mention in this article.  Around one in five emails contains an attachment, and Tessian research reveals some troubling data about this type of human error-based data breach: 48% of employees have emailed the wrong attachment 42% of misattached files contained company data or research 39% contained authentication data like passwords Misattached files caused the offending company legal issues in 31% of cases Looking for a solution? We have one.  Next steps We’ve looked at five types of misdirected email, and hopefully, you understand how serious a problem misdirected emails can be. To find out how to prevent — or recover from — misdirected emails, take a look at our article: You Sent an Email to the Wrong Person. Now What?