Tessian’s mission is to secure the human layer by empowering people to do their best work, without security getting in their way.
Insider threats are a big problem for organizations across industries. Why? Because they’re so hard to detect. After all, insiders have legitimate access to systems and data, unlike the external bad actors many security policies and tools help defend against.
It could be anyone, from a careless employee to a rogue business partner.
That’s why we’ve put together this list of Insider Threat types and examples. By exploring different methods and motives, security, compliance, and IT leaders (and their employees) will be better equipped to spot Insider Threats before a data breach happens.
First things first, let’s define what exactly an Insider Threat is.
Insider threats are people – whether employees, former employees, contractors, business partners, or vendors – with legitimate access to an organization’s networks and systems who deliberately exfiltrate data for personal gain or accidentally leak sensitive information.
The key here is that there are two distinct types of Insider Threats: Malicious Insiders, who deliberately exfiltrate or misuse data, and Negligent Insiders, who expose it by accident.
We cover these different types of Insider Threats in detail in this article: What is an Insider Threat? Insider Threat Definition, Examples, and Solutions.
Since the outbreak of COVID-19, 81% of the global workforce have had their workplace fully or partially closed. And, with the economy grinding to a halt, employees across industries have been laid off or furloughed.
This has caused widespread distress.
When you combine this distress with the reduced visibility IT and security teams have while employees work from home, you’re bound to see more incidents involving Malicious Insiders.
One such case involves Dobbins, a former employee of a medical device packaging company, who was let go in early March 2020.
By the end of March – and after he was given his final paycheck – Dobbins hacked into the company’s computer network, granted himself administrator access, and then edited and deleted nearly 120,000 records.
This caused significant delays in the delivery of medical equipment to healthcare providers.
In 2017, an employee at Bupa accessed customer information via an in-house customer relationship management system, copied the information, deleted it from the database, and then tried to sell it on the Dark Web.
The breach affected 547,000 customers, and in 2018, after an investigation by the ICO, Bupa was fined £175,000.
In July 2020, further details emerged of a long-running insider job at General Electric (GE) that saw an employee steal valuable proprietary data and trade secrets.
The employee, named Jean Patrice Delia, gradually exfiltrated over 8,000 sensitive files from GE’s systems over eight years — intending to leverage his professional advantage to start a rival company.
The FBI investigation into Delia’s scheme revealed that he persuaded an IT administrator to grant him access to files, and that he emailed commercially sensitive calculations to a co-conspirator. Having pleaded guilty to the charges, Delia faces up to 87 months in jail.
What can we learn from this extraordinary inside job? Ensure you have watertight access controls and that you can monitor employee email accounts for suspicious activity.
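To illustrate that second point, here’s a minimal sketch of what monitoring outbound email for possible exfiltration could look like. The event fields, freemail list, and size threshold are all assumptions made for this example; they don’t describe any particular security product’s API:

```python
# Minimal sketch: flag outbound emails that may indicate data exfiltration.
# Field names, domains, and thresholds are illustrative assumptions.

FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}
MAX_ATTACHMENT_BYTES = 10 * 1024 * 1024  # flag attachments over 10 MB

def is_suspicious(event: dict, company_domain: str) -> bool:
    """Return True if an outbound email event looks like possible exfiltration."""
    recipient_domain = event["recipient"].rsplit("@", 1)[-1].lower()
    external = recipient_domain != company_domain
    freemail = recipient_domain in FREEMAIL_DOMAINS
    attachment_bytes = event.get("attachment_bytes", 0)
    # Flag large attachments leaving the company, or any attachment at all
    # sent to a personal freemail address.
    return (external and attachment_bytes > MAX_ATTACHMENT_BYTES) or (
        freemail and attachment_bytes > 0
    )

events = [
    {"recipient": "colleague@example.com", "attachment_bytes": 2_000},
    {"recipient": "me.personal@gmail.com", "attachment_bytes": 50_000},
    {"recipient": "partner@supplier.net", "attachment_bytes": 25 * 1024 * 1024},
]
flagged = [e for e in events if is_suspicious(e, "example.com")]
print(len(flagged))  # 2: the freemail send and the oversized external attachment
```

In practice, a real control would look at many more signals (recipient history, content sensitivity, time of day), but even a crude rule like this surfaces the pattern behind the GE case: sensitive material flowing to addresses outside the company.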
Here’s an example of a “negligent insider” threat. In December 2019, a researcher from Comparitech noticed that around 250 million Microsoft customer records were exposed on the open web.
This vulnerability meant that the personal information of up to 250 million people—including email addresses, IP addresses, and location—was accessible to anyone with a web browser.
This incident represents a potentially serious breach of privacy and data protection law and could have left Microsoft customers open to scams and phishing attacks—all because the relevant employees failed to secure the databases properly.
Microsoft reportedly secured the information within 24 hours of being notified about the breach.
Russian secret services reported in 2018 that they had arrested employees of the country’s leading nuclear research lab on suspicion of using a powerful supercomputer for bitcoin mining.
Authorities discovered that scientists had abused their access to some of Russia’s most powerful supercomputers by rigging up a secret bitcoin-mining data center.
Bitcoin mining is extremely resource-intensive and some miners are always seeking new ways to outsource the expense onto other people’s infrastructure. This case is an example of how insiders can misuse company equipment.
While we’ve seen a spike in phishing and spear phishing attacks since the outbreak of COVID-19, these aren’t new threats.
One example involves a spear phishing email sent to a senior staff member at Australian National University. The result? 700 megabytes of data were stolen.
This data was related to both staff and students and included details like names, addresses, phone numbers, dates of birth, emergency contact numbers, tax file numbers, payroll information, bank account details, and student academic records.
Cybercriminals saw an opportunity when many of Twitter’s staff started working from home. One cybercrime group conducted one of the most high-profile hacks of 2020 — knocking 4% off Twitter’s share price in the process.
In July 2020, after gathering information on key home-working employees, the hackers called them up and impersonated Twitter IT administrators. During these calls, they successfully persuaded some employees to disclose their account credentials.
Using this information, the cybercriminals logged into Twitter’s admin tools, changed the passwords of around 130 high-profile accounts — including those belonging to Barack Obama, Joe Biden, and Kanye West — and used them to conduct a Bitcoin scam.
This incident put “vishing” (voice phishing) on the map, and it reinforces what all cybersecurity leaders know — your company must apply the same level of cybersecurity protection to all its employees, whether they’re working on your premises or in their own homes.
Want to learn more about vishing? We cover it in detail in this article: Smishing and Vishing: What You Need to Know About These Phishing Attacks.
The case of San Jose resident Sudhish Kasaba Ramesh serves as a reminder that it’s not just your current employees that pose a potential internal threat—but your ex-employees, too.
Ramesh received two years imprisonment in December 2020 after a court found that he had accessed Cisco’s systems without authorization, deploying malware that deleted over 16,000 user accounts and caused $2.4 million in damage.
The incident emphasizes the importance of properly restricting access controls—and locking employees out of your systems as soon as they leave your organization.
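As a sketch of that lesson, here’s what a one-pass offboarding routine might look like. The system names and revoke callables below are hypothetical stand-ins for real identity provider, VPN, and SaaS admin APIs; the point is the shape of the process, not the specific integrations:

```python
# Minimal offboarding sketch: revoke a leaver's access everywhere in one pass.
# The systems and revoke functions are hypothetical placeholders.

from datetime import datetime, timezone

def revoke_all_access(username: str, systems: dict) -> list:
    """Disable an account in every registered system and return an audit log."""
    audit = []
    for name, revoke in systems.items():
        ok = revoke(username)  # each callable returns True on success
        audit.append({
            "system": name,
            "user": username,
            "revoked": ok,
            "at": datetime.now(timezone.utc).isoformat(),
        })
    # Surface any failure loudly: a single missed system leaves a live account.
    failures = [a["system"] for a in audit if not a["revoked"]]
    if failures:
        raise RuntimeError(f"Revocation incomplete for: {failures}")
    return audit

# Example with in-memory stand-ins for real systems.
active = {"sso": {"jdoe"}, "vpn": {"jdoe"}, "repo": {"jdoe"}}
systems = {name: (lambda u, s=store: s.discard(u) or u not in s)
           for name, store in active.items()}
log = revoke_all_access("jdoe", systems)
print(all(a["revoked"] for a in log))  # True
```

Keeping an audit trail and failing hard on partial revocation matters: the Cisco incident shows what a single still-valid credential can cost.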
This incident involves two of the biggest tech players: Google and Uber.
In 2015, a lead engineer at Waymo, Google’s self-driving car project, left the company to start his own self-driving truck venture, Otto. But, before departing, he exfiltrated several trade secrets including diagrams and drawings related to simulations, radar technology, source code snippets, PDFs marked as confidential, and videos of test drives.
How? By downloading 14,000 files onto his laptop directly from Google servers.
Otto was acquired by Uber after a few months, at which point Google executives discovered the breach. In the end, Waymo was awarded $245 million worth of Uber shares and, in March 2020, the employee pleaded guilty.
Coca-Cola was forced to issue data breach notification letters to around 8,000 employees after a worker stole a hard drive containing human resources records.
Why did this employee steal so much data about his colleagues? Coca-Cola didn’t say. But we do know that the employee had recently left his job—so he may have seen an opportunity to sell or misuse the data once outside of the company.
Remember—network and cybersecurity are crucial, but you need to consider whether insiders have physical access to data or assets, too.
Toward the end of October 2020, an unknown number of Amazon customers received an email stating that their email address had been “disclosed by an Amazon employee to a third-party.”
Amazon said that the “employee” had been fired — but the story changed slightly later on, according to a statement shared by Motherboard which referred to multiple “individuals” and “bad actors.”
So how many customers were affected? What motivated the leakers? We still don’t know. But this isn’t the first time that the tech giant’s own employees have leaked customer data. Amazon sent out a near-identical batch of emails in January 2020 and November 2018.
If there’s evidence of systemic insider exfiltration of customer data at Amazon, this must be tackled via internal security controls.
In September 2020, a Nevada court charged Russian national Egor Igorevich Kriuchkov with conspiracy to intentionally cause damage to a protected computer.
The court alleges that Kriuchkov attempted to recruit an employee of Tesla’s Nevada Gigafactory. Kriuchkov and his associates reportedly offered a Tesla employee $1 million to “transmit malware” onto Tesla’s network via email or USB drive to “exfiltrate data from the network.”
The Kriuchkov conspiracy was disrupted before any damage could be done. But it wasn’t the first time Tesla had faced an insider threat. In June 2018, CEO Elon Musk emailed all Tesla staff to report that one of the company’s employees had “conducted quite extensive and damaging sabotage to [Tesla’s] operations.”
With state-sponsored cybercrime syndicates wreaking havoc worldwide, we could soon see further attempts to infiltrate companies. That’s why it’s crucial to run background checks on new hires and ensure an adequate level of internal security.
Police in Ukraine reported in 2018 that a man had attempted to sell 100 GB of customer data to his ex-employer’s competitors—for the bargain price of $4,000.
The man allegedly used his insider knowledge of the company’s security vulnerabilities to gain unauthorized access to the data.
This scenario presents another challenge to consider when preventing insider threats—you can revoke ex-employees’ access privileges, but they might still be able to leverage their knowledge of your systems’ vulnerabilities and weak points.
Misdirected emails happen more often than most people think. In fact, Tessian platform data shows that at least 800 misdirected emails are sent every year in organizations with 1,000 employees. But what are the implications?
It depends on what data has been exposed.
In one incident in mid-2019, the private details of 24 NHS employees were exposed after someone in the HR department accidentally sent an email to a team of senior executives.
While the employee apologized, the exposure of PII like this can lead to identity theft and even physical harm to those affected. We outline even more consequences of misdirected emails in this article.
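One simple, illustrative way to catch the classic “fat-finger” misdirected email is to compare each recipient’s domain against the domains the sender normally emails. The known-domain list and the edit-distance threshold below are assumptions for the sketch, not a description of how any production tool works:

```python
# Illustrative check for a likely-misdirected email: warn when a recipient's
# domain is one character away from a domain the sender regularly emails.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_misdirected(recipient: str, known_domains: set) -> bool:
    """Flag recipients whose domain is a near-miss of a familiar one."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain in known_domains:
        return False
    # A one-character difference from a familiar domain is the classic typo.
    return any(edit_distance(domain, known) == 1 for known in known_domains)

known = {"nhs.uk", "example.com"}
print(looks_misdirected("jane@exmple.com", known))   # True: a letter dropped from example.com
print(looks_misdirected("jane@example.com", known))  # False: exact match with a known domain
```

A check like this only catches typos, of course; the NHS case above (a correct address, wrong distribution list) needs content-aware controls rather than string matching.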
NHS coronavirus contact-tracing app details were leaked after documents hosted in Google Drive were left open for anyone with a link to view. Worse still, links to the documents were included in several others published by the NHS.
These documents – marked “SENSITIVE” and “OFFICIAL” – contained information about the app’s future development roadmap and revealed that officials within the NHS and Department of Health and Social Care are worried about the app’s reliability, and that it could be open to abuse that leads to public panic.
In 2017, a California court found ex-security officer Yovan Garcia guilty of hacking his ex-employer’s systems to steal its data, destroy its servers, deface its website, and copy its proprietary software to set up a rival company.
The cybercrime spree was reportedly sparked after Garcia was fired for manipulating his timesheet. Garcia received a fine of over $316,000 for his various offenses.
The sheer amount of damage caused by this one disgruntled employee is pretty shocking. Garcia stole employee files, client data, and confidential business information; destroyed backups; and even uploaded embarrassing photos of his one-time boss to the company website.
We mentioned earlier that employees often email company data to themselves to work over the weekend.
But in this incident, an employee at Boeing shared a spreadsheet with his wife in the hope that she could help solve formatting issues. While this sounds harmless, it wasn’t. The personal information of 36,000 employees was exposed, including employee ID data, places of birth, and accounting department codes.
Incidents involving Insider Threats are on the rise, with a marked 47% increase over the last two years. This isn’t trivial, especially considering the global average cost of an Insider Threat is $11.45 million. This is up from $8.76 million in 2018.
It’s worth noting, though, that credential theft is the most detrimental to an organization’s bottom line, costing an average of $2.79 million.
The “what, who, and why” behind incidents involving Insider Threats vary greatly by industry.
For example, customer data is most likely to be compromised by an Insider in the Healthcare industry, while money is the most common target in the Finance and Insurance sector.
But, who exfiltrated the data is just as important as what data was exfiltrated. The sectors most likely to experience incidents perpetrated by trusted business partners are:
Overall, though, when it comes to employees misusing their access privileges, the Healthcare and Manufacturing industries experience the most incidents. On the other hand, the Public Sector suffers the most from lost or stolen assets and also ranks in the top three for miscellaneous errors (for example misdirected emails) alongside Healthcare and Finance. You can find even more stats about Insider Threats (including a downloadable infographic) here.
The bottom line: Insider Threats are a growing problem. We have a solution.