Request a Demo of Tessian Today.
Automatically stop data breaches and security threats caused by employees on email. Powered by machine learning, Tessian detects anomalies in real time, integrates seamlessly with your email environment within minutes, and starts protecting in a day. It gives you unparalleled visibility into human security risks so you can remediate threats and ensure compliance.
Threat Intel
US Legal Education Provider Spam Campaign Detected
By Charles Brook
Friday, June 11th, 2021
Overview
Time period: March 2020 – May 28, 2021
Number of emails sent: >405,000
Subject lines used: 5,881
Mailboxes targeted: 2,099
Sender domains used: 821

Tessian’s Research & Intelligence team have identified a pattern of suspicious email activity across the Tessian platform, originating from a US-based online “leader in legal education”. The first email campaigns were detected in early 2020. In every campaign, the organization appears to be promoting discounts on educational courses or new curriculum. New domains – our team has observed 2-3 new domains appearing per week – were used to evade spam filters and Secure Email Gateways (SEGs).

Who was targeted?
Over 10% of our customer base received one of the campaigns from this legal education firm. 65% of the targeted customers are in the Legal sector; 25% are in Financial Services. Almost all targeted customers are US-based. Nearly every targeted customer has a legacy SEG and Tessian Defender as part of their inbound email tech stack. These emails bypassed the SEGs, but were flagged as potentially malicious by Tessian Defender.
One single law firm received an astounding 280,000 emails from this organization in a little over a year. Other Tessian customers received several hundred to several thousand in the same time frame. High-volume campaigns like this are normally not very targeted or customized to the recipient, and in this case the sender has taken a scatter-shot approach in the hope that a fraction of the recipients engage. Even if these emails are not malicious, they are certainly a nuisance – especially for busy attorneys.

What was the angle?
Nearly 6,000 subject lines were used in these email campaigns. Notable themes and keywords include:
- Coronavirus / COVID-19
- Cryptocurrency, Blockchain, Bitcoin and Smart Contracts
- AirBnB & Short-Term Rental Law
- Marijuana, Hemp and Cannabis Law
- Judgments & Asset Protection
- Uber, Lyft & Ridesharing Law
- Discounts
- Last/final day to register

It appears that they are attempting to capitalize on new or trending legal topics, which could be particularly relevant to law firms and financial services institutions.
Suspicious, not necessarily malicious
While this legal education provider may be a legitimate organization, their website is insecure (no SSL certificate, no padlock icon) and, more importantly, the way they are building and distributing these email campaigns is suspicious; their tactics mimic those deployed by cybercriminals to evade defenses. For example, the emails are often sent from a recently registered domain by a sender the recipient has probably never seen before. These are two key indicators that trigger Tessian Defender. In a little over a year, the legal education provider registered over 800 domains, sent emails from over 825 email addresses, and used about 20 different display names. This sort of behavior indicates that they were deliberately crafting emails to bypass rule-based filtering. [Read more about display name and domain manipulation.]

Why? Once a domain has developed a reputation for spam, it can be added to a spam blacklist, which is a significant factor considered by spam filters. Registering a new domain with a fresh or unknown reputation is the easiest way to get around this. This is not dissimilar to how hackers create phishing attacks. The emails also often conveyed a sense of urgency to bait the recipient into buying or signing up for something while a certain discount was still available. Urgency (e.g. “Last day to register”) is another technique regularly employed in phishing emails. Most of the URLs in the emails pointed to a legitimate website called Constant Contact (an email marketing tool).

What can you do about it?
General guidance
- Limit how far you share your email address across the internet. Keep it private unless it is essential to share it.
- Do not click on any links in spam emails, as they could be malicious.
- Mark the email as spam or move it to your spam/junk folder to help train the spam recognition algorithm.
- After marking it as spam, delete the email from your spam/junk folder.

If you’re a Tessian customer
- Review attacks in the Tessian portal and add senders to a denylist so they are blocked before reaching inboxes in the future.
- Review attacks in the Tessian portal and remove emails from employee inboxes.
- Use the Human Layer Risk Hub to understand which employees are most at risk of phishing, then notify them individually or create customized warnings to educate them about the risk.

The primary way to avoid spam is to limit how much you share your email address across the internet. Be cautious about which people and services you give your email address to – whether it’s your personal or business address. Some services may willingly sell your information to spammers or marketers. The key difference between marketing emails and spam is that marketing emails should only be sent to recipients who have consented to receive them. To comply with regulations like GDPR and CCPA, marketing emails must also provide an easy way to opt out of future emails, for example by including an unsubscribe link or button in the email. Last but not least, if you’re a lawyer, always make sure the provider and its legal training courses are accredited.
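For illustration only, here is a minimal sketch (in Python, and not Tessian’s actual detection logic) of how the two indicators described above – a recently registered sender domain and a sender the recipient has never corresponded with – could be combined into a simple heuristic. The 30-day threshold and the helper name are assumptions made for this example.

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: treat a sender domain registered within the
# last 30 days as "newly registered".
NEW_DOMAIN_AGE = timedelta(days=30)

def looks_suspicious(domain_registered_on: datetime,
                     sender_address: str,
                     known_senders: set) -> bool:
    """Flag an email when the sender domain is newly registered AND the
    recipient has no prior correspondence with the sender address."""
    domain_is_new = datetime.now(timezone.utc) - domain_registered_on < NEW_DOMAIN_AGE
    sender_is_unknown = sender_address.lower() not in known_senders
    return domain_is_new and sender_is_unknown

# Example: a week-old domain and a never-before-seen sender would be flagged.
print(looks_suspicious(
    datetime.now(timezone.utc) - timedelta(days=7),
    "offers@newly-registered-domain.example",
    {"colleague@lawfirm.example"},
))  # prints: True
```

In practice, a static denylist on its own struggles against a sender registering 2-3 new domains per week, which is why behavioral signals like the two above are worth checking alongside domain reputation.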
Read Blog Post
Customer Stories
Advanced Inbound and Outbound Threat Protection for an International Law Firm
Friday, June 11th, 2021
Company: Penningtons Manches Cooper
Industry: Legal
Company size: 1,000 employees
Solutions: Enforcer, Guardian, Defender
Environment: Hybrid
Platform: Outlook
Customer since: 2016

About Penningtons Manches Cooper
Penningtons Manches Cooper is a leading UK and international law firm which provides high quality legal advice to both businesses and individuals. The firm has UK offices in the City of London, Basingstoke, Birmingham, Cambridge, Guildford, Oxford and Reading, with an overseas network stretching from Asia to South America through their presence in Singapore, Piraeus, Paris, Madrid and São Paulo. With 130 partners and over 880 people in total, Penningtons Manches Cooper is acknowledged as a dynamic and forward-thinking practice which combines legal services with a responsive and flexible approach. They have established a strong reputation in a variety of sectors, particularly private wealth, shipping, technology and property. Penningtons Manches Cooper lawyers are also recognised for their expertise in life sciences, education, retail, sports and entertainment, and international trade.

Before Tessian…
Before deploying Tessian in 2016, Marcus Shepherd, Best Practice Operations, and Richard Mullins, IT Security Engineer, both suspected Penningtons Manches Cooper had a more significant problem with email data breaches than was being reported. As Marcus explained: “It was pretty clear that, together with the rest of the industry back then, we had a problem with email data breaches but had no visibility as to the extent of it. We had reporting processes in place, but had a hunch that the actual number of incidents was higher than those being reported by employees. Part of the problem was education. Complete understanding of what constituted a data breach and the possible consequences of data breaches – even with very basic personal details – was not fully understood then. A lot of employees were not clear that if something had taken place, it needed to be reported.” While they were leveraging some standard rules in Outlook for inbound threats, they were relying on employee training, rule-based systems, and self-reporting to prevent outbound threats like misdirected emails and data exfiltration (both accidental and malicious).
According to Marcus and Richard, they lacked visibility and control over threats, employees were struggling with alert fatigue, and their security team was inundated with more false positives than they could investigate.

Must-have features…
In evaluating solutions, the firm was originally looking for three key features.
Effectiveness: Because data loss incidents were a concern, their top priority was to find a solution that would accurately predict data loss incidents on email. But, unsurprisingly, they were wary of any solution that might trigger false positives, which would distract partners and cause alert fatigue.
Ease of use: They wanted a tool that would be easy to deploy and not require a large security team to manage it day-to-day.
Education: It can be difficult to encourage fee-earners to prioritize security considerations when dealing with busy and demanding clients. The pop-ups triggered by rule-based tools weren’t offering employees the information they needed to understand how to handle data safely or why it was so important to do so. Marcus and Richard wanted a tool that offered context and complemented training and awareness programs.
With Tessian…
As an innovative firm with a proactive security team, Penningtons Manches Cooper was an early adopter of Tessian and deployed Tessian Guardian and Tessian Enforcer in 2016 to prevent misdirected emails and data exfiltration on email. In 2019 – as soon as it was released to market – they deployed Tessian Defender.

Tessian offers advanced threat protection
Since deploying Tessian, Richard and Marcus have seen Tessian Enforcer reduce loss of IP from people leaving the firm, have seen over 3,000 interventions where Tessian Guardian prevented a potential data breach by flagging a misdirected email, and have seen Tessian Defender prevent advanced impersonation attacks including CEO Fraud and Business Email Compromise. “Tessian is a vital part of our security stack when it comes to cyber awareness, risk and compliance, and information protection. It’s an essential perimeter defense – and sometimes the last line of defense,” Richard said.

Tessian surfaces rich insights about employee behavior on email
With Human Layer Risk Hub, Penningtons Manches Cooper’s security team has clear visibility of threats. “Tessian is doing the heavy lifting for us now. We’re no longer looking through spreadsheets with hundreds or thousands of events. With Human Layer Risk Hub, we get incredible visibility within the portal into high-risk users and high-risk events. We can now identify users whose behavior could put us at risk, whether it’s via misdirected emails, unauthorized emails, or spear phishing attacks. This all helps massively with incident response since our security and compliance teams do not have limitless resources,” Richard said.

In-the-moment warnings reinforce security awareness training and reduce risk over time
Tessian’s in-the-moment warnings offer context about why an email is being flagged as malicious or suspicious. They’re written in clear, easy-to-understand language and help nudge employees towards safer behavior over time.
The platform is easy to deploy and manage day-to-day
Tessian deploys within minutes, learns within hours, and starts protecting in a day. Richard and Marcus experienced this during their initial deployment and again during their merger with Thomas Cooper LLP in 2019. As Marcus explained: “Deploying Tessian across new users after the merger was seamless. We got everyone connected immediately, which helped us extend our security culture right away.”

Low flag rates and false positives mean Tessian doesn’t get in the way
It was important for Marcus and Richard to find a tool that worked without distracting, frustrating, or confusing their especially busy lawyers. With Tessian, they no longer struggle with high rates of false positives.
Tessian sets the benchmark for technology partners
From the outset, Richard and Marcus have been proactive in helping shape Tessian’s product roadmap to serve them, other law firms, and customers across industries. “In terms of a relationship with a supplier, Tessian is the benchmark for continuous improvement and adapting to the threat landscape. We have a huge amount of engagement and feedback with Tessian which has helped to improve our email security posture. They actively want to go on our journey with us and are always willing to listen to our concerns or requirements,” Richard said.
Read Blog Post
Spear Phishing
8 Real-World Examples of Business Email Compromise (Updated 2021)
Thursday, June 10th, 2021
Business Email Compromise (BEC) attacks use real or impersonated business email accounts to defraud employees. The FBI calls BEC a “$26 billion scam” that affects thousands of businesses every year. This article will look at some examples of BEC attacks that have cost organizations money, time, and reputation — to help you avoid making the same mistakes. Not sure what BEC is? We tell you everything you need to know about it – including how it works – in this article: What is Business Email Compromise and How Does it Work? You can also learn how Tessian prevents BEC for organizations across industries here.

1. $17.2m acquisition scam
Our first example demonstrates how fraudsters can play on a target’s trust and exploit interpersonal relationships. In June 2014, Keith McMurtry, a Scoular employee, received an email supposedly from his boss, CEO Chuck Elsea. The email informed McMurtry that Scoular was set to acquire a Chinese company. Elsea instructed McMurtry to contact a lawyer at accounting firm KPMG. The lawyer would help facilitate a transfer of funds and close the deal. McMurtry obeyed, and he soon found himself transferring $17.2 million to a Shanghai bank account in the name of “Dadi Co.” The CEO’s email, as you might have guessed, was fraudulent. The scammers had used email impersonation to create accounts imitating both Elsea and the KPMG lawyer. Aside from the gargantuan $17.2m loss, what’s special about the Scoular scam? Take a look at this excerpt from the email, provided by FT.com, from “Elsea” to McMurtry: “We need the company to be funded properly and to show sufficient strength toward the Chinese. Keith, I will not forget your professionalism in this deal, and I will show you my appreciation very shortly.” Given the emotive language, the praise, and the promise of future rewards — it’s easy to see why an employee would go along with a scam like this.

2. Law enforcement turns a blind eye to nonprofit’s $625,000 BEC loss
BEC rates have been rising for several years, as demonstrated by 2021 data from the FBI’s Internet Crime Complaint Center (IC3). The IC3 says that in 2020, losses from BEC exceeded $1.8 billion—that’s a fourfold increase since 2016. The number of BEC incidents also rose by 61% between 2016 and 2020. So perhaps it’s unsurprising—if somewhat disheartening—that law enforcement agencies are struggling to cope with all the BEC incidents that companies are reporting to them. In June 2021, we learned that San Francisco-based homelessness charity Treasure Island fell victim to a devastating, month-long $625,000 BEC attack after hackers infiltrated the organization’s bookkeeper’s email system. The hackers found and manipulated a legitimate invoice used by one of Treasure Island’s partner organizations. Staff at Treasure Island transferred a loan intended for the partner organization straight into the cybercriminals’ bank account. The nonprofit sadly lacked cybercrime insurance. But even worse—the U.S. Attorney’s Office in San Francisco, which would have been responsible for leading an investigation into the BEC attack, reportedly declined to investigate the incident. This case serves as a reminder that, when it comes to cybercrime, prevention is always better than cure. Building security into your systems is the only viable way to avoid the losses associated with BEC attacks.

3. BEC scammers exploit COVID-19 fears
2020 was a turbulent year, and we saw cybercriminals exploiting people’s fear and uncertainty like never before.
A particularly prevalent example was the trend of COVID-19-related BEC scams. As the pandemic spread, governments worldwide issued warnings about a surge in cyberattacks. In April 2020, for example, the FBI warned that scammers were “using the uncertainty surrounding the COVID-19 pandemic” to conduct BEC scams. The FBI gave one example of an unnamed company, whose supposed supplier requested payments to a new account “due to the Coronavirus outbreak and quarantine processes and precautions.” Criminals will always seek to capitalize on chaos. In December 2020, Keeper reported that uncertainty caused by COVID-19, Brexit, and the move to remote working led to 70% of U.K. finance companies experiencing BEC attacks over the preceding year. Looking for more examples of scammers exploiting COVID-19 fears? We share four more and outline the red flags contained in each here. BONUS! There’s a downloadable guide at the bottom of the article.

4. Prison sentence for Atlantan BEC scammer
In June 2021, an Atlanta court sentenced Anthony Dwayne King to two and a half years in prison for his role in a BEC scam—but only after he’d earned nearly $250,000 ripping off businesses and individuals across four U.S. states. Between October 2018 and February 2019, King and his accomplices conducted BEC and vishing (phone phishing) operations, setting up fake companies and opening fraudulent bank accounts to redirect wire transfers. The cybercriminals targeted law firms and home movers but were thwarted by Georgia’s Cyber Fraud Task Force. As well as serving federal prison time, King will have to repay the money he stole from his victims.

5. Hacker group behind SolarWinds attack launches BEC campaign
The cybersecurity world was rocked in 2020 by the SolarWinds attack, in which Russian group Nobelium (also known as Cozy Bear and APT29, among other names) pushed its malware into thousands of organizations’ systems via a software update. In March 2021, we learned about Nobelium’s new campaign. Rather than hijacking software updates provided by a trusted software provider, Nobelium’s most recent cybercrime spree leverages a trusted mass email provider. Nobelium reportedly used email provider Constant Contact to send more than 3,000 emails to over 150 organizations, including government agencies. The emails were disguised as information about electoral fraud and contained a malicious payload designed to create a backdoor into the recipient’s computer. As companies worldwide attempt to recover from the impact of the SolarWinds attack, Nobelium’s follow-on campaign reminds us about the variety of threat vectors available to cybercrime groups. If you want to learn more about the SolarWinds attack, check out our conversation with world-renowned hacker Samy Kamkar.

6. $46.7m vendor fraud
In August 2015, IT company Ubiquiti filed a report to the U.S. Securities and Exchange Commission revealing it was the victim of a $46.7 million “business fraud.” This attack was an example of a type of BEC sometimes called Vendor Email Compromise (VEC). The scammers impersonated employees at a third-party company and targeted Ubiquiti’s finance department. We still don’t know precisely how the cybercriminals pulled off this massive scam. VEC attacks previously relied on domain impersonation and email spoofing techniques, but these days, scammers are increasingly turning to the more sophisticated account takeover method.
7. Snapchat payroll information breach
Many high-profile BEC attacks target a company’s finance department and request payment of an invoice to a new account. But not all BEC scams involve wire transfer fraud. Here’s an example of how BEC scams can target data, as well as money. In February 2016, cybercriminals launched a BEC attack against social media firm Snapchat. Impersonating Snapchat’s CEO, the attackers obtained “payroll information about some current and former employees.” The scam resulted in a breach of some highly sensitive data, including employees’ Social Security Numbers, tax information, salaries, and healthcare plans. Snapchat offered each affected employee two years of free credit monitoring and up to $1 million in reimbursement.

8. The big one: $121m BEC scam targeting Facebook and Google
Last — but by no means least — let’s look at the biggest known BEC scam of all time: a VEC attack against tech giants Facebook and Google that resulted in around $121 million in collective losses. The scam took place between 2013 and 2015 — and the man at the center of this BEC attack, Evaldas Rimasauskas, was sentenced to five years in prison in 2019. So how did some of the world’s most tech-savvy employees fall for this elaborate hoax? Rimasauskas and associates set up a fake company named “Quanta Computer” — the same name as a real hardware supplier. The group then presented Facebook and Google with convincing-looking invoices, which they duly paid into bank accounts controlled by Rimasauskas. As well as fake invoices, the scammers prepared counterfeit lawyers’ letters and contracts to ensure their banks accepted the transfers. The Rimasauskas scam stands as a lesson to all organizations. If two of the world’s biggest tech companies lost millions to BEC over a two-year period — it could happen to any business.

Want to explore other examples of email attacks? Check out these articles:
- 6 Examples of Social Engineering Attacks
- COVID-19: Real-Life Examples of Opportunistic Phishing Emails
- Phishing Statistics (Updated 2021)
Read Blog Post
Tessian Culture
The Rise Of The New-School CISO
By Henry Trevelyan Thomas
Wednesday, June 9th, 2021
I’m lucky. I get to speak to and learn from hundreds of CISOs each year across all different sizes and types of orgs. Each conversation gives a unique insight into how companies approach security and, crucially, what works well and what doesn’t. Over the last few years, there is a very specific type of security leader who I’ve seen succeeding time and time again: a type of CISO that has particular success in winning boards over, getting employees to engage, and ultimately reducing exposure for their org. So what makes these new-school CISOs so successful?

1. Security = sales
Gone are the days when security leaders are technical folks who shun any form of pitching as “not part of their job”. The best CISOs have recognized the importance of security in winning new customers and embraced it. Whether it’s leveraging their progressive security tech stack to impress, wowing a prospective customer’s security team over a call, or being a crucial part of the pitch team, security can have a material impact on the most important business driver (new revenue). And it doesn’t start when the pitch has been secured; I’m regularly hit up by CISOs at customers asking for intros to prospective new customers. CISOs winning and finding deals – no wonder they don’t struggle with exec attention.

2. Customer success = CISO’s success
I’m biased, but nothing is more important than making the customer successful. It’s easy for security teams to think they have limited ability to impact their customers’ success, but some CISOs are going above and beyond to make their customers successful. From proactively building relationships with customers’ security teams to notifying customers of potential vulnerabilities, nothing beats having a direct relationship with a customer. Better still, if you can help them be more secure, you’re creating value for the customer and protecting your organization; after all, you’re only as secure as the customers (and suppliers) you work with.

3. Employees are everything
Security doesn’t work in a vacuum. You won’t win as a CISO if you push down security policies and assume they’ll be adopted. This belief is at the core of the new school of CISOs – they know that they’ll only drive positive change in the organization if they bring people along for the journey. That means great storytelling, making content relevant, and doing everything they possibly can to help, not hinder, their employees in working securely. Some great examples I’ve seen here are hijacking the beginning of other meetings to educate on security, giving line managers the insight they need to make their teams work securely, and gamifying leaderboards to make security competitive.

4. Mission-driven
Businesses exist to achieve their mission. Security exists to ensure businesses can stay safe in order to achieve their mission. It’s that simple, and emergent CISOs are building their teams around this premise. Unfortunately, there are often years’ worth of clunky legacy technology and processes that restrict employees, meaning CISOs need to go against the status quo, ask more from their solutions, and not settle for anything less than frictionless UX. It’s awesome to see security teams start to judge their success on the delight of their employees – using metrics such as NPS or CSAT – as well as more security-centric metrics. After all, if you enable your employees, you’ll enable your business to succeed.
5. Technical proficiency
There’s no doubt that a level of technical proficiency is important for a CISO, but the CISOs who are influencing the most and driving change in the profession often win because they are great communicators, understand business drivers, and care about user experience. More and more, I’m seeing CISOs from non-technical backgrounds triumph by combining the above traits with hiring a team that brings the technical expertise. It’s been awesome seeing security – and the role of the CISO – change and observing which types of CISOs are having the most success. No doubt in 12 months’ time, the traits needed to succeed will have changed again and I’ll need to rewrite this 🤦‍♀️.
Read Blog Post
DLP, Data Exfiltration
Insider Threat Examples: 17 Real Examples of Insider Threats
By Maddie Rosenthal
Tuesday, June 8th, 2021
Insider threats are a big problem for organizations across industries. Why? Because they’re so hard to detect. After all, insiders have legitimate access to systems and data, unlike the external bad actors many security policies and tools help defend against. It could be anyone, from a careless employee to a rogue business partner. That’s why we’ve put together this list of Insider Threat types and examples. By exploring different methods and motives, security, compliance, and IT leaders (and their employees) will be better equipped to spot Insider Threats before a data breach happens.

Types of Insider Threats
First things first, let’s define what exactly an Insider Threat is. Insider Threats are people – whether employees, former employees, contractors, business partners, or vendors – with legitimate access to an organization’s networks and systems who deliberately exfiltrate data for personal gain or accidentally leak sensitive information. The key here is that there are two distinct types of Insider Threats:
The Malicious Insider: Malicious Insiders knowingly and intentionally steal data. For example, an employee or contractor may exfiltrate valuable information (like Intellectual Property (IP), Personally Identifiable Information (PII), or financial information) for some kind of financial incentive, a competitive edge, or simply because they’re holding a grudge for being let go or furloughed.
The Negligent Insider: Negligent Insiders are just your average employees who have made a mistake. For example, an employee could send an email containing sensitive information to the wrong person, email company data to personal accounts to do some work over the weekend, fall victim to a phishing or spear phishing attack, or lose their work device.
We cover these different types of Insider Threats in detail in this article: What is an Insider Threat? Insider Threat Definition, Examples, and Solutions.
17 Examples of Insider Threats

1. The employee who exfiltrated data after being fired or furloughed
Since the outbreak of COVID-19, 81% of the global workforce have had their workplace fully or partially closed. And, with the economy grinding to a halt, employees across industries have been laid off or furloughed. This has caused widespread distress. When you combine this distress with the reduced visibility of IT and security teams while their teams work from home, you’re bound to see more incidents of Malicious Insiders. One such case involves Christopher Dobbins, a former employee of a medical device packaging company who was let go in early March 2020. By the end of March – and after he was given his final paycheck – Dobbins hacked into the company’s computer network, granted himself administrator access, and then edited and deleted nearly 120,000 records. This caused significant delays in the delivery of medical equipment to healthcare providers.

2. The employee who sold company data for financial gain
In 2017, an employee at Bupa accessed customer information via an in-house customer relationship management system, copied the information, deleted it from the database, and then tried to sell it on the Dark Web. The breach affected 547,000 customers, and in 2018, after an investigation by the ICO, Bupa was fined £175,000.

3. The employee who stole trade secrets
In July 2020, further details emerged of a long-running insider job at General Electric (GE) that saw an employee steal valuable proprietary data and trade secrets. The employee, named Jean Patrice Delia, gradually exfiltrated over 8,000 sensitive files from GE’s systems over eight years — intending to leverage his professional advantage to start a rival company. The FBI investigation into Delia’s scam revealed that he persuaded an IT administrator to grant him access to files and that he emailed commercially sensitive calculations to a co-conspirator. Having pleaded guilty to the charges, Delia faces up to 87 months in jail. What can we learn from this extraordinary inside job? Ensure you have watertight access controls and that you can monitor employee email accounts for suspicious activity.

4. The employees who exposed 250 million customer records
Here’s an example of a “negligent insider” threat. In December 2019, a researcher from Comparitech noticed that around 250 million Microsoft customer records were exposed on the open web. This vulnerability meant that the personal information of up to 250 million people—including email addresses, IP addresses, and location—was accessible to anyone with a web browser. This incident represents a potentially serious breach of privacy and data protection law and could have left Microsoft customers open to scams and phishing attacks—all because the relevant employees failed to secure the databases properly. Microsoft reportedly secured the information within 24 hours of being notified about the breach.

5. The nuclear scientists who hijacked a supercomputer to mine Bitcoin
Russian secret services reported in 2018 that they had arrested employees of the country’s leading nuclear research lab on suspicion of using a powerful supercomputer for Bitcoin mining. Authorities discovered that scientists had abused their access to some of Russia’s most powerful supercomputers by rigging up a secret Bitcoin-mining data center. Bitcoin mining is extremely resource-intensive, and some miners are always seeking new ways to outsource the expense onto other people’s infrastructure.
This case is an example of how insiders can misuse company equipment.

6. The employee who fell for a phishing attack
While we’ve seen a spike in phishing and spear phishing attacks since the outbreak of COVID-19, these aren’t new threats. One example involves an email that was sent to a senior staff member at Australian National University. The result? 700 megabytes of data were stolen. This data was related to both staff and students and included details like names, addresses, phone numbers, dates of birth, emergency contact numbers, tax file numbers, payroll information, bank account details, and student academic records.

7. The work-from-home employees duped by a vishing scam
Cybercriminals saw an opportunity when many of Twitter’s staff started working from home. One cybercrime group conducted one of the most high-profile hacks of 2020 — knocking 4% off Twitter’s share price in the process. In July 2020, after gathering information on key home-working employees, the hackers called them up and impersonated Twitter IT administrators. During these calls, they successfully persuaded some employees to disclose their account credentials. Using this information, the cybercriminals logged into Twitter’s admin tools, changed the passwords of around 130 high-profile accounts — including those belonging to Barack Obama, Joe Biden, and Kanye West — and used them to conduct a Bitcoin scam. This incident put “vishing” (voice phishing) on the map, and it reinforces what all cybersecurity leaders know — your company must apply the same level of cybersecurity protection to all its employees, whether they’re working on your premises or in their own homes. Want to learn more about vishing? We cover it in detail in this article: Smishing and Vishing: What You Need to Know About These Phishing Attacks.

8. The ex-employee who got two years for sabotaging data
The case of San Jose resident Sudhish Kasaba Ramesh serves as a reminder that it’s not just your current employees who pose a potential internal threat—but your ex-employees, too. Ramesh received two years’ imprisonment in December 2020 after a court found that he had accessed Cisco’s systems without authorization, deploying malware that deleted over 16,000 user accounts and caused $2.4 million in damage. The incident emphasizes the importance of properly restricting access controls—and locking employees out of your systems as soon as they leave your organization.

9. The employee who took company data to a new employer for a competitive edge
This incident involves two of the biggest tech players: Google and Uber. In 2015, a lead engineer at Waymo, Google’s self-driving car project, left the company to start his own self-driving truck venture, Otto. But, before departing, he exfiltrated several trade secrets, including diagrams and drawings related to simulations, radar technology, source code snippets, PDFs marked as confidential, and videos of test drives. How? By downloading 14,000 files onto his laptop directly from Google servers. Otto was acquired by Uber after a few months, at which point Google executives discovered the breach. In the end, Waymo was awarded $245 million worth of Uber shares and, in March, the employee pleaded guilty.

10. The employee who stole a hard drive containing HR data
Coca-Cola was forced to issue data breach notification letters to around 8,000 employees after a worker stole a hard drive containing human resources records. Why did this employee steal so much data about his colleagues? Coca-Cola didn’t say.
But we do know that the employee had recently left his job—so he may have seen an opportunity to sell or misuse the data once outside of the company. Remember—network and cybersecurity are crucial, but you need to consider whether insiders have physical access to data or assets, too.

11. The employees leaking customer data
Toward the end of October 2020, an unknown number of Amazon customers received an email stating that their email address had been “disclosed by an Amazon employee to a third-party.” Amazon said that the “employee” had been fired — but the story changed slightly later on, according to a statement shared by Motherboard which referred to multiple “individuals” and “bad actors.” So how many customers were affected? What motivated the leakers? We still don’t know. But this isn’t the first time that the tech giant’s own employees have leaked customer data. Amazon sent out a near-identical batch of emails in January 2020 and November 2018. If there’s evidence of systemic insider exfiltration of customer data at Amazon, this must be tackled via internal security controls.

12. The employee offered a bribe by a Russian national
In September 2020, a Nevada court charged Russian national Egor Igorevich Kriuchkov with conspiracy to intentionally cause damage to a protected computer. The court alleges that Kriuchkov attempted to recruit an employee of Tesla’s Nevada Gigafactory. Kriuchkov and his associates reportedly offered a Tesla employee $1 million to “transmit malware” onto Tesla’s network via email or USB drive to “exfiltrate data from the network.” The Kriuchkov conspiracy was disrupted before any damage could be done. But it wasn’t the first time Tesla had faced an insider threat. In June 2018, CEO Elon Musk emailed all Tesla staff to report that one of the company’s employees had “conducted quite extensive and damaging sabotage to [Tesla’s] operations.” With state-sponsored cybercrime syndicates wreaking havoc worldwide, we could soon see further attempts to infiltrate companies. That’s why it’s crucial to run background checks on new hires and ensure an adequate level of internal security.

13. The ex-employee who offered 100 GB of company data for $4,000
Police in Ukraine reported in 2018 that a man had attempted to sell 100 GB of customer data to his ex-employer’s competitors—for the bargain price of $4,000. The man allegedly used his insider knowledge of the company’s security vulnerabilities to gain unauthorized access to the data. This scenario presents another challenge to consider when preventing insider threats—you can revoke ex-employees’ access privileges, but they might still be able to leverage their knowledge of your systems’ vulnerabilities and weak points.

14. The employee who accidentally sent an email to the wrong person
Misdirected emails happen more often than most think. In fact, Tessian platform data shows that at least 800 misdirected emails are sent every year in organizations with 1,000 employees. But what are the implications? It depends on what data has been exposed. In one incident in mid-2019, the private details of 24 NHS employees were exposed after someone in the HR department accidentally sent an email to a team of senior executives. This included mental health information and surgery information. While the employee apologized, the exposure of PII like this can lead to medical identity theft and even physical harm to the patients. We outline even more consequences of misdirected emails in this article.
15. The employee who accidentally misconfigured access privileges
Just last month, NHS coronavirus contact-tracing app details were leaked after documents hosted in Google Drive were left open for anyone with a link to view. Worse still, links to the documents were included in several others published by the NHS. These documents – marked “SENSITIVE” and “OFFICIAL” – contained information about the app’s future development roadmap and revealed that officials within the NHS and Department of Health and Social Care are worried about the app’s reliability and that it could be open to abuse that leads to public panic.

16. The security officer who was fined $316,000 for stealing data (and more!)
In 2017, a California court found ex-security officer Yovan Garcia guilty of hacking his ex-employer’s systems to steal its data, destroy its servers, deface its website, and copy its proprietary software to set up a rival company. The cybercrime spree was reportedly sparked after Garcia was fired for manipulating his timesheet. Garcia received a fine of over $316,000 for his various offenses. The sheer amount of damage caused by this one disgruntled employee is pretty shocking. Garcia stole employee files, client data, and confidential business information; destroyed backups; and even uploaded embarrassing photos of his one-time boss to the company website.

17. The employee who sent company data to a personal email account
We mentioned earlier that employees oftentimes email company data to themselves to work over the weekend. But, in this incident, an employee at Boeing shared a spreadsheet with his wife in hopes that she could help solve formatting issues. While this sounds harmless, it wasn’t. The personal information of 36,000 employees was exposed, including employee ID data, places of birth, and accounting department codes.
How common are Insider Threats?
Incidents involving Insider Threats are on the rise, with a marked 47% increase over the last two years. This isn’t trivial, especially considering the global average cost of an Insider Threat is $11.45 million, up from $8.76 million in 2018.

Who’s more culpable, Negligent Insiders or Malicious Insiders?
- Negligent Insiders (like those who send emails to the wrong person) are responsible for 62% of all incidents
- Negligent Insiders who have their credentials stolen (via a phishing attack or physical theft) are responsible for 25% of all incidents
- Malicious Insiders are responsible for 14% of all incidents
It’s worth noting, though, that credential theft is the most detrimental to an organization’s bottom line, costing an average of $2.79 million.

Which industries suffer the most?
The “what, who, and why” behind incidents involving Insider Threats vary greatly by industry. For example, customer data is most likely to be compromised by an Insider in the Healthcare industry, while money is the most common target in the Finance and Insurance sector. But who exfiltrated the data is just as important as what data was exfiltrated. The sectors most likely to experience incidents perpetrated by trusted business partners are:
- Finance and Insurance
- Federal Government
- Entertainment
- Information Technology
- Healthcare
- State and Local Government
Overall, though, when it comes to employees misusing their access privileges, the Healthcare and Manufacturing industries experience the most incidents. On the other hand, the Public Sector suffers the most from lost or stolen assets and also ranks in the top three for miscellaneous errors (for example, misdirected emails), alongside Healthcare and Finance. You can find even more stats about Insider Threats (including a downloadable infographic) here.

The bottom line: Insider Threats are a growing problem. We have a solution.
Read Blog Post
Human Layer Security
6 Insights From Tessian Human Layer Security Summit
By Maddie Rosenthal
Thursday, June 3rd, 2021
That’s a wrap! A big “thank you” to our incredible line-up of speakers, panelists, sponsors, and – of course – attendees of Tessian’s fifth Human Layer Security Summit. Security leaders shared advice on scaling enterprise security programs, explained how they’ve successfully re-framed cybersecurity as a business enabler, and offered tips on how to prevent breaches. If you’re looking for a recap, we’ve identified one key takeaway from each session. You can also watch the Summit (and previous Summits) on-demand for free here. Want to be involved next time? Email us: marketing@tessian.com

1. The average person makes 35,000 decisions a day – one mistake could have big consequences
While most decisions you make won’t impact your company’s cybersecurity, some can. For example, sending an email to the wrong person, misconfiguring a firewall, or clicking on a malicious link. And these mistakes happen more often than you might think: 95% of breaches are caused by human error. That’s why security leaders implement policies, offer training, and deploy technology. But did you know there’s one solution that prevents human error by offering automatic threat prevention, training, and risk analytics all in one platform? Watch the full session below to hear more about Tessian Human Layer Risk Hub, or download the datasheet for a more detailed look at the product.
Further reading:
- Research: Why Do People Make Mistakes?
- What is Human Layer Security?
- Product Datasheet: Tessian Platform Overview

2. The best cybersecurity strategies combine experience, threat intelligence, and business intelligence
If you’re looking for practical advice, check out this session. Bobby Ford, Senior Vice President and CSO at Hewlett-Packard, and James McQuiggan, Security Awareness Advocate at KnowBe4, discuss cybersecurity strategies they recommend for the enterprise. You might be surprised to find out that technology wasn’t the focus of the conversation. Relationships were. By listening to and understanding your people, you can build better relationships, ensure alignment with the company’s mission, vision, and values, and influence real change. “You have to assess the overall culture and then develop a strategy that’s commensurate with that culture,” Bobby explained. For more insights – including a personal anecdote about how implementing a security strategy is like teaching your children to walk – watch the full session.
Further reading:
- 7 Fundamental Problems With Security Awareness Training
- Hey CISOs! This Framework Will Help You Build Better Relationships

3. Some of the year’s biggest hacks have one thing in common: human error
Who better to discuss this year’s biggest hacks than a hacker? Samy Kamkar, Renowned Ethical Hacker, joined us to break down the SolarWinds and Twitter breaches and offer advice on how to prevent similar incidents. To start, he explained that in both hacks, social engineering played a role. That’s why people are the key to a strong and effective cybersecurity strategy. Sure, automated detection and prevention systems can help. So can password managers. But, at the end of the day, employees are the last line of defense, and hackers don’t attack machines. They attack people. According to Samy, “We don’t have time to implement every possible safeguard. That’s why we have to lean on training.” Watch the full session for more insights, including Samy’s book recommendation and why he doesn’t trust MFA.
Further reading:
- Tessian Threat Intelligence and Research
- Real World Examples of Social Engineering
- Research: How to Hack a Human

4. DLP is boring, daunting, and complex… but it doesn’t have to be
Punit Rajpara, Global Head of IT and Business Systems at GoCardless, has a strong track record of leading IT and security teams at start-ups, with a resume that includes both Uber and WeWork. For him, empowerment, enablement, and trust are key and should be reflected in an organization’s security strategy. That means rule-based DLP solutions – which he deemed “boring, daunting – and complex” – just don’t cut it. Tessian does, though. “Security is often looked at as a big brother, we’re-watching-everything-you-do sort of thing. At GoCardless, Tessian has changed that perception and is instead putting the power in the hands of the users,” Punit explained. To learn more about why Punit chose Tessian and how he uses the platform today, watch the full session below.
Further reading:
- Customer Story: How Tessian Gave GoCardless Better Control and Visibility of Their Email Threats
- Research: Data Loss Prevention in Financial Services
- Product Datasheet: Tessian Platform Overview

5. Learning is only effective when it’s an ongoing activity
When asked what was top of mind for her when it comes to cybersecurity, Katerina Sibinovska, CISO at Intertrust Group, simply said “data loss”. I think most would agree. But, as we all know, data loss can be the result of just about anything: lack of awareness, negligence, malicious intent… So, how does she prevent data loss? By balancing technical and non-technical controls and building a strong security culture. And, as she pointed out, annual (and even quarterly!) training isn’t enough to build that strong security culture. “It can’t just be a tickbox exercise,” she said. Instead, meet employees where they are. Add context. Engage and reward them. Support them rather than blame them. To learn more about how she’s reduced data loss – and what role Tessian plays – watch the full session.
Further reading:
- Why Do the World’s Top Financial Institutions Trust Tessian?
- Pros and Cons of Phishing Awareness Training
- Product Datasheet: Tessian Human Layer Risk Hub

6. People don’t just want to know WHAT to do, they want to know WHY
You don’t want to miss this Q&A. Jerry Perullo, CISO at ICE | New York Stock Exchange, has over 25 years of experience in cybersecurity and shares his thoughts on the role of the CISO, how to get buy-in, and why training is (generally) a “time-suck” for employees. His advice? Don’t just tell people what they need to do in order to handle data safely, tell them why they need to do it. What are the legal obligations? What would the consequences be? This will help you re-frame cybersecurity as an enabler instead of an obstacle. Watch the full session for more tips from this cybersecurity trailblazer.
Further reading:
- CEO’s Guide to Data Protection and Compliance
- 7 Fundamental Problems With Security Awareness Training

You’re invited to the next Summit! Subscribe to our weekly newsletter to be the first to hear about events, product updates, and new research.
Read Blog Post
Engineering Team
After 18 Months of Engineering OKRs, What Have We Learned?
By Andy Smith
Thursday, June 3rd, 2021
We have been using OKRs (Objectives and Key Results) at Tessian for over 18 months now, including in the Engineering team. They’ve grown into an essential part of the organizational fabric of the department, but it wasn’t always this way. In this article I will share a few of the challenges we’ve faced, lessons we’ve learned, and some of the solutions that have worked for us. I won’t try and sell you on OKRs or explain what an OKR is or how they work exactly; there’s lots of great content that already does this!

Getting started
When we introduced OKRs, there were about 30 people in the Engineering department. The complexity of the team was just reaching the tipping point where planning becomes necessary to operate effectively. We had never really needed to plan before, so we found OKR setting quite challenging, and we found ourselves taking a long time to set what turned out to be bad OKRs. It was tempting to think that this pain was caused by OKRs themselves. On reflection today, however, it’s clear that OKRs were merely surfacing an existing pain that would emerge at some point anyway. If teams can’t agree on an OKR, they’re probably not aligned about what they are working on. OKRs surfaced this misalignment and caused a little pain during the setting process, which prevented a much larger pain during the quarter, when the misalignment would have had a bigger impact.

The Key Result part of an OKR is supposed to describe the intended outcome in a specific and measurable way. This is sometimes straightforward, typically when a very clear metric is used, such as revenue or latency or uptime. However, in Engineering there are often KRs that are very hard to write well. It’s too easy to end up with a bunch of KRs that drive us to ship a bunch of features on time, but have no aspect of quality or impact. The other pitfall is aiming for a very measurable outcome that is based on a guess, which is what happens when there is no baseline to work from. Again, these challenges exist without OKRs, but without OKRs they may never precipitate into a conversation about what a good outcome is for a particular deliverable. Unfortunately we haven’t found the magic wand that makes this easy, and we still have some binary “deliver the feature” key results every quarter, but these are less frequent now. We will often set a KR to ship a feature and set up a metric in Q1, and then set a target for the metric in Q2 once we have a baseline. Or if we have a lot of delivery KRs, we’ll pull them out of OKRs altogether and zoom out to set the KR around their overall impact.

An eternal debate in the OKR world is whether to set OKRs top-down (leadership dictates the OKRs and teams/individuals fill out the details), bottom-up (leadership aggregates the OKRs of teams and individuals into something coherent), or some mixture of the two. We use a blend of the two, and will draft department OKRs as a leadership team and then iterate a lot with teams, sometimes changing them entirely. This takes time, though. Every iteration uncovers misalignment, capacity, stakeholder or research issues that need to be addressed. We’ve sometimes been frustrated and rushed this through because it felt like a waste of time, but when we’ve done this, we’ve just ended up with bigger problems later down the road that are harder to solve than setting decent OKRs in the first place.
The lesson we’ve learned is that effort, engagement with teams, and old-fashioned rigor are required when setting OKRs, so we budget 3-4 weeks for the whole process.

Putting OKRs into Practice
The last three points have all been about setting OKRs, but what about actually using them day to day? We’ve learned two things: the importance of allowing a little flex, and how a frequent – but light – process is needed to get the most out of your OKRs.

First, flex. Our OKRs are quarterly, but sometimes we need to set a 6-month OKR because it just makes more sense! We encourage this to happen. We don’t obsess about making OKRs ladder up perfectly to higher-level OKRs. It’s nice when they do, but if this is a strict requirement, then we find that it’s hard to make OKRs that actually reflect the priorities of the quarter. Sometimes a month into the quarter, we realize we set a bad OKR or wrote it in the wrong way. A bit of flexibility here is important, but not too much. It’s important to learn from planning failures, but it is probably more important that OKRs reflect teams’ actual priorities and goals, or nobody is going to take them seriously. So tweak that metric or cancel that OKR if you really need to, but don’t go wild.

Finally, process. If we don’t actively check in on OKRs weekly, we tend to find that all the value we get from OKRs is diluted. Course corrections come too late or worries go unsolved for too long. To keep this sustainable, we do this very quickly. I have an OKR check-in on the agenda for all my 1-1s with direct reports, and we run a 15-minute group meeting every week with the Product team where each OKR owner flags any OKRs that are off track, and we work out what we need to do to resolve them. Often this causes us to open a Slack channel or draft a document to solve the issue outside of the meeting so that we stick to the strict 15-minute time slot.

Many of these lessons have come from suggestions from the team, so my final tip is that if you’re embarking on using OKRs in your Engineering team, or if you need to get them back on track, make sure you set some time aside to run a retrospective. This invites your leaders and managers to think about the mechanics of OKRs and planning, and they usually have the best ideas on how to improve things.
Read Blog Post
Tessian Culture
Building a Customer Success Team: 5 Pillars of Success
By Henry Trevelyan Thomas
Wednesday, June 2nd, 2021
Customer Success (CS) is on fire. LinkedIn recently ranked CS as the 6th fastest-growing role, 80% of CS teams saw growth last year, and investors can’t get enough of net revenue retention (NRR). It’s never been more important to make your customers successful, but it’s also not always easy. In my journey growing Tessian from a 5 to a 150 person company, I am learning the importance of focus in creating the right environment for your CS team and customers to succeed. Focus and attention needs to start at a company level. At Tessian, we’re lucky that long-term customer health and value has always been at the forefront of our founders’ minds; so much so that it became codified in one of our core company values:
With a tick against company buy-in, my next learning was the importance of championing this value internally and with customers. To drive this focus, we developed a CS strategy built on 5 pillars. I’ve outlined our pillars below in the hope they may help other growing CS teams.
1. People
It all starts with your people. It’s no longer enough to have a great product; it’s about how you deliver it. To turn your CS motion into an extension of your customers’ teams and into a key differentiator for your company (the holy grail), you need the best people motivated to push new boundaries. That’s why this pillar always comes first for us. It’s all very well getting the best people in seat (we use interview scorecards based on our 5 pillars to source, interview and hire for the right competency and culture fit), but you also need to create an environment for people to succeed. Doing so is a team commitment, which requires everyone to commit to helping each other thrive. One of the ways we’ve been doing this is through our “CS People Committee”, who meet monthly to review team sentiment and feedback from our employee engagement tool (Peakon). The committee works to ensure we are continually prioritizing the changes and refinements that are needed to ensure the team continues to remain a great place to work. They recently identified “career growth” as a key area for improvement, leading to the team collaborating to implement growth frameworks for all CS roles 🙌 ! We measure team success and progress using Peakon scores, but we all know the real impact of an engaged team is reflected in how our customers are doing, which is when we turn to our next pillar…

2. Customer Health
Having a deep understanding of your customers’ health is critical. Done well, the business impact for the team (proactive engagement, better understanding of value, more focused interactions) and customers (faster time to value, better feedback loops, earlier course-correcting) is huge. The catch is that defining and measuring customer health isn’t easy; it is a nuanced, multi-faceted, and very contextual exercise. Until we embarked on our journey of contextualizing customer health at Tessian, we relied on gut feel and experience to diagnose customer health issues. This wasn’t going to scale with our rapidly growing customer base, so we built a predictive customer health score in Gainsight, bringing together qualitative customer experience variables (relationship strength, sentiment, advocacy, etc.) and quantitative product outcomes (time to deploy, product performance, portal engagement, etc.) into one place. This was a great start, but we soon realized our health score was an internally controlled score, which lacked external customer validation. Enter net promoter score (NPS). We built an adaptive NPS program based on the customer’s stage in their journey with us, giving us a constant stream of live customer feedback. Our NPS augments our predictive health score and has now become the north star metric for this particular pillar.

3. Customer Growth
Net Revenue Retention (NRR) is fast becoming one of the most important metrics in SaaS, and CS is the new growth engine driving it. It’s a great measure of both product and customer success. Get it right (>100%) and company valuations soar. Get it wrong (<100%) and you’re going to struggle to survive. We identified very early that onboarding was one of the key areas we needed to get right to positively impact our NRR. The quicker customers onboard, the quicker we’re showing value (for example, stopping breaches), and the more opportunities we have to show them how else we can help (by introducing them to our expanding product portfolio).
Through a combination of creating a dedicated onboarding team (fewer distractions for CSMs) and working closely with our product teams to drive automation in our product (post technical onboarding, our machine learning-led approach means little product configuration is required), we focused our CSMs on showing customers where there's opportunity to deepen their protection and increase business value with Tessian. Our quarterly account planning cycles help keep everyone (onboarding, CSMs, AMs, leadership) on the same page about how customers are progressing through their journeys and where we specifically need to focus our efforts.

As a result of this pillar, our NRR rate is now a key measure of success not only for our Customer Growth pillar but for the company as a whole, with the CS team tracking against churn and expansion targets on a weekly basis.

Our growth focus doesn't stop at NRR. Sales Engineering also falls under the CS umbrella due to our proximity and intersection with Product, Engineering and, most importantly, our customers. The combination of anecdotal customer feedback and a deep understanding of the product and its use cases leaves us with a formidable Sales Engineering team focused on bringing more customers and revenue into Tessian 👊.

4. Customer Community

We very consciously set out to make a customer's decision to buy Tessian about more than just buying our products; they're also joining the Human Layer Security movement (#SecureTheHumanLayer), which helps amplify their voice, profession and careers. By creating a community of speakers, advocates and content creators, we're not only engaging with our customers at a deeper level, we're also spreading the word and building advocacy, which in turn helps us generate more pipeline and gets us closer to our mission of securing the human layer for enterprises across the globe.

We've repeatedly seen the power of putting our customers at the center of our community: our flagship event, the Human Layer Security Summit, has gone from 30 to over 2,500 attendees; our podcast was ranked among the best data breach podcasts after our debut season launched; and our customer-led product webinar series has been a big hit. So we've decided to keep doubling down on our customers' involvement.

To incentivize the team to build and sustain a healthy customer community, we gamified customer involvement through an "advocacy events" score, with CSMs earning points for customer engagement in our community (referrals, speakers, case studies, podcasts, reviews, etc.). Going forward, we'll be tracking which advocacy events lead to the most pipeline generation and acceleration of value for customers because, after all, that's a core objective of our community.
5. Product Enablement

Our product philosophy is based on the fact that security teams have too much to do and too little time; hence we lean heavily on automation and on reducing time to manage. To maintain our product experience, we are very aware that we need to carry that philosophy through in our onboarding and support interactions. To do so, we invested early (way before most people said we needed to) in building, maintaining and promoting a HelpCenter. This continues to be a great resource for educating both customers and new team members dealing with onboarding or technical issues. We learned soon after release that text-heavy articles are often counterproductive and get poor uptake, so we pivoted to snappy articles, explainer videos and lots of diagrams.

The feedback on the HelpCenter has been amazing, and it has played a big role in influencing our key product enablement metrics: CSAT and the percentage of users onboarded.

Defining your pillars of success

We're still on a journey of continually reviewing and refining our pillars (we have so much more to learn), but having this framework has allowed us to focus on the things that matter most for our company and customers. Here are some top tips for defining your own pillars:

Keep them simple – anyone should be able to glance at your pillars and quickly understand what your CS team is about.

Align them to business priorities – what's top of mind for your company in the next 12-18 months? Align your pillars to that. The more relevant they are to the company story, the more they'll resonate with execs, leadership and other teams. And be sure to build a routine of re-examining and refining the pillars, because as the business changes, so will your team's focus.

Make sure they're measurable – pick a "north star" KPI to measure each pillar against. You'll likely have a ton of supporting KPIs in each pillar, but the north star KPI will keep everyone focused on what's important.

Don't pick too many! – we've learned that 5 to 6 is the sweet spot. Any more and you risk spreading your focus too thinly; any fewer and you're unlikely to be effective in helping your company achieve its mission.

I've only scratched the surface in this article, but I hope at least some of the lessons we've learned at Tessian can be helpful in your journey. It can sometimes be daunting to condense your team strategy into a few key pillars (not least because CS differs so wildly from company to company), but get them right and your team will have the focus, clarity and ownership it requires to thrive 🤝.
Read Blog Post
Human Layer Security, Spear Phishing
Is Your Office 365 Email Secure?
By Maddie Rosenthal
Wednesday, June 2nd, 2021
In July last year, Microsoft took down a massive fraud campaign that used knock-off domains and malicious applications to scam its customers in 62 countries around the world. But this wasn't the first time a successful phishing attack was carried out against Office 365 (O365) customers. In December 2019, the same hackers gained unauthorized access to hundreds of Microsoft customers' business email accounts. According to Microsoft, this scheme "enabled unauthorized access without explicitly requiring the victims to directly give up their login credentials at a fake website…as they would in a more traditional phishing campaign."

Why are O365 accounts so vulnerable to attacks?

Exchange Online/Outlook – the cloud email application for O365 users – has always been a breeding ground for phishing, malware, and highly targeted data breaches. Though Microsoft has been ramping up its O365 email security features with Advanced Threat Protection (ATP) as an additional layer to Exchange Online Protection (EOP), both tools have failed to meet expectations because of their inability to stop newer and more innovative social engineering attacks, business email compromise (BEC), and impersonations. One of the biggest challenges with ATP in particular is its time-of-click approach, which requires the user to click on URLs within emails to activate analysis and remediation.

Is O365 ATP enough to protect my email?

We believe that O365's native security controls do protect users against bulk phishing scams, spam, malware, and domain spoofing. These tools are great when it comes to stopping broad-based, high-volume, low-effort attacks – they offer baseline protection. For example, you don't need to add signature-based malware protection if you have EOP/ATP for your email, as these are proven to be quite effective against such attacks. These tools employ the same approach used by network firewalls and email gateways – they rely on a repository of millions of signatures to identify 'known' malware.

But this is a big problem, because the threat landscape has changed in the last several years. Email attacks have mutated to become more sophisticated and targeted, and hackers exploit user behavior to launch surgical and highly damaging campaigns on people and organizations. Attackers use automation to make small, random modifications to existing malware, changing its signature, and use transformation techniques to bypass these native O365 security tools. Unsuspecting – and often untrained – users fall prey to socially engineered attacks that mimic O365 protocols, domains, notifications, and more. See below for a convincing example.
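As a side note on the signature-matching limitation described above, here is a minimal, hypothetical sketch – not how EOP/ATP is actually implemented – showing why hash-based signature checks miss modified payloads: a single changed byte produces a completely different hash, so the "new" sample is no longer recognized. The payload and hash set are made up for illustration.

```python
# Illustrative sketch only: why signature (hash) matching misses modified malware.
import hashlib

KNOWN_MALWARE_HASHES = set()  # a real gateway would hold millions of known signatures


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


# Register a "known" malicious sample (hypothetical bytes).
original_payload = b"...malicious macro bytes..."
KNOWN_MALWARE_HASHES.add(sha256(original_payload))


def is_flagged(attachment: bytes) -> bool:
    """Signature check: flag only if the exact hash is already known."""
    return sha256(attachment) in KNOWN_MALWARE_HASHES


# The unmodified sample is caught...
print(is_flagged(original_payload))            # True

# ...but appending one junk byte changes the hash entirely, so it slips through.
print(is_flagged(original_payload + b"\x00"))  # False
```

This is exactly the gap that automated, small, random modifications exploit: the attack's behavior is unchanged, but its signature no longer matches anything in the repository.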
It is because such loopholes exist in O365 email security that Microsoft continues to be one of the most breached brands in the world.

What are the consequences of a compromised account?

There is a lot at stake if an account is compromised. With ~180 million active O365 email accounts, organizations could find themselves at risk of data loss or a breach, which means revenue loss, damaged reputation, customer churn, disrupted productivity, regulatory fines, and penalties for non-compliance. This means they need to quickly move beyond relying on largely rule- and reputation-based O365 email filters to more dynamic ways of detecting and mitigating email-originated risks.

Enter machine learning and behavioral analysis. There has been a surge in the availability of platforms that use machine learning algorithms. Why? Because these platforms detect and mitigate threats in ways other solutions can't and help enterprises improve their overall security posture. Instead of relying on static rules to predict human behavior, solutions powered by machine learning actually adapt and evolve in tandem with relationships and circumstances. Machine learning algorithms "study" the email behavior of users, learn from it, and – finally – draw conclusions from it.

But not all ML platforms are created equal. There are varying levels of complexity (going beyond IP addresses and metadata to natural language processing); algorithms learn to detect behavioral anomalies at different speeds (static vs. real-time); and they can achieve different scales (the number of data points they can simultaneously study and analyze).

How does Tessian prevent threats that O365 security controls miss?

Tessian's Human Layer Security platform is designed to complement the rule-based and sandbox approaches of O365 ATP by detecting and stopping newer and previously unknown attacks from external sources, domain / brand / service impersonations, and data exfiltration by internal actors. Learn more about why rule-based approaches to spear phishing attacks fail.

By dynamically analyzing current and historical data, communication styles, language patterns, and employee project relationships both within and outside the organization, Tessian generates contextual employee relationship graphs to establish a baseline of normal behavior. By doing this, Tessian turns both your employees and your email data into an organization's biggest defenses against inbound and outbound email threats. Conventional tools focus on just securing the machine layer – the network, applications, and devices. By uniquely focusing on the human layer, Tessian can make clear distinctions between legitimate and malicious email interactions and warn users in real-time to reinforce training and policies and promote safer behavior.

How can O365 ATP and Tessian work together?

Often, customers ask us which approach is better: the conventional, rule-based approach of the O365 native tools, or Tessian's, powered by machine learning? The answer is that each has its unique place in building a comprehensive email security strategy for O365. But no organization that deals with sensitive, critical, and personal data can afford to overlook the benefits of an approach based on machine learning and behavioral analysis. A layered approach that leverages the tools offered by O365 for high-volume attacks, reinforced with next-gen tools for detecting the unknown and evasive ones, would be your best bet.
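To give a rough feel for what "establishing a baseline of normal behavior" can mean in practice, here is a deliberately simplified, hypothetical sketch – not Tessian's actual model – that learns which senders each recipient normally hears from and scores how unusual a new sender is. The addresses, the 5% rarity cutoff, and the field names are illustrative assumptions.

```python
# Minimal sketch of a sender-frequency baseline for anomaly scoring (illustrative only).
from collections import Counter, defaultdict


def build_baseline(historical_emails):
    """historical_emails: iterable of (recipient, sender) pairs from mailbox history."""
    baseline = defaultdict(Counter)
    for recipient, sender in historical_emails:
        baseline[recipient][sender] += 1
    return baseline


def anomaly_score(baseline, recipient, sender):
    """0.0 = well-established correspondent, 1.0 = never seen before."""
    seen = baseline[recipient][sender]
    total = sum(baseline[recipient].values()) or 1
    # Crude rarity measure: senders below ~5% of the recipient's traffic look unusual.
    return 1.0 - min(seen / (total * 0.05), 1.0)


history = [("alice@acme.com", "bob@supplier.com")] * 40 + [("alice@acme.com", "cfo@acme.com")] * 10
baseline = build_baseline(history)

print(anomaly_score(baseline, "alice@acme.com", "bob@supplier.com"))   # 0.0: frequent, expected sender
print(anomaly_score(baseline, "alice@acme.com", "cf0@acm3-corp.com"))  # 1.0: never-seen lookalike
```

A production system would obviously look at far more than sender frequency (language, relationships, domain age, and so on), but the principle is the same: compare each new email against what is normal for that specific recipient.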
A very short implementation time, coupled with the algorithm's ability to 'learn' from historical email data over the last year – all within 24 hours of deployment – means Tessian could give O365 users just the edge they need to combat modern-day email threats.
Read Blog Post
Human Layer Security, DLP, Data Exfiltration
Insider Threat Statistics You Should Know: Updated 2021
By Maddie Rosenthal
Tuesday, June 1st, 2021
Between 2018 and 2020, there was a 47% increase in the frequency of incidents involving Insider Threats. This includes malicious data exfiltration and accidental data loss. The latest research, from the Verizon 2021 Data Breach Investigations Report, suggests that Insiders are responsible for around 22% of security incidents.

Why does this matter? Because these incidents cost organizations millions, are leading to breaches that expose sensitive customer, client, and company data, and are notoriously hard to prevent. In this article, we'll explore:

How often these incidents are happening
What motivates Insider Threats to act
The financial impact Insider Threats have on larger organizations
The effectiveness of different preventive measures

You can also download this infographic with the key statistics from this article. If you know what an Insider Threat is, click here to jump down the page. If not, you can check out some of these articles for a bit more background:

What is an Insider Threat? Insider Threat Definition, Examples, and Solutions
Insider Threat Indicators: 11 Ways to Recognize an Insider Threat
Insider Threats: Types and Real-World Examples
How frequently are Insider Threat incidents happening?

As we've said, incidents involving Insider Threats increased by 47% between 2018 and 2020. A 2021 report from Cybersecurity Insiders also suggests that 57% of organizations feel insider incidents have become more frequent over the past 12 months. But the frequency of incidents varies industry by industry.

The Verizon 2021 Data Breach Investigations Report offers a comprehensive overview of different incidents in different industries, with a focus on patterns, actions, and assets. Verizon found that:

The Healthcare and Finance industries experience the most incidents involving employees misusing their access privileges
The Healthcare and Finance industries also suffer the most from lost or stolen assets
The Finance and Public Administration sectors experience the most "miscellaneous errors" (including misdirected emails) – with Healthcare in a close third place
There are also several different types of Insider Threats, and the "who and why" behind these incidents can vary. According to one study:

Negligent Insiders are the most common and account for 62% of all incidents
Negligent Insiders who have their credentials stolen account for 25% of all incidents
Malicious Insiders are responsible for 14% of all incidents

Looking at Tessian's own platform data, Negligent Insiders may be responsible for even more incidents than most expect. On average, 800 emails are sent to the wrong person every year in companies with 1,000 employees. This is 1.6x more than IT leaders estimate.

Malicious Insiders are likely responsible for more incidents than expected, too. Between March and July 2020, 43% of reported security incidents were caused by malicious insiders. We should expect this number to increase. Around 98% of organizations say they feel some degree of vulnerability to Insider Threats. Over three-quarters of IT leaders (78%) think their organization is at greater risk of Insider Threats if their company adopts a permanent hybrid working structure. Which, by the way, the majority of employees would prefer.

What motivates Insider Threats to act?

When it comes to the "why", Insiders – specifically Malicious Insiders – are often motivated by money, a competitive edge, or revenge. But, according to one report, there is a range of reasons why malicious Insiders act. Some just do it for fun.

But we don't always know exactly "why". For example, Tessian's own survey data shows that 45% of employees download, save, send, or otherwise exfiltrate work-related documents before leaving a job or after being dismissed. While we may be able to infer that they're taking spreadsheets, contracts, or other documents to impress a future or potential employer, we can't know for certain.

Note: Incidents like this happen most frequently in competitive industries like Financial Services and Business, Consulting, & Management. This supports our theory.
How much do incidents involving Insider Threats cost?

The cost of Insider Threat incidents varies based on the type of incident, with incidents involving stolen credentials causing the most financial damage. But, across the board, the cost has been steadily rising. Likewise, there are regional differences in the cost of Insider Threats, with incidents in North America costing the most – almost twice as much as those in Asia-Pacific.

Overall, the average global cost has increased 31% over the last two years, from $8.76 million in 2018 to $11.45 million in 2020, and the largest chunk goes towards containment, remediation, incident response, and investigation.

But what about prevention?

How effective are preventative measures?

As the frequency of Insider Threat incidents continues to increase, so does investment in cybersecurity. But what solutions are available, and which solutions do security, IT, and compliance leaders trust to detect and prevent data loss within their organizations?

A 2021 report from Cybersecurity Insiders suggests that a shortfall in security monitoring might be contributing to the prevalence of Insider Threat incidents. Asked whether they monitor user behavior to detect anomalous activity:

Just 28% of firms responded that they use automation to monitor user behavior
14% of firms don't monitor user behavior at all
28% of firms said they only monitor access logs
17% of firms only monitor specific user activity under specific circumstances
10% of firms only monitor user behavior after an incident has occurred

And, according to Tessian's research report, The State of Data Loss Prevention, most rely on security awareness training, followed by company policies/procedures and machine learning/intelligent automation.
But incidents actually happen more frequently in organizations that offer training most often, and, while the majority of employees say they understand company policies and procedures, comprehension alone doesn't prevent malicious behavior.

That's why many organizations rely on rule-based solutions. But those often fall short. Not only are they admin-intensive for security teams, they're also blunt instruments that often prevent employees from doing their jobs while still failing to prevent data loss from Insiders.

So, how can you detect incidents involving Insiders in order to prevent data loss and eliminate the cost of remediation? Machine learning.

How does Tessian detect and prevent Insider Threats?

Tessian turns an organization's email data into its best defense against inbound and outbound email security threats. Powered by machine learning, our Human Layer Security technology understands human behavior and relationships, enabling it to automatically detect and prevent anomalous and dangerous activity:

Tessian Enforcer detects and prevents data exfiltration attempts
Tessian Guardian detects and prevents misdirected emails
Tessian Defender detects and prevents spear phishing attacks

Importantly, Tessian's technology automatically updates its understanding of human behavior and evolving relationships through continuous analysis and learning of the organization's email network. Oh, and it works silently in the background, meaning employees can do their jobs without security getting in the way. Interested in learning more about how Tessian can help prevent Insider Threats in your organization? You can read some of our customer stories here or book a demo.
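For a sense of the kind of outbound signal involved, here is a deliberately simple, hypothetical sketch – not Tessian Enforcer's actual logic, which is ML-driven rather than rule-based – that flags an email with an attachment going to a freemail address or to a domain the sender has never emailed before. The domain list, addresses, and function names are illustrative assumptions.

```python
# Illustrative exfiltration heuristic: unfamiliar or personal recipient domain + attachment.
FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}


def looks_like_exfiltration(sender_history, recipient, has_attachment):
    """sender_history: set of domains this employee has legitimately emailed before."""
    domain = recipient.split("@")[-1].lower()
    unfamiliar = domain not in sender_history
    personal = domain in FREEMAIL_DOMAINS
    return has_attachment and (personal or unfamiliar)


history = {"clientco.com", "acme.com"}
print(looks_like_exfiltration(history, "jane.doe@gmail.com", True))   # True: freemail + attachment
print(looks_like_exfiltration(history, "legal@clientco.com", True))   # False: established counterparty
```

A static rule like this is exactly the "blunt instrument" described above: it would also block plenty of legitimate mail, which is why behavioral context matters.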
Read Blog Post
Human Layer Security, DLP, Compliance
At a Glance: Data Loss Prevention in Healthcare
By Maddie Rosenthal
Sunday, May 30th, 2021
Data Loss Prevention (DLP) is a priority for organizations across all sectors, but especially for those in Healthcare. Why? To start, they process and hold incredible amounts of personal and medical data, and they must comply with strict data privacy laws like HIPAA and HITECH. Healthcare also has the highest costs associated with data breaches – 65% higher than the average across all industries – and has for nine years running.

But, in order to remain compliant and, more importantly, to prevent data loss incidents and breaches, security leaders must have visibility over data movement. The question is: do they? According to our latest research report, Data Loss Prevention in Healthcare, not yet.

How frequently are data loss incidents happening in Healthcare?

Data loss incidents are happening up to 38x more frequently than IT leaders currently estimate. Tessian platform data shows that in organizations with 1,000 employees, 800 emails are sent to the wrong person every year. Likewise, in organizations of the same size, 27,500 emails containing company data are sent to personal accounts. These numbers are significantly higher than IT leaders expected.
But what about Healthcare specifically? We found that:

Over half (51%) of employees working in Healthcare admit to sending company data to personal email accounts
46% of employees working in Healthcare say they've sent an email to the wrong person
35% of employees working in Healthcare have downloaded, saved, or sent work-related documents to personal accounts before leaving or after being dismissed from a job

And this only covers outbound email security. Hospitals are also frequently targeted by ransomware and phishing attacks, and Healthcare is the industry most likely to experience an incident involving employee misuse of access privileges. Worse still, new remote-working structures are only making DLP more challenging.
Healthcare professionals feel less secure outside of the office

While workforces around the world have suddenly transitioned from office to home over the last several months, this isn't a fleeting change. In fact, bolstered by digital solutions and streamlined virtual services, we can expect to see the global healthcare market grow exponentially over the next several years. While this is great news in terms of general welfare, we can't ignore the impact this might have on information security.

Half of employees working in Healthcare feel less secure outside of their normal office environment, and 42% say they're less likely to follow safe data practices when working remotely. Why? Most employees surveyed said it was because IT isn't watching, they're distracted, and they're not working on their normal devices. But we can't blame employees. After all, they're just trying to do their jobs, and cybersecurity isn't top-of-mind, especially during a global pandemic. Perhaps that's why over half (57%) say they'll find a workaround if security software or policies make it difficult or prevent them from doing their job. That's why it's so important that security leaders make the most secure path the path of least resistance.

How can security leaders in Healthcare help protect employees and data?

There are thousands of products on the market designed to detect and prevent data incidents and breaches, and organizations are spending more than ever (up from $1.4 million to $13 million) to protect their systems and data. But something's wrong. We've seen a 67% increase in the volume of breaches over the last five years and, as we've explored already, security leaders still don't have visibility over risky and at-risk employees.

So, what solutions are security, IT, and compliance leaders relying on? According to our research, most are relying on security training. And it makes sense. Security awareness training confronts the crux of data loss by educating employees on best practice, company policies, and industry regulation. But how effective is training, and can it influence and actually change human behavior for the long term? Not on its own. Despite having training more frequently than most industries, Healthcare remains among the most likely to suffer a breach. The fact is, people break the rules and make mistakes. To err is human! That's why security leaders have to bolster training and reinforce policies with tech that understands human behavior.

How does Tessian prevent data loss on email?

Tessian uses machine learning to address the problem of accidental or deliberate data loss. How? By analyzing email data to understand how people work and communicate. This enables Tessian Guardian to look at email communications and determine in real-time whether a particular email is about to be sent to the wrong person. Tessian Enforcer, meanwhile, can identify when sensitive data is about to be sent to an unsafe place outside an organization's email network. Finally, Tessian Defender detects and prevents inbound attacks like spear phishing, account takeover (ATO), and CEO fraud.
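To give a feel for one signal a misdirected-email check can use, here is a minimal, hypothetical sketch – not Tessian Guardian's actual model – that warns when a chosen recipient closely resembles, but doesn't match, someone the sender regularly emails, the classic autocomplete mix-up. The addresses and similarity threshold are made-up assumptions.

```python
# Illustrative misdirected-email check based on recipient lookalike similarity.
import difflib


def likely_misdirected(recipient, frequent_contacts, threshold=0.85):
    """Flag if the recipient is not a known contact but closely resembles one."""
    if recipient in frequent_contacts:
        return False, None
    for contact in frequent_contacts:
        if difflib.SequenceMatcher(None, recipient, contact).ratio() >= threshold:
            return True, contact  # probable autocomplete mix-up
    return False, None


contacts = {"sarah.jones@hospitalgroup.org", "s.khan@insurerpartner.com"}
print(likely_misdirected("sarah.jones@hospitalgrp.org", contacts))
# (True, 'sarah.jones@hospitalgroup.org') -> warn the sender before the email leaves
```

In practice, a real system would weigh far richer context (who usually receives this kind of content, the email's subject and history, and so on), but recipient similarity illustrates why these mistakes are so easy to make and so hard for a human to spot.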
Read Blog Post
Spear Phishing
What are Deepfakes? Are They a Security Threat?
Wednesday, May 26th, 2021
According to a recent Tessian survey, 74% of IT leaders think deepfakes are a threat to their organizations’ and their employees’ security*. Are they right to be worried? We take a look. What is a deepfake?
Deepfakes are highly convincing AI-generated audio or video forgeries that trick people into believing a person did or said something that never happened. Most people associate deepfakes with misinformation, and the use of deepfakes to imitate leaders or celebrities could present a major risk to people's reputations and to political stability. Deepfake tech is still young, and not yet sophisticated enough to deceive the public at scale. But some reasonably convincing deepfake clips of Barack Obama and Mark Zuckerberg have provided a glimpse of what the technology is capable of.

But deepfakes are also an emerging cybersecurity concern, and businesses will increasingly need to defend against them as the technology improves. Here's why security leaders are taking steps to protect their companies against deepfakes.

How could deepfakes compromise security?

Cybercriminals can use deepfakes in social engineering attacks to trick their targets into providing personal information, account credentials, or money. Social engineering attacks, such as phishing, have always relied on impersonation; some of the most effective types involve pretending to be a trusted corporation (business email compromise), a company's supplier (vendor email compromise), or the target's boss (CEO fraud). Typically, this impersonation takes place via email. But with deepfakes, bad actors can leverage multiple channels.

Imagine your boss emails you asking you to make an urgent wire transfer. It seems like an odd request for her to make but, just as you're reading the email, your phone rings. You pick it up and hear a voice that sounds exactly like your boss's, confirming the validity of the email and asking you to transfer the funds ASAP. What would you do? The bottom line is: deepfake generation adds new ways to impersonate specific people and leverage employees' trust.
Examples of deepfakes The first known deepfake attack occurred in March 2019 and was revealed by insurance company Euler Hermes (which covered the cost of the incident). The scam started when the CEO of a U.K. energy firm got a call from his boss, the head of the firm’s German parent company—or rather, someone the CEO thought was his boss. According to Euler Hermes, the U.K.-based CEO heard his boss’s voice—which had exactly the right tone, intonation, and subtle German accent—asking him to transfer $243,000, supposedly into the account of a Hungarian supplier. The energy firm’s CEO did as he was asked—only to learn later that he had been tricked. Fraud experts at the insurance firm believe this was an example of an AI-driven deepfake phishing attack. And in July 2020, Motherboard reported a failed deepfake phishing attempt targeting a tech firm. Even more concerning—an April 2021 report from Recorded Future found evidence that malicious actors are increasingly looking to leverage deepfake technology to use in cybercrime. The report shows how users of certain dark web forums, plus communities on platforms like Discord and Telegram, are discussing how to use deepfakes to carry out social engineering, fraud, and blackmail. Consultancy Technologent has also warned that new patterns of remote working are putting employees at an even greater risk of falling victim to deepfake phishing—and reported three such cases among its clients in 2020.
But is deepfake technology really that convincing? Deepfake technology is improving rapidly.  In her book Deepfakes: The Coming Infopocalypse, security advisor Nina Schick describes how recent innovations have substantially reduced the amount of time and data required to generate a convincing fake audio or video clip via AI. According to her, “this is not an emerging threat. This threat is here. Now”.   Perhaps more worryingly—deepfakes are also becoming much easier to make.  Deepfake expert Henry Ajder notes that the technology is becoming “increasingly democratized” thanks to “intuitive interfaces and off-device processing that require no special skills or computing power.” And last year, Philip Tully from security firm FireEye warned that non-experts could already use AI tools to manipulate audio and video content. Tully claimed that businesses were experiencing the “calm before the storm”—the “storm” being an oncoming wave of deepfake-driven fraud and cyberattacks.
How could deepfakes compromise election security?

There's been a lot of talk about how deepfakes could be used to compromise the security of the 2020 U.S. presidential election. In fact, an overwhelming 76% of IT leaders believe deepfakes will be used as part of disinformation campaigns in the election*. Fake messages about polling site disruptions, opening hours, and voting methods could affect turnout or prevent groups of people from voting. Worse still, disinformation and deepfake campaigns, whereby criminals swap out the messages delivered by trusted voices like government officials or journalists, threaten to cause even more chaos and confusion among voters.

Elvis Chan, a Supervisory Special Agent assigned to the FBI, told us that people are right to be concerned. "Deepfakes may be able to elicit a range of responses which can compromise election security," he said. "On one end of the spectrum, deepfakes may erode the American public's confidence in election integrity. On the other end of the spectrum, deepfakes may promote violence or suppress turnout at polling locations."

So, how can you spot a deepfake, and how can you protect your people from them?
How to protect yourself and your organization from deepfakes

AI-driven technology is likely to be the best way to detect deepfakes in the future. Machine learning techniques already excel at detecting phishing via email because they can detect tiny irregularities and anomalies that humans can't spot. But for now, here are some of the best ways to help ensure you're prepared for deepfake attacks:

Ensure employees are aware of all potential security threats, including the possibility of deepfakes. Tessian research* suggests that 61% of IT leaders are already training their teams about deepfakes, with a further 27% planning to do so.
Create a system whereby employees can verify calls via another medium, such as email. Verification is a good way to defend against conventional vishing (phone phishing) attacks, as well as deepfakes.
Maintain a robust security policy, so that everyone on your team knows what to do if they have a concern.
Read Blog Post