Tessian Blog

ATO/BEC, Human Layer Security
Phishing Campaigns Pick Up in the Wake of the Ukraine Invasion
By Charles Brook
Tuesday, April 5th, 2022
Key Takeaways

- We've seen an upward trend in the number of suspicious emails being flagged related to Ukraine.
- Spam campaigns started to appear only one day after the initial invasion by Russia.
- The number of new domains containing "Ukraine" registered in 2022 is up 210% from 2021.
- An average of 315 new Ukraine-themed domains has been observed per day since 24th February.
- 77% of these domains appear to be suspicious based on early indicators.
Overview

The conflict taking place in Ukraine has quickly become a common theme for threat actors and scammers alike. Tessian has observed an upward trend in Ukraine-themed emails flagged by our platform, including a number of threat campaigns that exploit the conflict as a theme for new scams, malspam, and phishing. In line with this, open source intelligence shows a significant increase in the number of Ukraine-themed domains being registered, which can be used for malicious purposes.

The scams observed typically request donations in the form of cryptocurrency under the pretense of supporting the Ukrainian humanitarian effort in the wake of the Russian invasion. The spam is similar to common campaigns previously observed, pushing links to suspicious e-commerce sites selling Ukraine-themed items.
Trend analysis

Domain registrations

There has been a significant upward trend in the number of new domains being registered that contain "Ukraine". The number of these domains being registered is up more than 210% in 2022 compared to 2021. Researching domain registrations, we can see the upward trend progressing over the past two months.
Since early March, an average of 340 new domains have been registered each day that either contain "Ukraine" or closely resemble the word. Our platform observed an initial upward trend in Ukraine-themed emails, which peaked in early March. This included the spam campaigns and donation scams.
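To make this kind of trend analysis concrete, here is a minimal sketch of how you might reproduce such a count yourself. It assumes you have a newly-registered-domain feed exported to CSV; the file name and the "domain" and "registered_on" column names are hypothetical placeholders, not Tessian's data.

```python
# Minimal sketch: count newly registered "Ukraine"-themed domains per day.
# The CSV file name and column names are hypothetical placeholders for
# whatever domain-registration feed you use.
import csv
from collections import Counter

def daily_ukraine_domain_counts(path):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if "ukraine" in row["domain"].lower():
                counts[row["registered_on"]] += 1   # ISO date string, e.g. 2022-03-01
    return dict(sorted(counts.items()))

if __name__ == "__main__":
    for day, total in daily_ukraine_domain_counts("new_domains.csv").items():
        print(day, total)
```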
Threat campaign explainer

Donation scams

Donations from around the world have been made in support of Ukraine in the wake of the Russian invasion. Unfortunately, leveraging humanitarian efforts such as the one currently underway in Ukraine to perpetrate phishing-related fraud has become a common modus operandi for threat actors and fraudsters. This explains why phishing remains among the top reported cybersecurity incidents according to the FBI's latest Internet Crime Report, with over 323k reported incidents for 2021.

The donation scams vary in sophistication, from basic emails containing a short message with a plea for help, to fake websites set up to impersonate charitable organizations like the British Red Cross.

One of these scam emails claims to be supporting the humanitarian aid effort in Ukraine and requests Bitcoin donations. Legitimate website text and logos from the likes of UNICEF, Actalliance and the Australian Council for International Development (ACFID) are being fraudulently leveraged to enhance the authenticity of the phishing emails. The threat campaign detailed below, purporting to be a legitimate humanitarian aid effort for Ukraine from ACFID, requests Bitcoin donations and allows victims to donate via a direct Bitcoin address or a malicious QR code.
Phishing email purporting to be from the ACFID  
Scanning the QR code with the iOS camera app will prompt you to open a locally installed payment app that supports Bitcoin. In this case, Cash App.   According to Blockchain Explorer, the last transaction to take place with the address in this email was on 2022-02-14 with only 6 transactions in total.    Another donation scam was sent from a newly registered domain redcrossukraine[.]org impersonating the Red Cross in Ukraine. The email contained a link to a professional looking website containing details of the Ukraine conflict as well as instructions on how to donate cryptocurrency in aid of Ukraine.
The site was based on a Bootstrap template by BootstrapMade, which gave it the look and feel of a legitimate website. Towards the bottom were references to addresses for three different crypto wallets you could send payments to as a 'donation': one for Bitcoin, one for Ethereum, and one for Tether.
Ukraine-themed spam

Spammers have also reacted quickly to the invasion of Ukraine by adjusting the themes of their campaigns. One notable spam campaign, only a day after the initial invasion, began blasting out spam with links to suspicious e-commerce sites pushing the sale of t-shirts and other items to show support for Ukraine. The emails sent out in the campaign have subjects like "I Stand With Ukraine Shirts", contain images of t-shirts with slogans in support of Ukraine, and point to recently created sites like mimoprint[.]info or mabil-store[.]com where you can browse and purchase some of the products referenced in the email. Searching these sites online reveals reviews claiming that they are scams and that no product is ever received if a purchase is made. Other reviews claim they steal designs from users on other sites.

Recommended action

Some charities do accept cryptocurrency donations, but be cautious of any email purporting to collect donations in support of the humanitarian effort in Ukraine. If cryptocurrency is requested in an unsolicited email, the likelihood is that it is a scam. Before interacting with any Ukraine-themed email you receive, check the source and email header to confirm that the organization it originated from is legitimate. If you want to make a donation in support of Ukraine, the best way is to go directly to your preferred charitable organization. CNET has published a list of reputable charities accepting donations in aid of Ukraine.
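As a rough illustration of the "check the source and email header" advice, the sketch below parses a saved .eml file with Python's standard email library and surfaces the sender's domain and SPF/DKIM results. The charity domain allow-list is purely illustrative, not authoritative or exhaustive.

```python
# Sketch: basic origin checks on a suspicious donation email.
# LEGIT_DOMAINS is an illustrative allow-list, not an authoritative one.
from email import policy
from email.parser import BytesParser

LEGIT_DOMAINS = {"unicef.org", "redcross.org.uk", "acfid.asn.au"}

def inspect(eml_path):
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    sender = msg["From"].addresses[0] if msg["From"] else None
    domain = sender.domain.lower() if sender else ""
    auth = (msg["Authentication-Results"] or "").lower()
    print("From domain:", domain or "(missing)")
    print("Known charity domain:", domain in LEGIT_DOMAINS)
    print("SPF pass:", "spf=pass" in auth, "| DKIM pass:", "dkim=pass" in auth)

# inspect("suspicious_donation_request.eml")
```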
ATO/BEC, Human Layer Security, Life at Tessian
Book Recommendations for Security Professionals
By Maddie Rosenthal
Friday, April 1st, 2022
Looking for some summer reading? We've pulled together a little reading guide for when you get some well-earned downtime. We asked around the Tessian offices for recommendations for good reads in the tech and security space. Here are the team's recommendations.
Cyber Privacy: Who Has Your Data and Why You Should Care – April Falcon Doss
Amazon, Google, Facebook, governments. No matter who we are or where we go, someone is collecting our data: to profile us, target us, assess us; to predict our behavior and analyze our attitudes; to influence the things we do and buy – even to impact our vote. Read more at Goodreads.

Social Engineering: The Science of Human Hacking – Christopher Hadnagy
Social Engineering: The Science of Human Hacking reveals the craftier side of the hacker's repertoire – why hack into something when you could just ask for access? Undetectable by firewalls and antivirus software, social engineering relies on human fault to gain access to sensitive spaces. In this book, renowned expert Christopher Hadnagy explains the most commonly used techniques that fool even the most robust security personnel, and shows you how these techniques have been used in the past. We take a deep dive into the psychology of human error in this report, with insights from Stanford Psychology and Communications professor Jeff Hancock. Read more at Goodreads.

The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats – Richard A. Clarke and Robert K. Knake
"Great book on the challenges of cyberwarfare policy" – Paul Sanglé-Ferrière, Product Manager, Tessian. An urgent new warning from two bestselling security experts – and a gripping inside look at how governments, firms, and ordinary citizens can confront and contain the tyrants, hackers, and criminals bent on turning the digital realm into a war zone. Read more at Goodreads.

The Wires of War: Technology and the Global Struggle for Power – Jacob Helberg
From the former news policy lead at Google, an urgent and groundbreaking account of the high-stakes global cyberwar brewing between Western democracies and the autocracies of China and Russia that could potentially crush democracy. Read more at Goodreads.

This Is How They Tell Me the World Ends: The Cyberweapons Arms Race – Nicole Perlroth
Filled with spies, hackers, arms dealers, and a few unsung heroes, written like a thriller and a reference, This Is How They Tell Me the World Ends is an astonishing feat of journalism. Based on years of reporting and hundreds of interviews, The New York Times reporter Nicole Perlroth lifts the curtain on a market in shadow, revealing the urgent threat faced by us all if we cannot bring the global cyber arms race to heel. Read more at Goodreads.

The Art of Invisibility: The World's Most Famous Hacker Teaches You How to Be Safe in the Age of Big Brother and Big Data – Kevin Mitnick & Robert Vamosi
In The Art of Invisibility, Mitnick provides both online and real-life tactics and inexpensive methods to protect you and your family, in easy step-by-step instructions. He even talks about more advanced "elite" techniques, which, if used properly, can maximize your privacy. Read more at Goodreads.

The Cuckoo's Egg – Clifford Stoll
"Probably the original threat actor report – so good" – Matt Smith, Software Engineer at Tessian. In 1986, Clifford Stoll – a systems administrator at the Lawrence Berkeley National Laboratory – detected an intrusion that he later documented in this book, based on his field notes. It is arguably one of the first documented cases of a computer hack and the subsequent investigation, which eventually led to the arrest of Markus Hess. It's now considered an essential read for anyone interested in cybersecurity. Read more at Goodreads.
CISO Compass: Navigating Cybersecurity Leadership Challenges with Insights from Pioneers – Todd Fitzgerald
While this book covers all the fundamentals of IT security governance and risk management, it also digs deeper into people. After all, being a CISO isn't just about technology. The insights in the book come directly from CISOs. In total, 75 security leaders contributed to the book, which means there's plenty of actionable advice you can apply to your strategies. Looking for more insights from security leaders? Check out Tessian's CISO Spotlight series. Read more at Goodreads.

Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers – Andy Greenberg
Politics play a big role in cybercrime. This book is focused on Sandworm, the group of Russian hackers who, over the last decade, has targeted American utility companies, NATO, and electric grids in Eastern Europe and paralyzed some of the world's largest businesses with malware. But the author, Wired senior writer Andy Greenberg, also provides plenty of background on both the technology and the relationships between various countries. Read more at Goodreads.

Cult of the Dead Cow – Joseph Menn
Cult of the Dead Cow is the tale of the oldest, most respected, and most famous American hacking group of all time. Though until now it has remained mostly anonymous, its members invented the concept of hacktivism, released the top tool for testing password security, and created what was for years the best technique for controlling computers from afar, forcing giant companies to work harder to protect customers. Cult of the Dead Cow explores some of the world's most infamous hacking groups – particularly the cDc – and explains how technology, data, and – well – the world has changed because of them. Read more at Goodreads.

The Making of a Manager: What to Do When Everyone Looks to You – Julie Zhuo
Congratulations, you're a manager! After you pop the champagne, accept the shiny new title, and step into this thrilling next chapter of your career, the truth descends like a fog: you don't really know what you're doing. Read more at Goodreads.

CISM Certified Information Security Manager All-in-One Exam Guide
Yes, this is an exam guide – and yes, you should add it to your reading list, if nothing else to have on hand as a reference. Why? It covers everything: security governance, risk management, security program development, and security incident management. Curious as to whether or not other security professionals have their CISM certification? We interviewed 12 women about their journeys in cybersecurity. Read their profiles here and the full report, Opportunity in Cybersecurity Report 2020. Read more at Goodreads.

The health benefits of reading

Whatever you choose to read these holidays, the health benefits of reading are well documented. As our Lost Hours report revealed, many CISOs aren't taking time out from their jobs to de-stress and unwind. So make sure you schedule a little "you time" with a good book.
This Crazy Simple Technique Phished 84% of Executives Who Received It
By KC Busch
Friday, April 1st, 2022
It's a fact that C-suite executives are high-profile targets. According to Verizon's Data Breach Investigations Report, a C-level executive is 12 times more likely to get phished than a junior employee. Why? Because when you control the head, you can control the body. Hack a CxO, and you have the power to command subordinates with requests such as wire transfers, account access resets, network access, corporate credit card details, and company financial functions.

So how can you help better protect your C-suite from attacks? In former roles, a current Tessian employee used this training technique to test the C-suite's resilience to an attack and improve their security awareness. Here's how you can run this experiment on your C-suite; we've anonymized and edited some of the details for security and clarity.
The technique   The target for the attack is a CxO or another highly senior person in your company. Your first step is to send a ‘spoof’ email from one of the senior VPs on the team to the target. For real attackers, finding the targets is easy enough as most companies list out their senior team on their websites, sometimes even complete with links to their LinkedIn profiles. And even if the information isn’t available there, it’s easy enough to find with a quick Google search or browse around social media. 
The message you design from the spoof account has to say something along the lines of “Hey Tom (CxO), Dick here, Harry (another real VP on the team) is being a blocker and I need this out ASAP, can you just take a look at this work he sent me.”   
The 'work' is of course a document containing the malware. Phrased in this way, there is an extremely high probability that Tom, the CxO, will open that attachment. As our anonymous employee explains, "you don't even need to hack a VP account, you can simply spoof it. I've had clicks on this with email metadata that clearly does not match."
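Here is a minimal sketch of the kind of check that catches this pattern, assuming you keep a directory of executive display names and know your corporate sending domain (both are placeholders here): flag mail whose display name matches a known leader but whose address comes from somewhere else.

```python
# Sketch: flag the "spoofed VP" pattern -- a known executive's display name
# paired with a non-corporate sending domain. Names and domain are placeholders.
from email.utils import parseaddr

EXECUTIVE_NAMES = {"dick jones", "harry smith"}   # hypothetical directory entries
CORPORATE_DOMAIN = "example.com"                  # hypothetical

def looks_spoofed(from_header):
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    return display.strip().lower() in EXECUTIVE_NAMES and domain != CORPORATE_DOMAIN

print(looks_spoofed('"Dick Jones" <dick.jones@freemail.example>'))   # True
print(looks_spoofed('"Dick Jones" <dick.jones@example.com>'))        # False
```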
The psychology of why this is so effective

Why this works so well, despite not being a particularly sophisticated technique technologically, is all to do with human nature and the power dynamics found in the top tier of organizations. The message is 'validated' in the target's mind by the bond of trust the CxO has in Harry, the innocent VP. The attacker is borrowing that trust bond to psychologically 'authenticate' the email with the target.
This is textbook social engineering: the technique of manipulating the victim's natural emotional responses and reactions. That's why we also see the other classic ingredients of phishing attacks, like a sense of urgency ("I need this ASAP") and a cry for help, or a perceived problem that only the target can rectify. After all, there's some evidence that humans are born with an urge to help.
Why do senior leaders fall for this?

This exercise manipulates the tendency for senior leaders to overvalue trust bonds from people closest to them and undervalue potentially conflicting information coming from those far below in the org chart. The fact that the message appears to have come from two people the CxO talks to all the time means they will tend to overvalue communication with them. Furthermore, the message is contextualized around potential turbulence in the relationship dynamic within that inner circle, rather than an external, generic "your account's been suspended" phishing attempt. This only adds to its legitimacy.

As for that relationship turbulence, our recent Psychology of Human Error 2022 report found that people make mistakes when they are in highly emotional states. Catch the CxO when they're distracted, stressed, or working quickly and the odds of them clicking only increase. Finally, traditional security awareness training for the C-suite often means executives know when it is being deployed across the organization, making it even harder to present them with an attack moment that feels genuine.
Advice for running this experiment in your organization

If you're looking to run this phishing test on your team, there are two important principles to consider. The first: don't cause chaos on your own network. The second: this is about building relationships with the security team, not about adversarial content. "The most important thing is consent," advises our employee. All too often, companies run phishing programs that don't involve other people and teams before they start their campaigns.

To do this successfully, you'll need to define parameters such as the following (a sample test plan is sketched below):

- Who is this test targeted at?
- What will the message describe?
- What am I allowed to include in the payload?
- Between what dates will I run the test, and at what time?
- What happens after the click?

That last one is perhaps the most important. What program will you have in place for those who fall for the phish? Remember, these are high-profile people with limited time, so your response has to be concise yet impactful. Consider scheduling a face-to-face 15-minute discussion where you can explain how the attack worked and what the employee should do in the future. Be sure to back this up with evidence and statistics on what happens when a C-suite member gets hacked, and what the impact is on the organization. The aim is to connect their click event with the specific, actionable risks it brings into the business.
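One way to make those parameters concrete is to capture them in a single artifact that the security team, the People team, and Legal can sign off on before anything is sent. The sketch below is one hypothetical shape for that plan; every value is a placeholder, not a recommendation from the article.

```python
# Hypothetical phishing-test plan captured as data; all values are placeholders.
PHISHING_TEST_PLAN = {
    "targets": ["cxo@example.com"],
    "pretext": "Spoofed VP escalation asking the CxO to review a document",
    "payload": "benign attachment with click tracking only",
    "window": {"start": "2022-05-02", "end": "2022-05-06", "send_time": "09:30"},
    "after_click": "15-minute 1:1 debrief within 48 hours",
    "approved_by": ["CISO", "People team", "Legal"],
}
```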
Technology can help, too. Tessian Defender would have detected this social engineering attack, as the metadata in the email address wouldn't have matched the legitimate address. Defender could have quarantined it for SOC analysis or alerted Tom, the CxO, in real time with an 'in the moment' notification. Learn more about how Tessian detects and prevents BEC attacks, ATO attacks, CEO Fraud, and other advanced email threats by watching our product tour, or book a demo to learn more.
ATO/BEC, Email DLP, Human Layer Security
New Research: One in Four Employees Who Made Cybersecurity Mistakes Lost Their Jobs Last Year
By Laura Brooks
Tuesday, March 29th, 2022
According to our new research, one in four employees lost their job in the last 12 months after making a mistake that compromised their company's security. The new report, which explores human error on email at work, also found that:

- Just over one in four respondents (26%) fell for a phishing email at work in the last 12 months
- Two-fifths (40%) of employees sent an email to the wrong person, with almost one-third (29%) saying their business lost a client or customer because of the error
- Over one-third (36%) of employees have made a mistake at work that compromised security, and fewer are reporting their mistakes to IT
Why do people make mistakes at work?

When asked why these mistakes happened, half of employees said they had sent emails to the wrong person because they were under pressure to send the email quickly – up from 34% reported by Tessian in its 2020 study – while over two-fifths of respondents cited distraction and fatigue as reasons for falling for phishing attacks. More employees attributed their mistakes to fatigue and distraction in the past year, versus figures reported in 2020, likely brought on by the shift to hybrid working.
People are falling for more advanced phishing attacks

While the number of employees who fell for phishing attacks only increased by 1% in the last 12 months, people were far more likely to fall for more advanced phishing attacks than they were in 2020. Over half of employees (52%) said they fell for a phishing email because the attacker impersonated a senior executive at the company – up from 41% reported in 2020. In comparison, click-through rates on phishing emails in which threat actors impersonated well-known brands dropped. These findings mirror those reported by the FBI, which found that business email compromise (BEC) attacks are eight times more common than ransomware and that the losses from these attacks continue to grow year on year.

People were also susceptible to phishing attacks over SMS (smishing), with one-third of respondents being duped by a smishing request in the last 12 months, compared to 26% who fell for phishing scams over email. Older employees were more susceptible to smishing attacks; one-third of respondents aged over 55 complied with requests in smishing scams, versus 24% of 18- to 24-year-olds.
The consequences of accidental data loss are more severe

On average, a US employee sends four emails to the wrong person every month – and organizations are taking tougher action in response to these mistakes that compromise data. Nearly a third of employees (29%) said their business lost a client or customer after sending an email to the wrong person – up from 20% in 2020. 21% of respondents also lost their job because of the mistake, versus 12% in July 2020. Over one-third (35%) of respondents had to report the accidental data loss incidents to their customers, breaking the trust they had built. Businesses also had to report the incidents to regulators. In fact, the number of breaches reported to the Information Commissioner's Office caused by data being sent to the wrong person on email was 32% higher in the first nine months of 2021 than in the same period in 2020.
Employees are fearful of reporting mistakes

With harsher consequences in place, Tessian found that fewer employees are reporting their mistakes to IT. 21% said they didn't report security incidents, versus 16% in 2020, resulting in security teams having less visibility of threats in the organization.
Josh Yavor, CISO at Tessian, said, “We know that the majority of security incidents begin with people’s mistakes. For IT and security teams to be successful, they need visibility into the human layer of an organization, so they can understand why mistakes are happening and proactively put measures in place to prevent them from turning into serious security incidents. This requires earning the trust of employees; and bullying employees into compliance won’t work. Security leaders need to create a culture that builds trust and confidence among employees and improves security behaviors, by providing people with the support and information they need to make safe decisions at work.”
ATO/BEC, Email DLP
Buyer’s Guide to Integrated Cloud Email Security
By John Filitz
Tuesday, March 29th, 2022
The next generation of email security, referred to by Gartner as Integrated Cloud Email Security (ICES) solutions, brings a fresh approach to solving increasingly sophisticated and elusive email security threats. Born in the cloud, for the cloud, ICES solutions are seen as an integral additional layer of email security, complementing the native email security capabilities present in cloud productivity suites such as Microsoft 365 and Google Workspace.

At last count, according to the latest Gartner Market Guide for Email Security (2021), there were 13 ICES vendors – giving customers plenty of options to choose from. Not every ICES vendor, however, offers the same completeness of vision, degree of protection, or intelligent capabilities. This short guide provides insight into some of the key fundamentals that prospective buyers of an ICES solution should be aware of.
Why is there a need for ICES solutions in the first place?

Evidence shows that email remains an important and attractive attack vector for threat actors; according to a recent study, it's responsible for up to 90% of all breaches. The fact that the vast majority of breaches are attributed to an email compromise indicates that the current email security status quo is insufficient to prevent breaches. This was confirmed in a Forrester survey conducted on behalf of Tessian, with over 75% of organizations reporting that, on average, 20% of email security incidents get past their existing security controls.

Threat actors are using more sophisticated email-based techniques, and attacks are achieving greater success. This is largely due to the commercialization of cybercrime, with Phishing-as-a-Service and Ransomware-as-a-Service offerings becoming more prevalent on the dark web. In this new world, threat actors develop exploit kits and offer their services for sale, which has unfortunately led to a dramatic increase in attackers' ability to find targets. This explains why the cost of damages from cybercrime is expected to rocket to $10.5 trillion by 2025 – representing a +350% increase from 2015.

Digital transformation is another key driver. Cloud adoption was accelerating prior to the Covid-19 pandemic, and in its wake, adoption accelerated even more quickly. This dramatic shift to the cloud has significantly expanded attack surface risk, with employees working from home, and often on personal devices. This structural shift in computing has also revealed the soft underbelly of legacy cybersecurity solutions built for an on-premise world, including the rule-based and static protection for email offered by Secure Email Gateways (SEGs). It explains why 58% of cybersecurity leaders are actively looking to displace SEGs for the next generation of email security – with behavioral intelligence and machine learning at the core.
ICES fundamentals

Approach to threat detection and prevention

The key differentiator between SEGs and ICES solutions from a threat detection standpoint is that ICES solutions are underpinned by machine learning and use a behavioral intelligence approach to threat detection. An ICES solution's algorithm develops a historical behavioral map of an organization's email ecosystem. This historical behavioral map is leveraged, along with Natural Language Processing (NLP) and Natural Language Understanding (NLU) capabilities, to dynamically scan for and detect anomalous email behavior. Unlike SEGs, this enables these solutions to detect threats as they arise, in real time.

Deployment architecture

There are also important differences between the architecture and configuration of ICES solutions and SEGs. ICES solutions do not sit in-line like SEGs and do not require MX re-routing; instead, they connect either via a connector or an API and scan email either pre-delivery or post-delivery – detecting and quarantining any malicious email.
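To illustrate the behavioral-intelligence idea in the simplest possible terms, the toy sketch below keeps a running count of who normally emails whom and scores how unusual a new sender/recipient pair is. Production ICES models are far richer (NLP/NLU over many fields); this is only a conceptual illustration, not any vendor's implementation.

```python
# Toy sketch of behavioral anomaly scoring: unfamiliar sender/recipient pairs
# score close to 1.0, well-established pairs score close to 0.
from collections import Counter

history = Counter()   # (sender_domain, recipient) -> count of past emails

def observe(sender_domain, recipient):
    history[(sender_domain, recipient)] += 1

def anomaly_score(sender_domain, recipient):
    return 1.0 / (1 + history[(sender_domain, recipient)])

for _ in range(50):
    observe("trusted-partner.example", "alice@example.com")

print(anomaly_score("trusted-partner.example", "alice@example.com"))    # ~0.02, familiar
print(anomaly_score("lookalike-partner.example", "alice@example.com"))  # 1.0, never seen
```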
Degree of security automation

ICES solutions also offer a high degree of email security automation, including the triaging of security incidents, which significantly reduces alert fatigue and the SOC burden, ultimately improving security effectiveness.
Key differences between SEGs and ICES

SEG: Requires MX record changes, sits in-line, and acts as a gateway for all email flow.
ICES: Requires no MX record changes and scans incoming email downstream from the MX record, either pre-delivery via a connector or post-delivery via an API.

SEG: Designed to detect basic phishing attacks, spam, malware and graymail; no zero-day protection.
ICES: Designed to detect advanced social engineering attacks including spear phishing, impersonation attacks, business email compromise (BEC), and account takeover (ATO); advanced zero-day protection.

SEG: Static, rule- and policy-based protection with no intelligent component to threat detection for inbound or outbound email, resulting in high false positives and significant triaging of email security incidents.
ICES: Behavioral and machine learning detection engine for advanced inbound and outbound threats, resulting in greater detection efficacy and lower false positives, i.e. less business interruption and more SOC optimization.

SEG: Limited insider threat detection and no lateral attack detection capability. Once the threat has bypassed the gateway, the threat actor has unlimited access to the victim's data and information systems.
ICES: Advanced insider and lateral attack detection capability, stopping threats where and when they arise.

SEG: Basic email field scanning capability. Relies on a threat engine of previously identified threats, and static rules and policies.
ICES: All of the email fields are analyzed using machine learning and compared against a historical mapping of email correspondence. Fields scanned include the sender, recipient, subject line, body, URL and attachments.

SEG: Advanced malicious emails go undetected and reach target inboxes. Some of the less sophisticated malicious emails end up in the spam or junk folder, enabling users to accidentally interact with them.
ICES: Advanced malicious emails are detected and automatically hidden from users' inboxes. With the pre-delivery option, only email that is determined to be safe is delivered. Post-delivery solutions will claw back a suspected malicious email within nanoseconds.

SEG: No in-the-moment employee security warnings. Security alerts are retroactive and aimed at SecOps, offering no context to employees or the ability to improve the security culture.
ICES: An in-the-moment security notification banner can be added to an incoming or outgoing email, indicating the level of risk of the scanned email and the context. These real-time security notifications lead to improved security culture by empowering employees to take safe action in real time.

SEG: Basic DLP capability.
ICES: Some ICES solutions, like Tessian, have advanced DLP capability.
Five market differentiators for ICES solutions

Not all ICES solutions, however, offer the same degree of completeness in product and protection. It is important that prospective customers of ICES solutions understand and interrogate the following key differentiators during the vendor selection process:

1. Completeness of the product offering and product roadmap. Does the solution cover inbound and outbound email protection (i.e. does it prevent email data loss events from occurring)? Does it have pre-built integrations with other cybersecurity tools such as SIEMs?

2. Degree of protection offered. During the POV it is important to test the efficacy of the algorithm and determine a true baseline of detection, including the percentage of false positives. Verify the actual results from the POV against the vendor's stated claims (a short sketch of these checks follows this list).

3. Deployment and management overhead. Some vendors make unrealistic claims of "protection within seconds" – understanding the actual amount of FTE resources and time needed for deployment is crucial, as is the product's ability to scale. Determining the degree of management FTE required to run the tool on a day-to-day basis is equally important.

4. UX and reporting capability. The overall UX, including the UI for SecOps teams, and feedback from employees after using the product during the POV are essential. Evidence shows that if the UX is poor, the security effectiveness of the tool will be diminished. Having the ability to pull on demand, or automate, risk metric reporting down to the employee level – for inbound and outbound email – is crucial for cybersecurity and risk compliance leaders.

5. Degree of automation. Automation is fast becoming a buzzword in cybersecurity. Buyers need to be aware of the degree of automation an ICES solution actually delivers, ranging from threat detection to the triaging of threats, as well as risk reporting.
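For point 2, verifying a vendor's stated claims against the POV usually comes down to a few ratios computed from the trial's outcomes. The counts below are invented purely to show the arithmetic, not results from any real evaluation.

```python
# Sketch: detection efficacy and false-positive rate from invented POV counts.
true_positives, false_negatives = 92, 8        # malicious emails caught / missed
false_positives, true_negatives = 12, 4888     # clean emails flagged / passed

detection_rate = true_positives / (true_positives + false_negatives)
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"Detection efficacy: {detection_rate:.1%}")        # 92.0%
print(f"False positive rate: {false_positive_rate:.2%}")  # 0.24%
```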
The final word

All it takes is one click on malicious content for a breach to take place. When assessing and selecting an ICES solution, it is important that customers weigh the criteria listed above as part of their general vendor assessment, with particular attention to the completeness of the product offering and the degree of protection offered. Finally, it's the human side that often goes unmentioned in vendor assessments: the experience of interacting with the vendor, from the first conversation through to the end of the POV, should provide key insight into what the future partnership will look and feel like.
About Tessian

Tessian is one of the few ICES vendors that offers comprehensive protection against inbound threats like advanced spear phishing attacks, as well as outbound protection, preventing malicious and accidental data loss. Unlike many of our ICES competitors, we don't treat our customers as test subjects – our algorithm was developed and fine-tuned for four years before we went live. Due to this level of product maturity, we boast among the lowest percentages of false positives in our industry.

We have one of the most attractive UIs in the market, delivering a phenomenal UX. This includes advanced and automated cyber risk reporting, making security and risk leaders' lives easier. We never make claims that we can't back up: we deploy in seconds and protect within hours, and both the deployment and management overhead are extremely efficient due to product maturity and the degree of automation inherent in our product. Finally, it's worth mentioning that we take our customers seriously. Here's what some of them have to say about using our product:
ATO/BEC
Tessian Defender API Deployment and Enhanced Quarantine Capability
By Robert Slocum
Friday, March 25th, 2022
In today's environment of increasing cyber threats and complexity, email is only growing in prominence as an attack vector. With Tessian's behavioral intelligence email security, we provide comprehensive protection from the most advanced email threats of today and tomorrow, including advanced anti-phishing protection and email data loss prevention.

We're excited to announce the release of our new Microsoft 365 API that enables deployment of Tessian's inbound protection in seconds and provides unparalleled protection within hours. The seamless Microsoft 365 integration presents an opportunity to consolidate your cybersecurity stack, making it easy to displace your Secure Email Gateway for the next generation of email security, Tessian. You can download our full solution brief here.

The release of the API and new advanced quarantine isolation capabilities marks yet another milestone in Tessian's growth and solidifies its place as the Integrated Cloud Email Security (ICES) market leader – offering clients a simplified integration that enables comprehensive email protection against the most advanced inbound threats.
Taking the effort out of integration

Where traditional gateway deployments take months, the Tessian API enables seamless integration for Microsoft 365 clients, whether in on-premise, hybrid, or cloud environments. Deploy Tessian within seconds and protect within hours. No configuration is required.

API deployment simplified

The API allows deployment in three simple steps:

1. Enable the connection to user mailboxes feature and select the + Defender Protection option
2. Grant the required permissions for Tessian to connect
3. Assign user mailboxes to the Directory Group for Tessian protection
The benefits of API deployment

The benefits of Tessian's API deployment include:

Low-effort integration and management
- No complex manual configuration, MX record changes, or email rerouting needed
- Low management overhead, enabling security teams to focus only on malicious emails
- No manual updates required – you're always running the latest version of our advanced threat protection

Reduced operational risk and enhanced security
- Elimination of point-of-failure risk and negative performance impacts thanks to a simplified architecture that does not sit in-line
- Significantly reduced SOC burden and alert fatigue
- Significantly reduced false positives, filtering out the noise from the actual threats

Scaled protection on demand
- An enterprise-scalable solution that also accommodates the SMB sector
- Simply add new users to the Directory Group
- Protection extends to all devices, including mobile
New levels of control and enhanced protection

We're also excited to announce new quarantine features as part of the Microsoft 365 API for inbound protection, providing enhanced levels of control through our advanced quarantine threat isolation capability. The two user-friendly quarantine features are designed to stop threats without interrupting business, and were built with security admins and employees in mind. The end result: a significantly reduced SOC burden, saving resources, with only malicious emails quarantined.

Admin Quarantine: Depending on the enforcement threshold selected by the security admin, emails determined to be malicious by Tessian's algorithm are automatically quarantined for further analysis.

Soft Quarantine: Only emails with a lower probability of being malicious are sent to employees. Here, the employee receives a "defanged" copy of the email together with an in-the-moment security warning message, enabling them to decide whether to allow or delete the original email.
How it works

Admin Quarantine: The Admin Quarantine capability automatically detects malicious emails and quarantines them on arrival. These emails have the highest probability of being malicious. They are temporarily removed from the employee's inbox and assigned to the security admin via an alert notification. The security admin triages the threat and can decide to release or delete the email, either from the Tessian portal or from the alert notification itself.

Soft Quarantine: The Soft Quarantine function detects emails with a lower probability of being malicious and, instead of sending them to the security admin, holds them in a "Soft Quarantine" – a hidden folder in the employee's email account. These emails are not sent to the junk folder, in order to prevent accidental interaction by the employee. Tessian sends a "defanged" copy of the email to the employee with an alert notification, warning them that the flagged email is potentially malicious. The "defanging" effectively neutralizes hyperlinks and removes attachments, stripping out any malicious payload; the original email is not released until it is determined not to be malicious.
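As a generic illustration of what "defanging" can involve (this is not Tessian's implementation), the sketch below takes a raw message, rewrites link schemes so they can't be clicked, and notes which attachments were stripped from the copy shown to the employee.

```python
# Generic defanging sketch, not Tessian's implementation: neutralize URL
# schemes and strip attachments from the preview shown to the employee.
from email import message_from_bytes, policy

def defanged_preview(raw_bytes):
    msg = message_from_bytes(raw_bytes, policy=policy.default)
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    # Rewrite schemes so mail clients won't auto-link or follow them.
    text = text.replace("https://", "hxxps://").replace("http://", "hxxp://")
    stripped = [part.get_filename() or "unnamed" for part in msg.iter_attachments()]
    notice = f"\n\n[{len(stripped)} attachment(s) removed: {', '.join(stripped) or 'none'}]"
    return text + notice

sample = b"From: a@example.com\nSubject: Invoice\n\nPay at http://evil.example/pay\n"
print(defanged_preview(sample))
```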
In-the-moment security training hardens your security posture in real time   We believe employees are a company’s greatest security asset. With our in-the-moment security awareness notifications, we provide the necessary contextual understanding to prompt safer behavior. Not only is each warning message contextualized to the specific threat, but it also delivers a memorable and individualized security awareness training session.   Our customers consider these warnings an extension of their security awareness training programs, which helps build a more security conscious employee base and improves the security culture, in real time.
Intelligent and comprehensive email security protects against advanced threats

The threatscape is only increasing in sophistication and scope, with threat actors continuously refining attack methods to circumvent rule-based security controls. This helps explain why social engineering attacks delivered via email remain the number one attack vector. Given the high success rate of email-based attacks, it is clear that legacy rule-based email security solutions are no longer capable of keeping employees and data safe. This new reality has driven the need for intelligent email security solutions that provide real-time protection and threat defense capability against advanced threats.

The new Tessian API release for Microsoft 365 and quarantine functionality, together with the full capability of Tessian's security platform, provide comprehensive email security against advanced inbound and outbound threats – giving customers peace of mind that email security is one less challenge they have to deal with. This is why our customers can't imagine a world without Tessian in their environment. Want to learn more? See how Tessian prevents ransomware attacks and bolsters DLP, watch a product overview video, or book a demo.
Human Layer Security
IT Departments are Looking for New Jobs: Here’s How to Retain Talent
By Andrew Webb
Thursday, March 24th, 2022
You can't stop people from leaving for pastures new; employee turnover is a natural function of any organization. But when that trickle turns into a flood, there's an issue. Our recent Great Re-evaluation research revealed that 55% of employees are thinking about leaving their jobs this year. What's more, 39% are currently working their notice period or actively looking for a new role in the next six months. But who's leaving, and why? According to research by Harvard Business Review, 'mid-career' employees between 30 and 45 years old have seen the greatest increase in resignation rates. The research also identified the most at-risk sectors and, alarmingly, tech industry resignations came out on top, with an increase of 4.5% (compared to 3.6% in healthcare, for example). If this sounds like the situation in your security or IT team, here's why they might be leaving, and what you can do about it.
Why are people quitting?

A recent McKinsey report highlighted that it isn't always the promise of a higher salary that lures people away. Instead, the things employees were looking for were: feeling valued by the organization or by their immediate managers, a sense of belonging, and a flexible work schedule. In essence, employees were far more likely to prioritize relational factors, whereas employers were more likely to focus on transactional ones.

The past two years have certainly taken their toll on security teams, from the CISO down, and people are a little burnt out and stressed. SOC teams are on the front line of a company's defenses against cyberattacks – alert fatigue is real.

What to do: Work with your people team on an employee support plan, schedule regular check-ins with team members, and explore technological solutions like Spill.chat – full disclosure, it's what we use here at Tessian.
Highlight team achievements

SOC team members have a thirst for knowledge – they have to respond to attacks quickly in high-pressure situations. If they feel they haven't got the support and encouragement they need, both managerially and technologically, they'll walk. After all, it can be particularly demoralizing to devote eight hours a day to defending an organization when that defense is neither valued and acknowledged nor resourced sufficiently.

What to do: As the company's security leader, you have to beat the drum for your team's work and show the value it brings to the company. Remember, IBM's 'Cost of a Data Breach' report tells us the average cost of a breach is $4.24 million. Communicate that, whether it's at the all-hands or on a poster in the restrooms.
Automate and augment the mundane

The IBM Pollyanna Principle states that 'machines should work; people should think'. That means you should review your security orchestration, automation and response (SOAR) set-up periodically and see what can be automated. Things that automate well are repeatable manual tasks, threat investigations, triage of false positives, and creating reports. This Microsoft blog has some great tips on which security tasks and objectives you should automate, and why. After all, if attackers are automating many of their processes for increased efficiency, so should you.

What to do: Automating everyday tasks, from reporting to rooting out false positives, will help you and your team concentrate on the critical issues. Be realistic about what automation is capable of. With that expectation, focus on areas where augmentation can help the team make faster and better decisions. That's the winning formula.
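A trivially small example of the kind of task that automates well, under obviously hypothetical alert fields and a hand-tuned benign-source list: auto-close known false-positive patterns and roll the rest into a summary for the weekly report.

```python
# Hypothetical alert-triage sketch: auto-close known-benign sources and
# summarize what's left for the weekly report. Field names are invented.
from collections import Counter

KNOWN_BENIGN_SOURCES = {"internal-vuln-scan", "newsletter-link-scanner"}

def triage(alerts):
    auto_closed = [a for a in alerts if a["source"] in KNOWN_BENIGN_SOURCES]
    remaining = [a for a in alerts if a["source"] not in KNOWN_BENIGN_SOURCES]
    print(f"Auto-closed {len(auto_closed)} known-benign alerts")
    print("Open alerts by category:", dict(Counter(a["category"] for a in remaining)))
    return remaining

triage([
    {"source": "internal-vuln-scan", "category": "port-scan"},
    {"source": "email-gateway", "category": "phishing"},
    {"source": "email-gateway", "category": "phishing"},
])
```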
Reward growth   As Mike Privette said in our podcast, security is the one corporate function that should always be growing. As we explored in this article, one of the key factors in building out a security team is that people must have confidence that they can grow and gain value by staying within the organization. So as well as increasing the team in terms of overall size, prioritize elevating existing team members into more senior roles.   What to do: Have a clear understanding of individuals’ potential career progression within the organization. Work with your People team on highlighting future opportunities and creating growth plans for 6-12 months down the line.  
Make time for training, learning and development

As well as promotions and increased responsibilities for some team members, training across the team keeps everyone united and aligned. Training, in conjunction with things like automation, is most effective when you're looking to change behaviors, such as decreasing response times or improving triage.

For the fifth straight year, the ISSA and ESG cybersecurity survey reveals that 59% of cybersecurity professionals agree that while they try to keep up with cybersecurity skills development, job requirements often get in the way. As the survey notes, 'This training gap is quietly increasing cyber risks at your organization.'

What to do: Designate a baseline metric to improve upon, and design a training program that is focused, flexible, and able to meet that metric. If training lacks an objective and feels like a chore, people will treat it as one.

Finally, if people are dead set on leaving, the only thing you can do is wish them all the best. Infosec is a small world, and chances are your paths will cross again.
ATO/BEC
Everything You Need to Know About Tax Day Scams 2022
By Maddie Rosenthal
Wednesday, March 23rd, 2022
Only two things are certain in life: death and taxes. As 2022's Tax Day rolls around, making a payment to the IRS isn't the only thing you need to worry about. Tax Day phishing attacks can take many different forms. In the US, these attacks will use the Monday, April 18 deadline for filing your income tax return as bait. Meanwhile, in the UK, these attacks will use your potential tax refund as bait. But we're here to help. Here's what you need to look out for, and what to do if you're targeted by Tax Day scams.
What do Tax Day scams look like?

As is the case with other phishing and spear phishing attacks, bad actors will impersonate trusted brands and authorities and will, in some way, motivate you to act. In this article, we're exploring Tax Day scams that arrive via email. You may also receive phone calls or text messages from bad actors claiming that you're being investigated for tax fraud or have an overdue bill. They may also simply request more information from you, like your name and address, or bank account details. You shouldn't give any of this information away over the phone: government organizations will never call you or use recorded messages to demand payment. Now, let's take a closer look at some real scam examples.

Example 1: IRS Impersonation
What's wrong with this email?
- The IRS has said they never contact taxpayers by email, so any correspondence "from" them is illegitimate
- There is an extra "r" in "internal" in the sender's email address
- Email addresses from government agencies will always contain the top-level domain ".gov"
- There are spelling errors and inconsistencies in the text that you wouldn't expect from a government agency

Example 2: Tax-Preparation Software Impersonation
What's wrong with this email?
- While the sender's email address does contain the company name (Fast Tax), the top-level domain (.as) is unusual
- The sender is motivating the target to follow the embedded link by claiming their tax return is incomplete
- Upon hovering over the link, you'll see the URL is suspicious. Please note, though: a suspicious URL can still take you to a landing page that appears legitimate. These are called malicious websites.

Example 3: HMRC Impersonation
What's wrong with this email?
- While the Display Name, email template, logos, and language used in the email seem consistent with HMRC, the sender's email address contains the top-level domain ".net" instead of ".gov.uk"
- Upon hovering over the link, you'll see the URL is suspicious

Example 4: Client Impersonation
What's wrong with this email?
- Unfortunately, in this case, there are no obvious giveaways that this is a phishing scam. However, if Joe, the tax accountant in this scenario, knew he hadn't met or interacted with a woman named Karen Belmont, that could be a warning sign
- Individuals and organizations should always be wary of attachments and should have anti-malware and/or antivirus protection in place
- This example demonstrates the importance of having policies in place to verify clients beyond email. And remember, there's nothing wrong with being extra cautious this time of year.

Example 5: CEO Impersonation
What's wrong with this email?
- The sender's email address (@supplier-xyz.com) is inconsistent with the recipient's email address (@supplierxyz.com)
- The attacker is impersonating the CEO, hoping that the target will be less likely to question the request; this is a common social engineering tactic
- The attacker is using urgency, both in the subject line and the email copy, to motivate the target to act quickly
- Because this is a zero-payload attack (an attack that doesn't rely on a link or attachment to carry malware), anti-malware or anti-virus software wouldn't detect the scam
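The @supplier-xyz.com vs @supplierxyz.com trick can also be caught programmatically by measuring how close an unfamiliar sending domain is to domains you already trust. Below is a minimal sketch using the standard library's difflib; the trusted list and similarity threshold are illustrative choices, not recommendations from the article.

```python
# Sketch: flag sending domains that are near-matches of trusted domains.
# The trusted set and similarity threshold are illustrative choices.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"supplierxyz.com"}

def lookalike_of(domain, threshold=0.85):
    domain = domain.lower()
    for trusted in TRUSTED_DOMAINS:
        if domain != trusted and SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return trusted
    return None

print(lookalike_of("supplier-xyz.com"))   # "supplierxyz.com" -- a near-match
print(lookalike_of("unrelated.example"))  # None
```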
Who will be targeted by Tax Day scams?

From the examples above, you can see that cybercriminals will target a range of people with their Tax Day scams. Taxpayers, tax professionals, and businesses are all susceptible, and savvy hackers will use different tactics for each. Here's what you should look out for.

Taxpayers
- Attackers will be impersonating trusted government agencies like the IRS and HMRC, and third parties like tax professionals and tax software vendors
- Attackers will use coercive language and the threat of missed deadlines or promises of refunds to motivate their targets to act
- Many phishing emails contain a payload; this could be in the form of a malicious link or attachment

Tax professionals
- Attackers will be impersonating either existing clients/customers or prospects. In either case, they'll be pretending they need help with their tax return or tax refund
- Attackers will use the lure of new business or the threat of losing a customer to motivate their targets to act
- Many phishing emails contain a payload; this could be in the form of a malicious link or attachment

Businesses
- Attackers will be impersonating CEOs, HR representatives, Finance Directors, or other individuals or agencies who need access to sensitive tax information
- Attackers are strategic in their impersonations of people in positions of power; people are less likely to question their superiors
What do I do if I'm targeted by a Tax Day scam?

While it's true that attackers use different tactics and capitalize on different moments in time to trick their targets, individuals and businesses should always follow the same guidelines if they think they've received a phishing email:

- First and foremost, always, always, always check the sender. Confirm that the domain is legitimate and that the Display Name matches the email address. Be wary of any emails that aren't from a ".gov" address
- If anything seems unusual, do not follow or click links or download attachments (a simple link-checking sketch follows this section)
- Check for spelling errors or formatting issues. Be scrupulous! If anything feels off, proceed cautiously
- If the email appears to come from an individual you know and trust, like a colleague, customer, or client, reach out to the individual directly by phone, text, or a separate email thread
- If you're an employee who's been targeted, contact your line manager and/or IT team. Management should, in turn, warn the larger organization

The best way to avoid falling victim to one of these scams is to simply not provide any personal information until you verify with 100% certainty that you're communicating with a genuine agency, organization, or agent. Visit the organization's website via Google or your preferred search engine, find a support number, and ask them to confirm the request for information is valid.
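The "hover over the link" advice can also be automated in a crude way: compare the domain a link displays with the domain it actually points to. The sketch below uses a simple regex over an invented HTML snippet; real emails would need a proper HTML parser.

```python
# Crude sketch: flag links whose visible text shows one domain but whose
# href points somewhere else. The HTML snippet is an invented example.
import re
from urllib.parse import urlparse

def mismatched_links(html):
    flagged = []
    for href, text in re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', html):
        actual = urlparse(href).hostname or ""
        shown = urlparse(text if "://" in text else "http://" + text).hostname or ""
        if shown and actual and shown != actual:
            flagged.append((shown, actual))
    return flagged

print(mismatched_links('<a href="http://refund-irs.example.net/claim">www.irs.gov</a>'))
# [('www.irs.gov', 'refund-irs.example.net')]
```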
More resources

As a security start-up, we're committed to helping you stay safe. If you're looking for more information on Tax Day scams, consult the following government websites:
- Advice from the IRS
- Advice from HMRC

Looking for more advice about scams? Sign up to our newsletter below to get articles just like this, straight to your inbox.
Email DLP, Data Exfiltration
Insider Threat Examples: 17 Real Examples of Insider Threats
By Maddie Rosenthal
Tuesday, March 22nd, 2022
Insider Threats are a big problem for organizations across industries. Why? Because they’re so hard to detect. After all, insiders have legitimate access to systems and data, unlike the external bad actors many security policies and tools help defend against.   It could be anyone, from a careless employee to a rogue business partner.   That’s why we’ve put together this list of Insider Threat types and examples. By exploring different methods and motives, security, compliance, and IT leaders (and their employees) will be better equipped to spot them before a data breach happens.  
Types of Insider Threats

First things first, let's define what exactly an Insider Threat is. Insider Threats stem from people – whether employees, former employees, contractors, business partners, or vendors – with legitimate access to an organization's networks and systems who exfiltrate data for personal gain or accidentally leak sensitive information. The key here is that there are two distinct types of Insider Threat:

The Malicious Insider: Malicious Insiders knowingly and intentionally steal data. For example, an employee or contractor may exfiltrate valuable information (like Intellectual Property (IP), Personally Identifiable Information (PII), or financial information) for some kind of financial incentive, a competitive edge, or simply because they're holding a grudge after being let go or furloughed.

The Negligent Insider: Negligent Insiders are just your average employees who have made a mistake. For example, an employee could send an email containing sensitive information to the wrong person, email company data to personal accounts to do some work over the weekend, fall victim to a phishing or spear phishing attack, or lose their work device.
1. The employee who exfiltrated data after being fired or furloughed

Since the outbreak of COVID-19, 81% of the global workforce have had their workplace fully or partially closed. And, with the economy grinding to a halt, employees across industries have been laid off or furloughed. This has caused widespread distress. When you combine this distress with the reduced visibility of IT and security teams while employees work from home, you're bound to see more incidents involving Malicious Insiders. One such case involves a former employee of a medical device packaging company who was let go in early March 2020. By the end of March – and after he was given his final paycheck – Christopher Dobbins hacked into the company's computer network, granted himself administrator access, and then edited and deleted nearly 120,000 records. This caused significant delays in the delivery of medical equipment to healthcare providers.
2. The employee who sold company data for financial gain   In 2017, an employee at Bupa accessed customer information via an in-house customer relationship management system, copied the information, deleted it from the database, and then tried to sell it on the Dark Web. The breach affected 547,000 customers, and in 2018, after an investigation by the ICO, Bupa was fined £175,000.
3. The employee who stole trade secrets   In July 2020, further details emerged of a long-running insider job at General Electric (GE) that saw an employee steal valuable proprietary data and trade secrets. The employee, Jean Patrice Delia, gradually exfiltrated over 8,000 sensitive files from GE’s systems over eight years — intending to leverage his professional advantage to start a rival company.   The FBI investigation into Delia’s scam revealed that he persuaded an IT administrator to grant him access to files and that he emailed commercially-sensitive calculations to a co-conspirator. Having pleaded guilty to the charges, Delia faces up to 87 months in jail.   What can we learn from this extraordinary inside job? Ensure you have watertight access controls and that you can monitor employee email accounts for suspicious activity.
4. The employees who exposed 250 million customer records   Here’s an example of a “negligent insider” threat. In December 2019, a researcher from Comparitech noticed that around 250 million Microsoft customer records were exposed on the open web. This vulnerability meant that the personal information of up to 250 million people—including email addresses, IP addresses, and location—was accessible to anyone.   This incident represents a potentially serious breach of privacy and data protection law and could have left Microsoft customers open to scams and phishing attacks—all because the relevant employees failed to secure the databases properly.   Microsoft reportedly secured the information within 24 hours of being notified about the breach.
5. The nuclear scientists who hijacked a supercomputer to mine Bitcoin   Russian Secret Services reported in 2018 that they had arrested employees of the country’s leading nuclear research lab on suspicion of using a powerful supercomputer for bitcoin mining. Authorities discovered that scientists had abused their access to some of Russia’s most powerful supercomputers by rigging up a secret bitcoin-mining data center.   Bitcoin mining is extremely resource-intensive and some miners are always seeking new ways to outsource the expense onto other people’s infrastructure. This case is an example of how insiders can misuse company equipment.
6. The employee who fell for a phishing attack   While we’ve seen a spike in phishing and spear phishing attacks since the outbreak of COVID-19, these aren’t new threats. One example involves an email that was sent to a senior staff member at Australian National University. The result? 700 Megabytes of data were stolen.   That might not sound like a lot, but the data was related to both staff and students and included details like names, addresses, phone numbers, dates of birth, emergency contact numbers, tax file numbers, payroll information, bank account details, and student academic records.
7. The work-from-home employees duped by a vishing scam   Cybercriminals saw an opportunity when many of Twitter’s staff started working from home. One cybercrime group conducted one of the most high-profile hacks of 2020 — knocking 4% off Twitter’s share price in the process.   In July 2020, after gathering information on key home-working employees, the hackers called them up and impersonated Twitter IT administrators. During these calls, they successfully persuaded some employees to disclose their account credentials.   Using this information, the cybercriminals logged into Twitter’s admin tools, changed the passwords of around 130 high-profile accounts — including those belonging to Barack Obama, Joe Biden, and Kanye West — and used them to conduct a Bitcoin scam.   This incident put “vishing” (voice phishing) on the map, and it reinforces what all cybersecurity leaders know — your company must apply the same level of cybersecurity protection to all its employees, whether they’re working on your premises or in their own homes.
8. The ex-employee who got two years for sabotaging data   The case of San Jose resident Sudhish Kasaba Ramesh serves as a reminder that it’s not just your current employees that pose a potential internal threat—but your ex-employees, too.   Ramesh received two years imprisonment in December 2020 after a court found that he had accessed Cisco’s systems without authorization, deploying malware that deleted over 16,000 user accounts and caused $2.4 million in damage.   The incident emphasizes the importance of properly restricting access controls—and locking employees out of your systems as soon as they leave your organization.
9. The employee who took company data to a new employer for a competitive edge   This incident involves two of the biggest tech players: Google and Uber. In 2015, a lead engineer at Waymo, Google’s self-driving car project, left the company to start his own self-driving truck venture, Otto.   But, before departing, he exfiltrated several trade secrets including diagrams and drawings related to simulations, radar technology, source code snippets, PDFs marked as confidential, and videos of test drives.    How? By downloading 14,000 files onto his laptop directly from Google servers. Otto was acquired by Uber after a few months, at which point Google executives discovered the breach.   In the end, Waymo was awarded $245 million worth of Uber shares and, in March, the employee pleaded guilty.
10. The employee who stole a hard drive containing HR data   Coca-Cola was forced to issue data breach notification letters to around 8,000 employees after a worker stole a hard drive containing human resources records.   Why did this employee steal so much data about his colleagues? Coca-Cola didn’t say. But we do know that the employee had recently left his job—so he may have seen an opportunity to sell or misuse the data once outside of the company.   Remember – network and cybersecurity are crucial, but you need to consider whether insiders have physical access to data or assets, too.
11. The employees leaking customer data    Toward the end of October 2020, an unknown number of Amazon customers received an email stating that their email address had been “disclosed by an Amazon employee to a third-party.” Amazon said that the “employee” had been fired — but the story changed slightly later on, according to a statement shared by Motherboard which referred to multiple “individuals” and “bad actors.”   So how many customers were affected? What motivated the leakers? We still don’t know. But this isn’t the first time that the tech giant’s own employees have leaked customer data. Amazon sent out a near-identical batch of emails in January 2020 and November 2018.   If there’s evidence of systemic insider exfiltration of customer data at Amazon, this must be tackled via internal security controls.
12. The employee offered a bribe by a Russian national   In September 2020, a Nevada court charged Russian national Egor Igorevich Kriuchkov with conspiracy to intentionally cause damage to a protected computer. The court alleged that Kriuchkov attempted to recruit an employee of Tesla’s Nevada Gigafactory.   Kriuchkov and his associates reportedly offered a Tesla employee $1 million to “transmit malware” onto Tesla’s network via email or USB drive to “exfiltrate data from the network.” The Kriuchkov conspiracy was disrupted before any damage could be done. But it wasn’t the first time Tesla had faced an insider threat. In June 2018, CEO Elon Musk emailed all Tesla staff to report that one of the company’s employees had “conducted quite extensive and damaging sabotage to [Tesla’s] operations.”   With state-sponsored cybercrime syndicates wreaking havoc worldwide, we could soon see further attempts to infiltrate companies. That’s why it’s crucial to run background checks on new hires and ensure an adequate level of internal security.
13. The ex-employee who offered 100 GB of company data for $4,000   Police in Ukraine reported in 2018 that a man had attempted to sell 100 GB of customer data to his ex-employer’s competitors—for the bargain price of $4,000. The man allegedly used his insider knowledge of the company’s security vulnerabilities to gain unauthorized access to the data.   This scenario presents another challenge to consider when preventing insider threats—you can revoke ex-employees’ access privileges, but they might still be able to leverage their knowledge of your systems’ vulnerabilities and weak points.
14. The employee who accidentally sent an email to the wrong person   Misdirected emails happen more often than most people think. In fact, Tessian platform data shows that at least 800 misdirected emails are sent every year in organizations with 1,000 employees. But what are the implications? It depends on what data has been exposed.   In one incident in mid-2019, the private details of 24 NHS employees were exposed after someone in the HR department accidentally sent an email to a team of senior executives. This included mental health information and surgery information.   While the employee apologized, the exposure of PII like this can lead to medical identity theft and even physical harm to the patients. We outline even more consequences of misdirected emails in this article.
15. The employee who accidentally misconfigured access privileges   NHS coronavirus contact-tracing app details were leaked after documents hosted in Google Drive were left open for anyone with a link to view. Worse still, links to the documents were included in several others published by the NHS.   These documents, marked “SENSITIVE” and “OFFICIAL”, contained information about the app’s future development roadmap and revealed that officials within the NHS and the Department of Health and Social Care were worried about the app’s reliance and that it could be open to abuse, leading to public panic.
16. The security officer who was fined $316,000 for stealing data (and more!)   In 2017, a California court found ex-security officer Yovan Garcia guilty of hacking his ex-employer’s systems to steal its data, destroy its servers, deface its website, and copy its proprietary software to set up a rival company.   The cybercrime spree was reportedly sparked after Garcia was fired for manipulating his timesheet. Garcia received a fine of over $316,000 for his various offenses.   The sheer amount of damage caused by this one disgruntled employee is pretty shocking. Garcia stole employee files, client data, and confidential business information; destroyed backups; and even uploaded embarrassing photos of his one-time boss to the company website.
17. The employee who sent company data to a personal email account   We mentioned earlier that employees oftentimes email company data to themselves to work over the weekend.   But in this incident, an employee at Boeing shared a spreadsheet with his wife in hopes that she could help solve formatting issues. While this sounds harmless, it wasn’t. The personal information of 36,000 employees was exposed, including employee ID data, places of birth, and accounting department codes.
How common are Insider Threats?   Incidents involving Insider Threats are on the rise, with a marked 47% increase over the last two years. This isn’t trivial, especially considering the global average cost of an Insider Threat is $11.45 million. This is up from $8.76 million in 2018.   Who’s more culpable, Negligent Insiders or Malicious Insiders?
- Negligent Insiders (like those who send emails to the wrong person) are responsible for 62% of all incidents
- Negligent Insiders who have their credentials stolen (via a phishing attack or physical theft) are responsible for 25% of all incidents
- Malicious Insiders are responsible for 14% of all incidents
It’s worth noting, though, that credential theft is the most detrimental to an organization’s bottom line, costing an average of $2.79 million.   Which industries suffer the most? The “what, who, and why” behind incidents involving Insider Threats vary greatly by industry. For example, customer data is most likely to be compromised by an Insider in the Healthcare industry, while money is the most common target in the Finance and Insurance sector.   But who exfiltrated the data is just as important as what data was exfiltrated. The sectors most likely to experience incidents perpetrated by trusted business partners are:
- Finance and Insurance
- Federal Government
- Entertainment
- Information Technology
- Healthcare
- State and Local Government
Overall, though, when it comes to employees misusing their access privileges, the Healthcare and Manufacturing industries experience the most incidents.   On the other hand, the Public Sector suffers the most from lost or stolen assets and also ranks in the top three for miscellaneous errors (for example, misdirected emails) alongside Healthcare and Finance.   The bottom line: Insider Threats are a growing problem. We have a solution.
Read Blog Post
Cyber Skills Gap
There Isn’t a Cyber Skills Shortage, You’re Just Not Hiring and Retaining The Right People
By Josh Yavor
Friday, March 18th, 2022
The Cyberseek heatmap shows there are over 500,000 cyber job openings in the US alone, and over 3.5 million globally. With so many unfilled vacancies there must be a skills shortage, right? I’m not so sure. I think our perceived talent and skills shortage is largely self-inflicted because as an industry we’re sadly terrible at hiring, growing, and retaining people.  Too many organizations are chasing a finite number of senior-level people, which results in two critical problems. The first is self-inflicted: over the past decade as an industry, we have failed to grow enough people from entry and mid-level positions into senior level roles. The second is that many organizations believe they can only hire senior talent rather than grow and retain the talent they already have. If we don’t invest in people earlier in their career, we will never have the talent pool our collective job postings demand.
The problem with hiring only senior talent   We tend to spend a lot of time and energy looking for “unicorn hires”. These hires can take months of our energy and attention for each role. In aggregate, we risk incurring opportunity costs that prevent us from  growing a person – or several people – into these capabilities. Of course, the security industry is not the only offender. Many technical roles outside of security are subject to the same type of bad behavior. We allow ourselves to create job postings with requirements that are sometimes impossible – like requesting 10+ years experience in a technology that has literally only existed for five.    So why are situations like this happening? Despite good intentions, a recruitment team supporting a security team without enough investment of time and partnership from the engineering managers is going to get these things wrong. It’s not their fault, but a clear indication that we need to be better together.
I challenge hiring managers to answer this important question: what specific skills and experiences do “5-10 years of experience” actually mean to you?    When I ask this, one of two things happens: they either can’t answer it – which is a good indicator that it shouldn’t go in the job description – or they can, and this becomes the start of better job requirements. Chronological time doesn’t tell us all that much about someone’s capabilities, how they grew (or didn’t), or what they’re good at.   Instead, we should be focusing on things like core experiences, history of growth, skill sets, and capabilities. That’s what we should switch our requirements and expectation language to. So we should seek people who have specific experiences or capabilities, such as leading teams of a certain size, adapting to rapid change in a high-growth organization, or navigating significant technology migrations. These are more equitable, measurable, and useful capability assessments that don’t rule out qualified candidates by setting minimums for years of work experience.
Reminder: if a team runs itself for six months while you hire a manager, you shouldn't be hiring, you should be promoting. — Matt Wallaert (@mattwallaert) November 18, 2020  
The great resignation   We’ve covered the great resignation/re-evaluation/migration previously on this blog. But even before this movement, we were already seeing an average ‘in role’ time of just 18 to 36 months for many security individuals. That’s a high turnover, and The Great Migration has only increased it. Senior decision-makers across the US report an average security staff turnover rate of 20% according to research from ThreatConnect. Compare that to another study by Michael Booz that found that the global average for all roles was around 11%.
Organizations should be focused on what it takes to keep people longer. To retain people, there are two key factors. First, people must have confidence that they can grow and gain value by staying within the organization. Second, they need to be able to experience recognition, and crucially – rewards, for their increasing value both in the market and in their organization. Too often we prioritize budget for new hires when the best option is to invest in the people we already have on staff and reward them before someone else does.    In my experience, not enough is done during the first two years of employment to give employees confidence that there is an ongoing trajectory for them in terms of growth, recognition, and rewards. And by the time we get to that two-year point, the first time that the organization hears about it is when they’re getting the resignation letter.    Sadly that is THE WORST time to attempt a growth and rewards conversation.
Creating a better pipeline   Of course, as people level up and grow into new roles, you need new recruits. But many security leaders are reluctant to have their teams be the first stop in someone’s security career. However, there are plenty of security roles that are great places to get a start in security while applying relevant and overlapping skills from previous non-security roles.    There are very few cases where significant skill transfer from non-security to security roles is not possible. Some of the more obvious examples are IT system administrators becoming enterprise security engineers, software developers being successful in product security roles, etc. We need to look beyond these examples and expand our mapping of critical skills and capabilities to additional roles and backgrounds. Some of the most talented security professionals in our industry today come from much more diverse backgrounds. Some went to university to study linguistics, art, or math, and many never pursued higher education.
Your next security hire could come from customer success, marketing, or human resources   One of the things we need to be more conscious of is that security roles don’t just need technical skill sets. In fact, training people up in specific technical skills is relatively easy to do. Instead, we should be optimizing security roles for people who are making a job transition. Security teams can benefit hugely from the things that are NOT easy to train people up on, like emotional intelligence, personal relationship management, and communication skills.   I’ve done this myself. I supported hiring someone with a background in customer service for a security operations role. 90% of the job is still based on providing effective customer service and rapidly triaging problems to identify the most appropriate solutions; it’s just a different set of customers and problems. We can train people on how to use our technology and how to think about security. What’s much harder is training people to be effective communicators with empathy and the high emotional intelligence to provide exceptional outcomes while supporting people.    I’ll finish how I started, by saying again that there isn’t necessarily a skills shortage in many cybersecurity roles. We’re just setting the requirements poorly, largely ignoring retention, failing to take advantage of skill transference opportunities from non-security roles, and not giving people the opportunity to grow. Want to join us at Tessian and start or develop your security career? Check out our open roles. What’s it like to work here? Here are 200 reasons why you’ll love it. Want to find out more about diversity and the cyber skills gap? Register for our upcoming LinkedIn Live.
Read Blog Post
Email DLP
What is Data Loss Prevention (DLP)? Complete Overview of DLP
Thursday, March 17th, 2022
How does DLP work?   Put simply, DLP software monitors different entry and exit points (examples below) to “look” for data and keep it safe and sound inside the organization’s network. A properly configured DLP solution can detect when sensitive or important data is leaving a company’s possession, alert the user and, ultimately, stop data loss.   A DLP solution has three main jobs. DLP software:
- Monitors and analyzes data while at rest, in motion, and in use.
- Detects suspicious activity or anomalous network traffic.
- Blocks or flags suspicious activity, preventing data loss.
Those entry and exit points we mentioned earlier include computers, mobile devices, email clients, servers, and mail gateways. Different types of DLP solutions are required to safeguard data in these environments.
What are the different types of DLP?   DLP software can monitor and safeguard data in three states:
- Data in motion (or “in transit”): Data that is being sent or received by your network
- Data in use: Data that a user is currently interacting with
- Data at rest: Data stored in a file or database that is not moving or in use
There are three main types of DLP software designed to protect data in these different states.
Network data loss prevention   Network DLP software monitors network traffic passing through entry and exit points to protect data in motion. Network DLP scans all data passing through a company’s network. If it’s working properly, the software will detect sensitive data exiting the network and flag or block it while allowing other data to leave the network unimpeded where appropriate. Network administrators can customize network DLP software to block certain types of data from leaving the network by default or—by contrast—whitelist specific file types or URLs.
Endpoint data loss prevention   Endpoint DLP monitors data on devices and workstations, such as computers and mobile devices, to protect data in use. The software can monitor the device and detect a range of potentially malicious actions, including:
- Printing a document
- Creating or renaming a file
- Copying data to removable media (e.g. a USB drive)
Such actions might be completely harmless—or they might be an attempt to exfiltrate confidential data. Effective endpoint DLP software (but not all endpoint DLP software) can distinguish between suspicious and non-suspicious activity.
Email data loss prevention   Email is the primary threat vector for most businesses, and the threat vector most security leaders are concerned about locking down with their DLP strategy. Email represents a potential route straight through your company’s defenses for anyone wishing to deliver a malicious payload. And it’s also a way for insiders to send data out of your company’s network—whether by accident or on purpose.   Email DLP can therefore protect against some of the most common and serious causes of data loss, including:
- Email-based cyberattacks, such as phishing
- Malicious exfiltration of data by employees (also called insider threats)
- Accidental data loss (for example, sending an email to the wrong person or attaching the wrong file)
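To illustrate the monitor, detect, and block loop described above, here’s a short, hypothetical sketch of rule-based email DLP in Python. The internal domain, the patterns, and the policy are invented for this example; production DLP products rely on far richer signals than keyword matching.

```python
# Hypothetical sketch of the three DLP jobs (monitor, detect, block/flag)
# applied to outbound email. Rules and names are invented for illustration.
import re
from dataclasses import dataclass

@dataclass
class OutboundEmail:
    sender: str
    recipients: list[str]
    subject: str
    body: str

INTERNAL_DOMAIN = "example.com"  # assumption: your own email domain
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # looks like a US Social Security number
    re.compile(r"(?i)\bconfidential\b"),   # content marked confidential
]

def inspect(email: OutboundEmail) -> str:
    """Return 'allow', 'flag', or 'block' for a single outbound message."""
    external = [r for r in email.recipients
                if not r.lower().endswith("@" + INTERNAL_DOMAIN)]
    sensitive = any(p.search(email.subject) or p.search(email.body)
                    for p in SENSITIVE_PATTERNS)
    if sensitive and external:
        return "block"  # sensitive data about to leave the network
    if sensitive:
        return "flag"   # sensitive data moving internally: log for review
    return "allow"

msg = OutboundEmail("alice@example.com", ["partner@gmail.com"],
                    "Q3 forecast", "Confidential: draft numbers attached")
print(inspect(msg))  # block
```

The obvious weakness is the one discussed later in this post: static rules like these block legitimate mail and miss anything that doesn’t match a pattern, which is why the article goes on to compare rule-based approaches with machine learning.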
Does my company need a data loss prevention solution?   Almost certainly. DLP is a top priority for security leaders across industries and DLP software is a vital part of any organization’s security program.   Broadly, there are two reasons to implement an effective data loss prevention solution:
- Protecting your customers’ and employees’ personal information. Your business is responsible for all the personal information it controls. Cyberattacks and employee errors can put this data at risk.
- Protecting your company’s non-personal data. DLP can thwart attempts to steal intellectual property, client lists, or financial data.
Want to learn more about how and why other organizations are leveraging DLP? We explore employee behavior, the frequency of data loss incidents, and the best (and worst) solutions in this report: The State of Data Loss Prevention.   Now let’s look at the practical ways DLP software can benefit your business.
What are the benefits of DLP?   There are 4 main benefits of data loss prevention, which we’ll unpack below:
- Protecting against external threats (like spear phishing attacks)
- Protecting against internal threats (like insider threats)
- Protecting against accidental data loss (like accidentally sending an email to the wrong person)
- Compliance with laws and regulations
Protecting against external threats   External security threats are often the main driver of a company’s cybersecurity program—although, as we’ll see below, they’re far from the only type of security threat that businesses are concerned about.   Here are some of the most significant external threats that can result in data loss:
- Phishing: Phishing is the most common online crime—and according to the latest FBI data, phishing rates doubled in 2020. Around 96% of phishing attacks take place via email.
- Spear phishing: A phishing attack targeting a specific individual. Spear phishing attacks are more effective than “bulk” phishing attacks and can target high-value individuals (whaling) or use advanced impersonation techniques (CEO fraud).
- Ransomware: A malicious actor encrypts company data and forces the company to pay a ransom to obtain the key. Cybercriminals can use various methods to undertake cyberattacks, including malicious email attachments or links and exploit kits.
DLP can prevent these external threats by preventing malicious actors from exfiltrating data from your network, storage, or endpoints.
Protecting against internal threats   Malicious employees can use email to exfiltrate company data. This type of insider threat is more common than you might think.   Verizon research shows how employees can misuse their company account privileges for malicious purposes, such as stealing or providing unauthorized access to company data. This problem is most significant in the healthcare and manufacturing industries.   Why would an employee misuse their account privileges in this way? In some cases, they’re working with outsiders. In others, they’re stealing data for their own purposes. For more information, read our 11 Real Examples of Insider Threats.   The difficulty is that your employees often need to send files and data outside of your company for perfectly legitimate purposes.   Thankfully, next-generation DLP can use machine learning to distinguish and block suspicious activity—while permitting data to leave your network where necessary.
Preventing accidental data loss   Human error is a widespread cause of data loss, but security teams sometimes overlook it.
In fact, misdirected emails—where a person sends an email to the wrong recipient—are the most common cause of data breaches, according to the UK’s data protection regulator.   Tessian platform data bears this out. In organizations with 1,000 or more employees, people send an average of 800 misdirected emails every year.   Misdirected emails take many forms. But any misdirected email can result in data loss—whether through accidentally clicking “reply all”, attaching the wrong file, accepting an erroneous autocomplete, or simply spelling someone’s email address wrong.
Compliance with laws and regulations   Governments are more and more concerned about data privacy and security. Data protection and cybersecurity regulations are increasingly demanding—and failing to comply with them can incur increasingly severe penalties.   Implementing a DLP solution is an excellent way to demonstrate your organization’s compliance efforts with any of the following laws and standards:
- General Data Protection Regulation (GDPR): Any company doing business in the EU, or working with EU clients or customers, must comply with the GDPR. The regulation requires all organizations to implement security measures to protect the personal data in their control.
- California Consumer Privacy Act (CCPA): The CCPA is one example of the many state privacy laws emerging across the U.S. The law requires businesses to implement reasonable security measures to guard against the loss or exfiltration of personal information.
- Sector-specific regulations: Tightly regulated sectors are subject to privacy and security standards, such as the Health Insurance Portability and Accountability Act (HIPAA), which covers healthcare providers and their business associates, and the Gramm-Leach-Bliley Act (GLBA), which covers financial institutions.
- Cybersecurity frameworks: Compliance with cybersecurity frameworks, such as the NIST Framework, CIS Controls, or ISO 27000 Series, is an important way to demonstrate high standards of data security in your organization. Implementing a DLP solution is one step towards certification with one of these frameworks.
Bear in mind that, in certain industries, individual customers and clients will have their own regulatory requirements, too.
Do DLP solutions work?   We’ve looked at the huge benefits that DLP software can bring your organization. But does DLP actually work? Some do, but not all.   Effective DLP software works seamlessly in the background, allowing employees to work uninterrupted, but stepping in to prevent data loss whenever necessary. Likewise, it’s easy for SOC teams to manage.   Unfortunately, some DLP solutions still rely on legacy features that either fail to prevent loss effectively, create too much noise for security teams, or are too cumbersome to let employees work unimpeded. Let’s take a look at some DLP methods and weigh up the pros and cons of each approach.
Blacklisting domains   IT administrators can block certain domains associated with malicious activity, for example, “freemail” domains such as gmail.com or yahoo.com. Blacklisting entire domains, particularly popular (if problematic) domains, is not ideal. There may be good reasons to communicate with someone using a freemail address—for example, if they are a customer, contractor, or a potential client.
Tagging sensitive data   Some DLP software allows users to tag certain types of sensitive data.
For example, you may wish to block activity involving any file containing a 16-digit number (which might be a credit card number). But this rigid approach doesn’t account for the dynamic nature of sensitive data. In certain contexts, a 16-digit number might not be associated with a credit card. Or an employee may be using credit card data for legitimate purposes.
Implementing rules   Rule-based DLP uses “if-then” statements to block types of activities, such as “If an employee uploads a file of 10MB or larger, then block the upload and alert IT.” The problem here is that, like the other “data-centric” solutions identified above, rule-based DLP often blocks legitimate activity and allows malicious activity to occur unimpeded.
Machine learning   Tessian Cloud Email Security intelligently prevents advanced email threats and protects against data loss, to strengthen email security and build smarter security cultures in modern enterprises. Here’s how it works: machine learning technology learns how people, teams, and customers communicate and understands the context behind every interaction with data.   By analyzing the evolving patterns of human interactions, machine learning DLP constantly reclassifies email addresses according to the relationship between a business and customers, suppliers, and other third parties.
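As a hypothetical illustration of the 16-digit-number rule discussed above, the sketch below flags candidate card numbers and then applies a Luhn checksum to cut down on false positives. Even with the checksum, the rule has no sense of context (a legitimate payment to a supplier looks the same as exfiltration), which is exactly the limitation that pushes DLP towards the machine learning approach described above.

```python
# Hypothetical sketch of a rigid, rule-based check: flag 16-digit numbers,
# then keep only those that pass a Luhn checksum (as payment cards do).
import re

def luhn_valid(digits: str) -> bool:
    total, double = 0, False
    for d in reversed(digits):
        n = int(d)
        if double:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        double = not double
    return total % 10 == 0

def possible_card_numbers(text: str) -> list[str]:
    candidates = re.findall(r"\b\d{16}\b", text)
    return [c for c in candidates if luhn_valid(c)]

print(possible_card_numbers("Order ref 1234567812345678, card 4539578763621486"))
# ['4539578763621486'] – only the second number passes the checksum,
# but an order ID or tracking number can pass it too.
```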
Read Blog Post
ATO/BEC
What is Email Impersonation? Everything You Need to Know
Wednesday, March 16th, 2022
Email impersonation might not be the most sophisticated phishing method, but it’s simple, it’s widespread, and it can be devastating. Here’s why…
Email impersonation vs. email spoofing vs. account takeover   First, we need to describe “email impersonation” and distinguish it from some closely-related concepts.
- Email impersonation: The attacker sets up an email address that looks like a legitimate email address (e.g. bill.gates@micr0soft.com – note the zero instead of an o in the domain name).
- Email spoofing: A technical process where the attacker modifies an email’s headers so the receiving email client displays a false email address (the sender’s email address is “fraudster@cybercrime.com,” but the recipient sees “billgates@microsoft.com” in their inbox).
- Account takeover: The attacker gains access to another person’s account (using hacking or stolen credentials) and uses it to send phishing emails.
Email spoofing and account takeover require some technical ability (or, at least, access to the dark web). With email impersonation, though, the attacker just needs to secure a domain that looks like it could belong to a legitimate business.   This is easy (and cheap!) with domain registrars like GoDaddy. We explore different types of impersonation techniques below.
Phishing methods that use email impersonation   Cybercriminals can use email impersonation to facilitate any type of email-based phishing attack. There are some types of phishing in which email impersonation is particularly common, including:
- Business Email Compromise (BEC) — Impersonating a business
- CEO fraud — Impersonating a company executive and targeting one of their employees
- Whaling — Targeting a company executive
These are all among the more sophisticated and targeted types of phishing attacks. These types of attacks must employ email impersonation, email spoofing, or account takeover to be successful.
Types of email impersonation   Now we’ll look at the various ways a cybercriminal can impersonate an email address. To understand these, you’ll need to know about the different parts of an email address:
Each of these elements of an email address is relevant to a different type of email impersonation.   Root domain-based email impersonation   A company’s root domain is usually the most distinctive part of its email address. It’s the part immediately before the top-level domain (e.g. “.com”) — the “Amazon” in “info@amazon.com”.   Root domain impersonation involves creating a root domain using replacement characters, so it looks like an email has arrived from a legitimate company. Here’s an example:
In this root domain impersonation, the attacker has replaced the “l” in “external” and “supplier” with a “1”. At first glance, the recipient might not notice this, and they might treat the email as though it has come from “External Supplier.”
Top-level domain-based email impersonation   The top-level domain is the part after the root domain: e.g., “.com”, “.jp”, or “.net”. The top-level domain usually denotes a country or a type of organization. For example:
- .com — Commercial organizations
- .uk — Internet country code for the UK
- .gov — US government agency
Sometimes, a second-level domain accompanies a top-level domain:
- .co.uk — Commercial organization from the UK
- .ac.jp — Higher education institution from Japan
- .waw.pl — Organization from Warsaw, Poland
Using top-level domain impersonation, a cybercriminal can create an authentic-looking email address that the recipient might assume belongs to a legitimate organization (if they even notice it).   Here’s an example:
Here we have “externalsupplier.io” imitating “externalsupplier.com”. The top-level domain “.io” is actually registered to British Indian Ocean Territory (BIOT), but Google recognizes it as “generic” because many non-BIOT organizations use it.
Subdomain-based email impersonation   A subdomain appears after the “@” sign, but before the root domain. For example, in “info@mail.amazon.com”, the subdomain is “mail”. Most email addresses don’t have a subdomain.   An attacker can use subdomains to impersonate a legitimate company in two main ways:
- Using a company’s name as a subdomain to the attacker’s domain. For example, in “info@amazon.mailerinfo.com”, “amazon” is the subdomain and “mailerinfo” is the domain.
- Splitting a company’s name across a subdomain and domain.
Here’s an example of the second type of subdomain impersonation:
Display name impersonation   A display name is how an email client shows a sender’s name. You can choose your display name when you sign up for an email account. We explore display name impersonation in more detail in this article: How to Impersonate a Display Name.   Display name impersonation exploits a bad habit of mobile email clients. On mobile, common email clients like Outlook and Gmail only display a sender’s display name by default. They don’t display the sender’s email address.    So, even an email address like “cybercriminal@phishing.com” might show as “Amazon Customer Services” in your mobile email client — if that’s the display name that the attacker selected when setting up the account.   But this isn’t a mobile-only problem. According to new research, just 54% of employees even look at the email address of a sender before responding or actioning a request. This is good news for attackers, and bad news for businesses.      Username impersonation   The username is the part of the email address that appears before the “@” symbol. For example, in “bill.gates@microsoft.com”, the username is “bill.gates”.   Username impersonation is the least sophisticated form of email impersonation, but it can still work on an unsuspecting target. This technique is sometimes called “freemail impersonation,” because scammers can register false usernames with Gmail or Yahoo.    With this technique, they can create accounts that look like they could belong to your CEO, CFO, or another trusted person in your network.  Here’s an example:
More resources on email impersonation   Now that you know the basic techniques behind email impersonation, read our articles on preventing email impersonation, CEO fraud, and Business Email Compromise to find out how to protect your business from these cyberattacks.   You can also learn how Tessian detects and prevents advanced impersonation attacks by reading our customer stories or booking a demo. Not quite ready for that? Sign up for our newsletter below instead. You’ll be the first to know about new research and events and get helpful checklists and how-to guides straight to your inbox.
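Finally, if you want to experiment with the address anatomy covered above, here’s a small, hypothetical Python sketch that splits an address into username, subdomains, root domain, and top-level domain, then runs a crude lookalike check. The trusted domain and the character substitutions are invented for this example; real impersonation detection (Tessian’s included) uses far more than character swaps.

```python
# Hypothetical sketch: split an email address into its parts and flag simple
# character-substitution and subdomain-based lookalikes of a trusted domain.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})
TRUSTED_ROOT_DOMAINS = {"externalsupplier"}  # assumption: suppliers you know

def split_address(address: str):
    username, _, host = address.partition("@")
    labels = host.lower().split(".")
    # Naive split: last label = TLD, second-to-last = root domain, rest = subdomains.
    # (Real parsing should use the Public Suffix List to handle e.g. ".co.uk".)
    tld, root = labels[-1], labels[-2]
    subdomains = labels[:-2]
    return username, subdomains, root, tld

def looks_like_impersonation(address: str) -> bool:
    _, subdomains, root, _ = split_address(address)
    normalized = root.translate(HOMOGLYPHS)
    # Flag character-swapped root domains, and trusted names pushed into a subdomain.
    return (normalized in TRUSTED_ROOT_DOMAINS and root not in TRUSTED_ROOT_DOMAINS) \
        or any(s.translate(HOMOGLYPHS) in TRUSTED_ROOT_DOMAINS for s in subdomains)

print(looks_like_impersonation("invoices@externa1supp1ier.com"))            # True (1 swapped for l)
print(looks_like_impersonation("billing@externalsupplier.mailerinfo.com"))  # True (subdomain trick)
print(looks_like_impersonation("info@externalsupplier.com"))                # False
```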
Read Blog Post