Request a Demo of Tessian Today.
Automatically stop data breaches and security threats caused by employees on email. Powered by machine learning, Tessian detects anomalies in real time, integrates seamlessly with your email environment within minutes, and starts protecting in a day. It gives you unparalleled visibility into human security risks so you can remediate threats and ensure compliance.
Human Layer Security
Check out the Speaker Line-Up for Tessian Human Layer Security Summit!
By Maddie Rosenthal
Friday, February 5th, 2021
On March 3, Tessian is hosting the first Human Layer Security Summit of 2021. And, after a hugely successful series of summits in 2020, we’re (once again) putting together an agenda that’ll help IT, compliance, legal, and business leaders overcome the security challenges of today and tomorrow.
What’s on the agenda for Human Layer Security Summit?

Panel discussions, fireside chats, and presentations will be focused on solving three key problems:

- Staying ahead of hackers to prevent advanced email threats like spear phishing, account takeover (ATO), and CEO fraud
- Reducing risk over time by building a strong security culture
- Building future-proof security strategies that engage everyone, from employees to the board

So, who will be sharing their expertise to help you overcome these problems? 20+ speakers and partners. Some of the best and the brightest in the field. If you want to learn more about what to expect, you can watch sessions from previous summits on-demand here.

Who’s speaking at Human Layer Security Summit?

While we don’t want to give all the surprises away just yet, we will share a sneak peek at 11 speakers. Make sure to follow us on LinkedIn and Twitter and subscribe to our newsletter for the latest updates, including detailed information about each of the nine sessions.

Elsa Ferriera, CISO at Evercore: For nearly 10 years, Elsa has managed risks, audited business processes, and maintained security at Evercore, one of the most respected investment banking firms in the world.

Gaynor Rich, Global Director of Cybersecurity Strategy at Unilever: Well-known for her expertise in cybersecurity, data protection, and risk management, Gaynor brings over 20 years of experience to the summit, the last six of which have been spent at one of the most universally recognized brands: Unilever.

Samy Kamkar, Renowned Ethical Hacker: As a teenager, Samy released one of the fastest-spreading computer viruses of all time. Now, he’s a compassionate advocate for young hackers, a whistleblower, and a privacy and security researcher.
Marie Measures, CTO at Sanne Group: With over two decades of experience in the field, Marie has headed up information and technology at Capital One, Coventry Building Society, and now Sanne Group, the leading provider of alternative asset and corporate services.

Joe Mancini, SVP, Enterprise Risk at BankProv: Joe is the Senior Vice President of Enterprise Risk at BankProv, an innovative commercial bank headquartered in Amesbury, MA. Joe has implemented a forward-thinking, business-enabling risk management strategy at BankProv that allows the fast-paced organization to safely expand its products and services to better suit its growing client base. Prior to his role at BankProv, he spent several years as the CISO at Radius Bank and ten years at East Boston Savings Bank in various risk-related roles. Joe is an expert in emerging technologies such as digital currency and blockchain, along with data security and risk and compliance requirements in a digital world.

David Aird, IT Director at DAC Beachcroft: Having held the position of IT Director at DAC Beachcroft – one of the top 20 UK law firms – for nearly eight years, David has led the team to be named Legal Technology Team of the Year in 2019 and to receive awards in both 2017 and 2019 for Excellence in IT Security.

Dan Raywood, Former Security Analyst and Cybersecurity Journalist: Dan – the former Deputy Editor of Infosecurity Magazine and former analyst for 451 Research – is bringing decades of experience to the summit.

Jenny Radcliffe, “The People Hacker”: Jenny is a world-renowned social engineer, penetration tester, speaker, and the host of the Human Factor Security podcast.

Patricia Patton, Executive Coach: Patricia is the former Global Head of Professional Development at Barclays and Executive Coach at LinkedIn. Her expertise will help security leaders forge better relationships with people and teams and teach attendees how to lead and influence.
Nina Schick, Deepfake Expert: Nina is an author, broadcaster, and advisor who specializes in AI and deepfakes. Over the last decade, she’s worked with Joe Biden, President of the United States, and has contributed to Bloomberg, CNN, TIME, and the BBC.

Annick O’Brien, Data Protection Officer and Cyber Risk Officer: As an international compliance lawyer, certified Compliance Officer (ACCOI), member of the IAPP, and registered DPO, Annick specializes in privacy, GDPR program management, and training awareness projects.

Don’t miss out. Register for the Human Layer Security Summit now. It’s online, it’s free, and – for anyone who can’t make it on the day – you’ll be able to access all the sessions on-demand.
A word about our sponsors

We’re thrilled to share a list of sponsors who are helping make this event the best it can be: Digital Shadows, Detectify, The SASIG, Mishcon de Reya, HackerOne, AusCERT, and more. Stay tuned for more announcements and resources leading up to the event.
Read Blog Post
DLP, Data Exfiltration
12 Examples of Data Exfiltration
By Maddie Rosenthal
Wednesday, February 3rd, 2021
Over the past two years, 90% of the world’s data has been generated. And, as the sheer volume of data continues to grow, organizations are becoming more and more susceptible to data exfiltration.

But why would someone want to exfiltrate data? Data is valuable currency. From an e-commerce business to a manufacturing company, organizations across industries hold sensitive information about the business, its employees, customers, and clients.

What is data exfiltration?

Simply put, data exfiltration is the movement of sensitive data from inside the organization to outside without authorization. This can happen either accidentally or deliberately. The consequences of data exfiltration aren’t just lost data. A breach means reputational damage, lost customer trust, and fines. The best way to illustrate the different types of data exfiltration and the impact these incidents have on businesses is with examples.

Examples of data exfiltration

When it comes to data exfiltration, there are countless motives and methods. But you can broadly group attempts into two categories: data exfiltration by someone within the organization (for example, a disgruntled or negligent employee) and data exfiltration by someone outside the organization (for example, a competitor).

Data exfiltration by insiders

Data exfiltration by an insider means that company data has been shared by a member of the company with people (or organizations) outside of the company. While most organizations have security software and policies in place to prevent insider threats from moving data outside of the office environment and outside of company control, insiders have easy access to company data, may know workarounds, and may have the technical know-how to infiltrate “secure” systems.

Here are six examples of data exfiltration by insiders:

Over the course of 9 months, an employee at Anthem Health Insurance forwarded 18,500 members’ records to a third-party vendor.
These records included Personally Identifiable Information (PII) like Social Security numbers, last names, and dates of birth.

Police seized the computer equipment of an employee who had exfiltrated nearly 100 GB of data from an unnamed financial company that offered loan services to Ukrainian citizens. Police later found out the suspect was planning to sell the data to a representative of one of his former employer’s competitors for $4,000.

Not all examples of data exfiltration are malicious, though. Some breaches happen inadvertently, like when an employee leaving the Federal Deposit Insurance Corporation (FDIC) accidentally downloaded data for 44,000 FDIC customers onto a personal storage device and took it out of the agency.

Jean Patrice Delia exfiltrated over 8,000 files from his employer, General Electric (GE), over eight years. Delia hoped to set up a rival company using insider secrets. The FBI investigation into Delia’s scam began in 2016. Details released in July 2020 showed how Delia persuaded a GE IT administrator to grant him privileged systems access — and emailed commercially sensitive documents to a co-conspirator.

On three occasions — in November 2018, January 2020, and October 2020 — Amazon emailed customers to inform them that an insider had disclosed their personal information (usually email addresses) to a third party. Amazon hasn’t been very forthcoming about the details of these incidents, but there appears to be a pattern of insider data exfiltration emerging — which should be a serious concern for the company.

After a data exfiltration near-miss, a Nevada court charged Egor Igorevich Kriuchkov with “conspiracy to intentionally cause damage to a protected computer” in September 2020. Kriuchkov attempted to bribe a Tesla employee to “transmit malware” onto Tesla’s network via email or USB drive to “exfiltrate data from the network.” The FBI disrupted the scheme, which could have caused serious damage to one of the world’s leading companies.
Exfiltration by outsiders

Unlike exfiltration by insiders, exfiltration by outsiders means that someone from outside an organization has stolen valuable company data.

Here are six examples of data exfiltration by outsiders:

In 2014, eBay suffered a breach that impacted 145 million users. In this case, cybercriminals gained unauthorized access to eBay’s corporate network through a handful of compromised employee log-in credentials. At the time, it was the second-biggest breach of a U.S. company based on the number of records accessed by hackers.

Stealing login credentials isn’t the only way bad actors can gain access to a network. In 2019, malware was discovered on Wawa payment processing servers. This malware harvested the credit card data of over 30 million customers, including card number, expiration date, and cardholder name.

Did you know? 91% of data breaches start with a phishing email. While many phishing emails direct targets to wire money, pay an invoice, or provide bank account details, some request sensitive employee or client information, for example, W-2 forms. You can read more about Tax Day scams on our blog.

In February 2021, Talos Intelligence researchers discovered a new variant of the “Masslogger” Trojan. Masslogger is a perfect example of how cybercriminals can use malware to exfiltrate data from online accounts. This new Masslogger variant arrives via a phishing email with “a legitimate-looking subject line” containing a malicious email attachment. The Trojan targets platforms like Discord, Outlook, Chrome, and NordVPN, using “fileless” attack methods to exfiltrate credentials.

In October 2020, the UK’s Information Commissioner’s Office (ICO) fined British Airways (BA) £20 million ($28 million) after attackers exfiltrated customers’ data, including credit card numbers, names, and addresses. This massive data breach started in June 2018, when attackers installed malicious code on BA’s website.
The ICO held BA fully responsible for the breach, which affected over 400,000 customers.

Healthcare company Magellan Health discovered in April 2020 that hackers had exfiltrated sensitive customer data, including names, tax IDs, and Social Security numbers. The breach started with a phishing email that an employee had received five days earlier. This data exfiltration incident occurred just months after Magellan announced a similar phishing attack that exposed 50,000 customer records from its subsidiary companies.

Looking for more information about data exfiltration or data loss prevention? Follow these links:

What is Data Exfiltration?
Tips for Preventing Data Exfiltration Attacks
What is Data Loss Prevention (DLP)? A Complete Overview of DLP on Email
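Several of the incidents above involve sensitive data leaving a company over email, either to a personal account or to an outside party. As a purely illustrative sketch (the domain list and the pattern below are hypothetical examples, not any vendor’s actual rules), a simple outbound-email check might flag messages addressed to free-mail domains or containing something that looks like a Social Security number:

```python
import re

# Hypothetical examples of personal/free-mail domains an organization might watch.
FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}

# A rough US Social Security number pattern (illustrative only).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def check_outbound(recipients, body):
    """Return a list of reasons an outbound email might be exfiltration."""
    reasons = []
    for addr in recipients:
        # Compare the recipient's domain against the watch list.
        domain = addr.rsplit("@", 1)[-1].lower()
        if domain in FREEMAIL_DOMAINS:
            reasons.append(f"recipient {addr} uses a personal email domain")
    if SSN_PATTERN.search(body):
        reasons.append("body appears to contain a Social Security number")
    return reasons

print(check_outbound(["me@gmail.com"], "Customer SSN: 078-05-1120"))
```

Real DLP products layer many more signals on top of rules like these, but even this toy check would have surfaced the “forwarding records to a personal account” pattern seen in the insider examples.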
Read Blog Post
DLP, Compliance
14 Biggest GDPR Fines of 2020 and 2021 (So Far)
Wednesday, February 3rd, 2021
Since the GDPR (General Data Protection Regulation) came into effect in May 2018, countless organizations have made headlines for violations. British Airways, Marriott International Hotels, Austrian Post… but what about in 2020 and 2021?

According to research from DLA Piper, between January 26, 2020, and January 27, 2021:

- GDPR fines rose by nearly 40%
- Penalties under the GDPR totaled €158.5 million ($191.5 million)
- Data protection authorities recorded 121,165 data breach notifications (19% more than the previous 12-month period)

The UK’s Data Protection Authority, the Information Commissioner’s Office (ICO), recently published data covering July 1, 2020, to October 31, 2020. The ICO’s data shows:

- The ICO received 2,594 data breach notifications.
- The most common cybersecurity incident was phishing.
- As usual, the most common cause of data breaches was misdirected email.

Keep reading to find out which organizations have been slapped with the biggest fines, why, and how the violations could have been prevented. Looking for information about achieving and maintaining compliance? We explore solutions for reducing email risk (the #1 threat vector according to security leaders) on this page.
The biggest GDPR fines of 2020 and 2021 (so far)

1. Google – €50 million ($56.6 million)

Although Google’s fine is technically from 2019, the company appealed against it. In March 2020, judges at France’s top court for administrative law dismissed Google’s appeal and upheld the eye-watering penalty.

How the violation(s) could have been avoided: Google should have provided more information to users in consent policies and should have granted them more control over how their personal data is processed.

2. H&M — €35 million ($41 million)

On October 5, 2020, the Data Protection Authority of Hamburg, Germany, fined clothing retailer H&M €35,258,707.95 — the second-largest GDPR fine ever imposed. H&M’s GDPR violations involved the “monitoring of several hundred employees.” After employees took vacation or sick leave, they were required to attend a return-to-work meeting. Some of these meetings were recorded and accessible to over 50 H&M managers. Senior H&M staff gained ”a broad knowledge of their employees’ private lives… ranging from rather harmless details to family issues and religious beliefs.” This “detailed profile” was used to help evaluate employees’ performance and make decisions about their employment.

How the violation(s) could have been avoided: Details of the decision haven’t been published, but the seriousness of H&M’s violation is clear. H&M appears to have violated the GDPR’s principle of data minimization — don’t process personal information, particularly sensitive data about people’s health and beliefs, unless you need to for a specific purpose. H&M should also have placed strict access controls on the data, and the company should not have used this data to make decisions about people’s employment.

3. TIM – €27.8 million ($31.5 million)

On January 15, 2020, Italian telecommunications operator TIM (or Telecom Italia) was stung with a €27.8 million GDPR fine from Garante, the Italian Data Protection Authority, for a series of infractions and violations that had accumulated over the last several years. TIM’s infractions include a variety of unlawful actions, most of which stem from an overly aggressive marketing strategy. Millions of individuals were bombarded with promotional calls and unsolicited communications, some of whom were on non-contact and exclusion lists.

How the violation(s) could have been avoided: TIM should have managed lists of data subjects more carefully and created specific opt-ins for different marketing activities.

4. British Airways – €22 million ($26 million)

In October, the ICO hit British Airways with a $26 million fine for a breach that took place in 2018. This is considerably less than the $238 million fine that the ICO originally said it intended to issue back in 2019. So, what happened back in 2018? British Airways’ systems were compromised. The breach affected 400,000 customers, and hackers got their hands on login details, payment card information, and PII like travellers’ names and addresses.

How the violation(s) could have been avoided: According to the ICO, the attack was preventable, but BA didn’t have sufficient security measures in place to protect its systems, networks, and data. In fact, it didn’t even have basics like multi-factor authentication in place at the time of the breach. Going forward, the airline should take a data-first security approach, invest in security solutions, and ensure it has strict data privacy policies and procedures in place.

5. Marriott – €20.4 million ($23.8 million)

While this is an eye-watering fine, it’s actually significantly lower than the $123 million fine the ICO originally said it would levy. So, what happened?
383 million guest records (30 million EU residents) were exposed after the hotel chain’s guest reservation database was compromised. PII like guests’ names, addresses, passport numbers, and payment card information was exposed.

Note: The hack originated in Starwood Group’s reservation system in 2014. While Marriott acquired Starwood in 2016, the hack wasn’t detected until September 2018.

How the violation(s) could have been avoided: The ICO found that Marriott failed to perform adequate due diligence after acquiring Starwood. The company should have done more to safeguard its systems with a stronger data loss prevention (DLP) strategy and should have utilized de-identification methods.

6. Wind — €17 million ($20 million)

On July 13, the Italian Data Protection Authority imposed a fine of €16,729,600 on telecoms company Wind due to its unlawful direct marketing activities. The enforcement action started after Italy’s regulator received complaints about Wind Tre’s marketing communications. Wind reportedly spammed Italians with ads — without their consent — and provided incorrect contact details, leaving consumers unable to unsubscribe. The regulator also found that Wind’s mobile apps forced users to agree to direct marketing and location tracking and that its business partners had undertaken illegal data-collection activities.

How the violation(s) could have been avoided: Wind should have established a valid lawful basis before using people’s contact details for direct marketing purposes. This probably would have meant getting consumers’ consent — unless it could demonstrate that sending marketing materials was in its “legitimate interests.” For whatever reason you send direct marketing, you must ensure that consumers have an easy way to unsubscribe. And you must always ensure that your company’s Privacy Policy is accurate and up-to-date.

7. Notebooksbilliger.de — €10.4 million ($12.5 million)

German electronics retailer notebooksbilliger.de (NBB) received this significant GDPR fine on January 8, 2021. The penalty relates to how NBB used CCTV cameras to monitor its employees and customers. The CCTV system had been running for two years, and NBB reportedly kept recordings for up to 60 days. NBB said it needed to record its staff and customers to prevent theft. The Lower Saxony DPA said the monitoring was an intrusion on its employees’ and customers’ privacy. NBB is disputing the fine.

How the fine could have been avoided: NBB’s fine reflects strict attitudes towards CCTV monitoring in parts of Germany. The regulator said NBB’s CCTV program was not limited to a specific person or period. Using CCTV isn’t prohibited under the GDPR, but you must ensure it is a legitimate and proportionate response to a specific problem. The UK’s ICO has some guidance on using CCTV in a GDPR-compliant way.

8. Google – €7 million ($7.9 million)

2020 was not a good year for Google. In March, the Swedish Data Protection Authority (SDPA) fined Google for neglecting to remove a pair of search result listings under the GDPR’s “right to be forgotten” rules, something the SDPA had ordered the company to do in 2017.

How the violation(s) could have been avoided: Google should have fulfilled the rights of data subjects, primarily their right to be forgotten (also known as the right to erasure). How? By “ensuring a process was in place to respond to requests for erasure without undue delay and within one month of receipt.” You can find more information about how to comply with requests for erasure from the ICO here.

9. Caixabank — €6 million ($7.2 million)

This fine against financial services company Caixabank is the largest fine ever issued by the Spanish DPA (the AEPD).
The AEPD finalized Caixabank’s penalty on January 13, 2021, breaking Spain’s previous record GDPR fine, against BBVA — issued just one month earlier. This suggests a significant toughening of approach from the Spanish DPA. The first issue, which accounts for €4 million of the total fine, related to how Caixabank established a “legal basis” for using consumers’ personal data under Article 6. Second, Caixabank was fined €2 million for violating the GDPR’s transparency requirements under Articles 13 and 14.

How the fine could have been avoided: The AEPD said Caixabank relied on the legal basis of “legitimate interests” without proper justification. Before you rely on “legitimate interests,” you must conduct and document a “legitimate interests assessment.” The company also failed to obtain consumers’ consent in a GDPR-compliant way. If you’re relying on “consent,” make sure it meets the GDPR’s strict “opt-in” standards. The AEPD criticized Caixabank’s privacy policy for providing vague and inconsistent information about its data processing practices. Make sure you use clear language in your privacy notices and keep them consistent across websites and platforms.

10. BBVA (bank) — €5 million ($6 million)

This fine against financial services giant BBVA (Banco Bilbao Vizcaya Argentaria) dates from December 11, 2020. BBVA’s penalty is the second biggest that the Spanish DPA (the AEPD) has ever imposed, and it shares many similarities with the AEPD’s largest-ever penalty, against Caixabank, issued the following month. Taken together with the record fine against Caixabank, it’s tempting to conclude that the Spanish DPA has its eye on the GDPR compliance of financial institutions.

How the fine could have been avoided: The AEPD fined BBVA €3 million for sending SMS messages without obtaining consumers’ consent. In most circumstances, you must ensure you have GDPR-valid consent for sending direct marketing messages.
The remaining €2 million of the penalty related to BBVA’s privacy policy, which failed to properly explain how the bank collected and used its customers’ personal data. Make sure you include all the necessary information under Articles 13 and 14 in your privacy policy.

11. AOK (Health Insurance) — €1.24 million ($1.5 million)

On June 30, the Data Protection Authority of Baden-Wuerttemberg, Germany, imposed a €1.24 million fine on health insurance company Allgemeine Ortskrankenkasse (AOK). AOK set up contests and lotteries using its customers’ personal information — including their health insurance details. The company also used this data for direct marketing. AOK tried to get consent for this, but it ended up marketing to some users who had not consented. The regulator found that the company had sent people marketing communications without establishing a lawful basis. AOK also failed to implement proper technical and organizational privacy safeguards to ensure it only sent marketing to those who had consented.

How the violation(s) could have been avoided: What’s the main takeaway from the AOK case? Be very careful when sending direct marketing. If you need people’s consent, make sure you keep adequate, up-to-date records of who has consented.

12. BKR (National Credit Register) — €830,000 ($973,000)

On July 6, the Dutch Data Protection Authority fined the Bureau Krediet Registration (“BKR”) €830,000 for charging individuals to access their personal information digitally. BKR allowed customers to access their personal information for free on paper, but only once per year. BKR is appealing the fine.

How the violation(s) could have been avoided: BKR shouldn’t have been charging individuals to access their personal information, and it shouldn’t have been imposing a once-per-year limit. The GDPR is clear — you may only charge for access to personal information, or refuse access, if a person’s request is “manifestly unfounded or excessive.”

13. Iliad Italia — €800,000 ($976,000)

On July 13, the Italian Data Protection Authority fined telecoms company Iliad Italia €800,000 for processing its users’ personal information unlawfully in numerous ways. One issue was Iliad’s collection of consent for its marketing activities, which the regulator found had been “bundled” with an acknowledgment of the company’s terms and conditions. Iliad also failed to store its users’ communications data securely.

How the violation(s) could have been avoided: Consent under the GDPR is defined very narrowly. If you’re going to ask for a person’s consent, you must make it specific to a particular activity. Don’t “bundle” your consent requests — for example, by asking people to agree to marketing and sign a contract using one tickbox. Data security is one of the cornerstones of the GDPR. Iliad appears to have failed to implement proper access controls on its users’ personal information. You must ensure that personal information is only accessible on a “need to know” basis.

14. Unknown – €725,000 ($821,600)

In April, the Dutch Data Protection Authority handed out its largest fine to date to a so-far unnamed company for unlawfully using employees’ fingerprint scans for its attendance and timekeeping records. The violation took place over the course of 10 months.

Note: Under the GDPR, biometric data like fingerprints is classified as sensitive personal data and is subject to more stringent protections.

How the violation(s) could have been avoided: The company should have had a valid, lawful reason to collect employees’ fingerprints. It should also have had technical measures in place to process the data and a clear process for deleting the data.
What else can organizations be fined for under GDPR?

While the biggest fines so far in 2020 involve marketing activities, failure to remove personal data when requested by EU citizens, and unlawfully requiring employees to have their biometric data recorded, there are a number of other ways in which a breach can occur. In fact, so far this year, misdirected emails have been the primary cause of data loss reported to the ICO. But how do you prevent an accident? By focusing on people rather than systems and networks.

How does Tessian help organizations stay GDPR compliant?
Powered by machine learning, Tessian’s Human Layer Security technology understands human behavior and relationships, enabling it to automatically detect and prevent anomalous and dangerous activity, including misdirected emails. Tessian also detects and prevents spear phishing attacks and data exfiltration attempts on email.  Importantly, though, Tessian doesn’t just prevent breaches. Tessian’s key features – which are both proactive and reactive – align with the GDPR requirement “to implement appropriate technical and organizational measures together with a process for regularly testing, assessing and evaluating the effectiveness of those measures to ensure the security of processing” (Article 32). To learn more about how Tessian helps with GDPR compliance, you can read our customer stories or book a demo. Or, for information about other data privacy legislation, check out our compliance hub. 
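To make the idea of detecting “anomalous activity” on email more concrete, here is a toy sketch — not Tessian’s actual algorithm, and the thresholds and helper names are illustrative assumptions — of one simple signal behind misdirected-email detection: flagging a recipient the sender has never emailed before, especially when the new domain closely resembles a domain the sender already corresponds with (a common sign of a mistyped or lookalike address):

```python
from collections import Counter

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def flag_recipient(recipient, history):
    """Return a warning string if `recipient` looks anomalous given the
    sender's past recipients, else None. Purely illustrative logic."""
    if recipient in history:
        return None
    domain = recipient.split("@")[-1]
    known_domains = Counter(addr.split("@")[-1] for addr in history)
    if domain in known_domains:
        return None  # new person at a familiar domain: low risk
    for known in known_domains:
        # A near-identical domain suggests a typo or lookalike address.
        if 0 < levenshtein(domain, known) <= 2:
            return f"'{domain}' looks suspiciously similar to '{known}'"
    return f"first email ever sent to domain '{domain}'"

history = ["alice@acme.com", "bob@acme.com", "carol@partner.co.uk"]
print(flag_recipient("alice@acrne.com", history))  # lookalike of acme.com
```

A production system models far richer relationship graphs and message content, but this sketch shows why historical sending behavior — rather than static rules — is the natural basis for catching misdirected emails.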
Read Blog Post
Human Layer Security
The 7 Deadly Sins of SAT
Tuesday, February 2nd, 2021
Security Awareness Training (SAT) just isn’t working: for companies, for employees, for anybody. By 2022, 60% of large organizations will have comprehensive SAT programs (source: Gartner Magic Quadrant for SAT 2019), with global spending on security awareness training for employees predicted to reach $10 billion by 2027. While this adoption and market size seem impressive, SAT in its current form is fundamentally broken and needs a rethink. Fast.

There are 7 fundamental problems with SAT today:

1. It’s a tick box

SAT is seen as a “quick win” when it comes to security – a tick-box item that companies can do in order to tell their shareholders, regulators, and customers that they’re taking security seriously. Often the evidence that these initiatives have been conducted is much more important than their effectiveness.

2. It’s boring and forgettable

Too many SAT programs are delivered once or twice a year in unmemorable sessions. However we dress it up, SAT just isn’t engaging. The training sessions are too long, the videos are cringeworthy, and the experience is delivered through clunky interfaces reminiscent of CD-ROM multimedia from the ’90s. What’s more, after just one day people forget more than 70% of what was taught in training, while 1 in 5 employees don’t even show up for SAT sessions.

3. It’s one-size-fits-all

We give the same training content to everyone, regardless of seniority, tenure, location, department, etc. This is a mistake. Every employee has different security characteristics (strengths, weaknesses, access to data and systems), so why do we insist on giving everybody the same material to focus on?

4. It’s phishing-centric

Phishing is a huge risk when it comes to Human Layer Security, but it’s by no means the only one.
So many SAT programs are overly focused on the threat of phishing and completely ignore other risks caused by human error, like sending emails and attachments to the wrong people or sending highly confidential information to personal email accounts. Learn more about the pros and cons of phishing awareness training.

5. It’s one-off

Too many SAT programs are delivered once or twice a year in lengthy sessions. This makes it really hard for employees to remember the training they were given (when they completed it five months ago), and the sessions themselves have to cram in too much content to be memorable.
6. It’s expensive

Too often companies look only at the license cost of a SAT program to determine costs — this is a grave mistake. SAT is one of the most expensive parts of an organization’s security program because the total cost of ownership includes not just the license costs but also the total cost of all employee time spent going through it, not to mention the opportunity cost of employees doing something else with that time.
7. It’s disconnected from other systems

SAT platforms are generally standalone products, and they don’t talk to other parts of the security stack. This means organizations aren’t leveraging the intelligence from these platforms to drive better outcomes in their security practice (preventing future breaches), nor are they using that intelligence to improve and iterate on the company’s overall security culture.

The solution? SAT 2.0

So, should we ditch our SAT initiative altogether? Absolutely not! People are now the gatekeepers to the most sensitive systems and data in the enterprise, and providing security awareness and training to them is a crucial pillar of any cybersecurity initiative. It is, however, time for a new approach — one that’s automated, in-the-moment, and long-lasting. Read more about Tessian’s approach to SAT 2.0 here.
Read Blog Post
Human Layer Security
SAT is Dead. Long Live SAT.
By Tim Sadler
Tuesday, February 2nd, 2021
Security Awareness Training (SAT) just isn’t working: for companies, for employees, for anybody. The average human makes 35,000 decisions every single day. On a weekday, most of these decisions are made at work: decisions about data sharing, clicking a link in an email, or entering our password credentials into a website. Employees have so much power at their fingertips, and if any one of those 35,000 decisions is a bad one (somebody breaking the rules, making a mistake, or being tricked), it can lead to serious security incidents for a business. The way we tackle this today? With SAT. By 2022, 60% of large organizations will have comprehensive SAT programs (source: Gartner Magic Quadrant for SAT 2019), with global spending on security awareness training for employees predicted to reach $10 billion by 2027. While this adoption and market size seem impressive, SAT in its current form is fundamentally broken and needs a rethink. Fast. As Tessian’s customer Mark Lodgson put it, “there are three fundamental problems with any awareness campaign. First, it’s often irrelevant to the user. The second, that training is often boring. The third, it takes a big chunk of money out of the business.” 
The 3 big problems with security awareness training There are three fundamental problems with SAT today: SAT is a tick-box exercise SAT is seen as a “quick win” when it comes to security – a box-ticking item that companies can do in order to tell their shareholders, regulators, and customers that they’re taking security seriously. Often the evidence that these initiatives have been conducted is much more important than their effectiveness. Too many SAT programs are delivered once or twice a year in lengthy sessions. This makes it really hard for employees to remember the training they were given (when they completed it five months ago), and the sessions themselves have to cram in too much content to be memorable. SAT is one-size-fits-all and boring We give the same training content to everyone, regardless of their seniority, tenure, location, department, etc. This is a mistake. Every employee has different security characteristics (strengths, weaknesses, access to data and systems), so why do we insist on giving everybody the same material? Also, however we dress it up, SAT just isn’t engaging. The training sessions are too long, the videos are cringeworthy, and the experience is delivered through clunky interfaces reminiscent of CD-ROM multimedia from the ’90s. What’s more, after just one day people forget more than 70% of what was taught in training, while 1 in 5 employees don’t even show up for SAT sessions. (More on the pros and cons of phishing awareness training here.) SAT is expensive So often companies only look at the license cost of a SAT program to determine costs—this is a grave mistake. SAT is one of the most expensive parts of an organization’s security program, because the total cost of ownership includes not just the license costs, but also the cost of all employee time spent going through it, not to mention the opportunity cost of that time being spent on something else.
Enter, security awareness training 2.0 So, should we ditch our SAT initiative altogether? Absolutely not! People are now the gatekeepers to the most sensitive systems and data in the enterprise and providing security awareness and training to them is a crucial pillar of any cybersecurity initiative. It is, however, time for a new approach. Enter SAT 2.0.  SAT 2.0 is automated, in-the-moment and continuous Rather than having SAT once or twice per year scheduled in hour long blocks, SAT should be continuously delivered through nudges that provide in-the-moment feedback to employees about suspicious activity or risky behavior, and help them improve their security behavior over time. For example, our SAT programs should be able to detect when an employee is about to send all of your customer data to their personal email account, stop the email from being sent, and educate the employee in-the-moment about why this isn’t OK.  SAT also shouldn’t have to rely on security teams to disseminate to employees. It should be as automated as possible, presenting itself when needed most and adapting automatically to the specific needs of the employee in the moment. Automated security makes people better at their jobs.  SAT 2.0 is engaging, memorable and specific to each employee Because each employee has different security strengths and vulnerabilities, we need to make sure that SAT is specifically tailored to suit their needs. For example, employees who work in the finance team might need extra support with BEC awareness, and people in the sales team might need extra support with preventing accidental data loss. Tailoring SAT means employees can spend their limited time learning the things that are most likely to drive impact for them and their organization.   SAT should put the real life threats that employees face into context. Today SAT platforms rely on simulating phishing threats by using pre-defined templates of common threats. 
This is a fair approach for generic phishing awareness (e.g. beware the fake O365 password login page), but it’s ineffective at driving awareness and preparing employees for the highly targeted phishing threats they’re increasingly likely to see today (e.g. an email impersonating their CFO with a spoofed domain).
SAT 2.0 delivers real ROI SAT 2.0 can actually save your company money, by preventing the incidents of human error that result in serious data breaches. What’s more, SAT platforms are rich in data and insights, which can be used in other security systems and tools. We can use this information as an input to other systems and tools and the SAT platform itself to provide adaptive protection for employees. For example, if my SAT platform tells me that an employee has a 50% higher propensity to click malicious links in phishing emails, I can use that data as input to my email security products to, by default, strip links from emails they receive, actively stopping the threat from happening. It’s also crucial to expand the scope of SAT beyond just phishing emails. We need to educate our employees about all of the other risks they face when controlling digital systems and data. Things like misdirected emails and attachments, sensitive data being shared with personal or unauthorized accounts, data protection and PII etc.
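As an illustration of the adaptive protection described above, here is a minimal sketch in Python. The threshold, the propensity value, and the policy wording are assumptions invented for the example, not a real product configuration or API:

```python
import re

# Matches http/https URLs up to the next whitespace character.
LINK_PATTERN = re.compile(r"https?://\S+")

def apply_inbound_policy(body, click_propensity):
    """Strip URLs from inbound mail for employees with a high
    propensity to click malicious links (hypothetical policy)."""
    if click_propensity >= 0.5:  # illustrative threshold
        return LINK_PATTERN.sub("[link removed by policy]", body)
    return body

# An employee flagged by the SAT platform as likely to click:
print(apply_inbound_policy("Reset here: https://evil.example/login", 0.6))
# -> "Reset here: [link removed by policy]"
```

The point is not the regex: it is that SAT intelligence flows outward, so other controls tighten automatically for the people who need it most.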
SAT 2.0 is win-win for your business and your employees The shift to SAT 2.0 is win-win for both the enterprise and employees. Lower costs and real ROI for the business Today, SAT is one of the most expensive parts of an enterprise’s security program, but it doesn’t have to be this way. Delivering bite-sized educational nudges to employees when they’re needed most means no more wasted business hours. Not only that, but by detecting risky behavior in the moment, SAT 2.0 can meaningfully reduce data breaches and deliver real ROI to security teams. Imagine being able to report to the board that your SAT 2.0 program has actually saved your company money. SAT 2.0 builds employees’ confidence In a recent study about why fear appeals don’t work in cybersecurity, it was revealed that the most important thing for driving behavior change in your employees is to help them build self-efficacy: a belief that they are equipped with the awareness of threats and the knowledge of what to do if something goes wrong. This not only hones their security reflexes, but also increases their overall satisfaction with work, as they spend less time in boring training sessions and feel more empowered to do their jobs securely. 3 easy steps to SAT 2.0 A training program that stops threats – not business or employee productivity – might sound like a pipe dream, but it doesn’t have to be. SAT 2.0 is as easy as 1, 2, 3… Step 1: Leverage your SAT data to build a Human Risk Score Your SAT platform likely holds rich data and information about your employees and their security awareness that you’re not currently leveraging. Start by using the output of your SAT platform (e.g. test results, completion times, confidence scores, phishing simulation click-through rates) to manually build a Human Risk Score for each employee. 
This provides you with a baseline understanding of who your riskiest and safest employees are, and offers insight into their specific security strengths and weaknesses. You can also add to this score with external data sources from things like your data breach register or data from other security tools you use. Step 2: Tailor your SAT program to suit the needs of departments or employees Using the Human Risk Scores you’ve calculated, you can then start to tailor your SAT program to the needs of employees or particular departments. If you know your Finance team faces a greater threat from phishing and produces higher click through rates on simulations, you might want to double down on your phishing simulation training. If you know your Sales team has problems with sending customer data to the wrong place, you may want to focus training there. Your employees have a finite attention span, make sure you’re capturing their attention on the most critical things as part of your SAT program.  Step 3: Connect your SAT platform to your other security infrastructure Use the data and insights from your SAT platform and your Human Risk Scores to serve as input for the other security infrastructure you use. You might choose to have tighter DLP controls set for employees with a high Human Risk Score or stricter inbound email security controls for people who have a higher failure rate on phishing simulations.  Want an even easier path to SAT 2.0? Invest in a Human Layer Security platform Tessian’s Human Layer Security platform can help you automatically achieve all of this and transition your organization into the brave new world of SAT 2.0. Using stateful machine learning, Tessian builds an understanding of historical employee security behavior, to automatically map Human Risk Scores, remediate security threats caused by people, and nudge employees toward better security behavior through in-the-moment notifications and alerts. 
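Step 1 above could be sketched in a few lines of code. The input fields, the weights, and the 0-100 scale below are illustrative assumptions, not an actual scoring model:

```python
def human_risk_score(employee):
    """Combine hypothetical SAT platform outputs into a 0-100 score
    (higher = riskier). Weights are arbitrary for illustration."""
    score = 0.0
    score += 40 * employee["phishing_click_rate"]    # simulation click rate, 0.0-1.0
    score += 30 * (1 - employee["test_pass_rate"])   # failed knowledge checks
    score += 20 * (1 - employee["completion_rate"])  # skipped or late training
    score += 10 * min(employee["reported_incidents"], 5) / 5  # capped incident count
    return min(round(score, 1), 100.0)

alice = {
    "phishing_click_rate": 0.25,
    "test_pass_rate": 0.8,
    "completion_rate": 1.0,
    "reported_incidents": 2,
}
print(human_risk_score(alice))  # -> 20.0
```

A score like this gives you a single number per employee to sort on (Step 2) and to feed into other controls (Step 3), while the per-field breakdown still tells you *why* someone is risky.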
SAT is not “just a people problem” We so often hear in the security community that “the focus is too much on technology when it needs to be on people.” I disagree. We need to ask more of technology to deliver more impact with SAT. SAT 1.0 is reminiscent of a time when, to legally drive a car, all you had to do was pass a driving test. You’d been trained! The box had been checked! And then all you had to do was make sure you did the right thing 100% of the time and you’d be fine. But that isn’t what happened. People inevitably made mistakes, and it cost them their lives. Today, I still have to pass my driving test to get behind the wheel of a car. But now our cars are loaded with assistive technology to keep us safe doing the most dangerous thing we do in our daily lives: seatbelts, anti-lock brakes, airbags, notifications that tell me when I’m driving too fast, when I lose grip, or when I’m about to run out of fuel. However hard we try, however good the training, you can never train away the risk of human error. Car companies realized this over 60 years ago—we need to leverage technology to protect people in the moment they need it the most. This is the same shift we need to drive (excuse the pun) in SAT. One day, we’ll have self-driving cars with no driving tests. Maybe we’ll have self-driving cybersecurity with no need for SAT. But until then, give your employees the airbags, the seatbelt, and the anti-lock brakes, not just the driving test and the “good luck.” 
Read Blog Post
Spear Phishing
6 Reasons to Download “How to Hack a Human” Now
By Maddie Rosenthal
Tuesday, February 2nd, 2021
Over the last decade, phishing has evolved from spam to something much (much) more targeted. It’s now the threat most likely to cause a breach. At the same time, the number of adults on social media networks like Facebook has jumped by almost 1,300%. We explore the correlation between the two in our latest research report “How to Hack a Human”. You can download it here. Need a few good reasons to download it? Keep reading.  1. You’ll get a hacker’s perspective Actually, you’ll get ten (ethical) hackers’ perspectives. We partnered with HackerOne and other social engineering experts to learn how they use publicly available information – like social media posts, OOO messages, press releases, and more – to craft highly targeted,  highly effective social engineering attacks. In the end, we found out that they use everything. A photo from your gender reveal party can help them uncover your home address. A post about your dog can help them guess your password. An OOO message can tell them who to target, who to impersonate, and give them a sense of their window of opportunity. 2. You’ll learn how vulnerable organizations are to attack  By surveying 4,000 employees and using Tessian platform data, we were able to uncover how frequently people (and the companies they work for) are being targeted by social engineering attacks, business email compromise (BEC), wire transfer fraud, and more. The numbers are staggering. 88% of people have received a suspicious message in the last year.  Of course, some industries are more vulnerable than others. 
And, we expect to see more next year. Why? Between H1 2020 and H2 2020, we saw a 15% increase in attacks. Read the report to find out more. 3. We show two examples of social engineering – including the “clues” that enabled hackers to carry out the attack Using social media posts, news headlines, and OOO messages, we break down two attacks: CEO Fraud in Financial Services Account Takeover (ATO) in Healthcare We explain the hacker’s motivation, what the attack looked like, and – in the end – how it could have been prevented. (More on that below.) 4. You’ll get access to a free, educational guide to help employees level-up their personal and professional cybersecurity As we’ve said, hackers hack humans to hack the companies they work for. So, to help security leaders communicate the threat and teach their employees how to prevent being targeted and how to spot an attack if it lands in their inbox, we put together a comprehensive list of do’s and don’ts. You can find it on page 20. Bonus: Are you a Tessian customer? We’re happy to co-brand the list. Get in touch with your Customer Success Executive for more information. 5. The dataset is global In addition to interviewing employees in the US and the UK, Tessian platform data accounts for organizations across continents. Why does this matter? It goes to show that this isn’t a problem that’s isolated to a specific region. Everyone is being targeted by social engineering attacks. But – interestingly – the online habits of Americans vs. Brits vary considerably. 
For example, while 93% of US employees say they update their job status on social media when they start a new role, just 63% of UK employees said the same. Top tip: New starters are prime targets of social engineering attacks. They’re typically given their full access credentials when they start, but don’t yet know who’s who. They may also not have had their security training yet. Finally, given that they’re new, they’ll be especially keen to make a good impression. 6. You’ll get a peek inside a hacker’s toolkit Yes, all of the information hackers use is easy enough to find online (especially if they’re motivated to find it). But there are plenty of tools that hackers use that make connecting the dots and cracking passwords quick and easy. We outline ten in the report. You’ll likely recognize some of them… Most – if not all – of these tools were designed for the “good guys”: penetration testers, compliance teams, and even law enforcement. In fact, some are even marketing and sales tools! Flip to page 16 to learn more. Bonus: The report is ungated…for now For the next few weeks, you’ll be able to download the report without filling out a form. Yep, you just click “download” and it’s yours. Starting at the end of February, you’ll just need to provide your email address and a few other pieces of information about your role and company. Ready? Set? Download.
Read Blog Post
Spear Phishing
Tessian Launches Account Takeover (ATO) Protection
By Harry Wetherald
Wednesday, January 27th, 2021
Today, a comprehensive email security strategy needs to do more than just secure an organization’s own email platform and users. Why? Because more and more often, bad actors are gaining access to the email accounts of trusted senders (suppliers, customers, and other third parties) to breach a target company. This is called account takeover (ATO), and one in seven organizations have experienced this kind of attack. And, since legitimate business email accounts are used to carry out these attacks, it is one of the most difficult impersonation attacks to detect, making most organizations vulnerable to ATO. But, not Tessian customers. Tessian Defender can now detect and prevent ATO. How does Tessian Defender detect ATO? Unlike Secure Email Gateways (SEGs) – which rely almost exclusively on domain authentication and payload inspection – Tessian Defender uses machine learning (ML), anomaly detection, behavioral analysis, and natural language processing (NLP) to detect a variety of ATO signals: Unusual sender characteristics: This includes anomalous geophysical locations, IP addresses, email clients, and reply-to addresses Anomalous email sending patterns: Based on historical email analysis, Tessian can identify unusual recipients, unusual send times, and emails sent to an unusual number of recipients Malicious payloads: Tessian uses URL match patterns to spot suspicious URLs and ML to identify red flags indicative of suspicious attachments Deep content inspection: Looking at the email content – for example, language that conveys suspicious intent – Tessian can detect zero-payload attacks, too
Importantly, Tessian’s ML algorithm gets smarter as it continuously analyzes email communications across its global network. This way, it can build profiles of organizations (and their employees) to understand what “normal” email communications look like at a granular level.  This allows Tessian Defender to catch even the most subtle ATO attacks. Once it detects a threat, Tessian alerts employees and admins that an email might be unsafe. The warnings are written in easy-to-understand language and explain why an email has been flagged, which prevents the users from responding to the email or clicking on malicious links or attachments. These warnings also act as in-the-moment training and help improve email behavior over time.  Administrators get real-time alerts of ATO and can track events in the Human Layer Security Intelligence portal. You can learn more about how Tessian detects and prevents ATO here. Keep reading to see an admin’s view of the portal and what a warning looks like for employees.
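To make the idea of sender-anomaly detection concrete, here is a deliberately simplified sketch. Real systems like Tessian Defender use machine learning over far richer signals; the profile fields, rules, and example addresses below are toy assumptions:

```python
def anomaly_signals(email, profile):
    """Compare one email's metadata against a historical sender profile
    and return a list of human-readable anomaly signals (illustrative only)."""
    signals = []
    if email["client"] not in profile["known_clients"]:
        signals.append("unusual email client")
    if email["geo"] not in profile["known_geos"]:
        signals.append("anomalous sending location")
    if email["reply_to"] != email["from_addr"]:
        signals.append("mismatched reply-to address")
    return signals

# Profile built from this sender's historical mail (made-up data):
profile = {"known_clients": {"Outlook"}, "known_geos": {"GB", "US"}}

# An incoming email that deviates on every signal:
suspicious = {
    "client": "UnknownMailer",
    "geo": "RU",
    "from_addr": "cfo@example.com",
    "reply_to": "attacker@evil.example",
}
print(anomaly_signals(suspicious, profile))
```

In practice no single signal is conclusive; it is the combination of several deviations from the learned baseline that pushes an email over the alerting threshold.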
What are the benefits of Tessian ATO threat protection?  The consequences of ATO are far-reaching.  Attackers could gain access to credentials, employee data, and computer data. They could initiate fraudulent wire transfers, conduct bank fraud, and sell data. That means organizations could suffer significant financial loss, reputational damage, and lose customers (and their trust). And this doesn’t even account for lost productivity, data loss, or regulatory fines.  Between 2013 and 2015, Facebook and Google were scammed out of $121 million after a hacker impersonated a trusted vendor. And that’s just one example.  Tessian’s ATO threat protection minimizes these risks by preventing successful attacks. But, detecting and preventing threats is just one of the benefits of Tessian.   For security teams
Detection is automated, which means it’s not just effective, but also effortless for security teams Real-time alerts of ATO events and robust tools (like single-click quarantine) allow for rapid investigation and remediation directly in the portal  Tessian’s API can be integrated with SIEMs like Splunk and Rapid7, allowing security analysts and SOC teams to analyze Tessian data alongside insights from other solutions In-the-moment warnings reinforce security awareness training and help nudge employees towards safer email behavior For the C-suite
ATO protection doesn’t just keep your organization safe and compliant (and help you avoid reputational damage or financial loss). It’s a competitive differentiator and can help build trust with existing customers, clients, and your supply chain. Multi-layer threat insights, visualized data, and industry benchmarks help CISOs understand their organization’s security posture compared to their industry peers Automated reports make it easy to communicate success to the board and other key stakeholders For employees
Contextual warnings are helpful – not annoying – and act as in-the-moment training. This helps employees improve their security reflexes over time for safer email behavior. Flag rates are low (and false positives are rare) which means employees can do the job they were hired to do, without security getting in the way Learn more about Tessian Interested in learning more about Tessian Defender and ATO Protection? Current Tessian customers can get in touch with their Customer Success Manager. Not yet a Tessian customer? Learn more about our technology, explore our customer stories, or book a demo now.
Read Blog Post
Spear Phishing
5 Real-World Examples of Business Email Compromise (Updated 2021)
Monday, January 25th, 2021
Business Email Compromise (BEC) attacks use real or impersonated business email accounts to defraud employees. The FBI calls BEC a “$26 billion scam” that affects thousands of businesses every year. This article will look at some examples of BEC attacks that have cost organizations money, time, and reputation — to help you avoid making the same mistakes. Not sure what BEC is? We tell you everything you need to know about it – including how it works – in this article: What is Business Email Compromise and How Does it Work? 1. $17.2m acquisition scam Our first example demonstrates how fraudsters can play on a target’s trust and exploit interpersonal relationships. In June 2014, Keith McMurtry, a Scoular employee, received an email supposedly from his boss, CEO Chuck Elsea. The email informed McMurtry that Scoular was set to acquire a Chinese company. Elsea instructed McMurtry to contact a lawyer at accounting firm KPMG. The lawyer would help facilitate a transfer of funds and close the deal. McMurtry obeyed, and he soon found himself transferring $17.2 million to a Shanghai bank account in the name of “Dadi Co.” The CEO’s email, as you might have guessed, was fraudulent. The scammers had used email impersonation to create accounts imitating both Elsea and the KPMG lawyer. Aside from the gargantuan $17.2m loss, what’s special about the Scoular scam? Take a look at this excerpt from the email, provided by FT.com, from “Elsea” to McMurtry: “We need the company to be funded properly and to show sufficient strength toward the Chinese. Keith, I will not forget your professionalism in this deal, and I will show you my appreciation very shortly.” Given the emotive language, the praise, and the promise of future rewards — it’s easy to see why an employee would go along with a scam like this. 2. BEC scammers exploit COVID-19 fears 2020 was a turbulent year, and we saw cybercriminals exploiting people’s fear and uncertainty like never before. 
A particularly prevalent example was the trend of COVID-19-related BEC scams. As the pandemic spread, governments worldwide issued warnings about a surge in cyberattacks. In April 2020, for example, the FBI warned that scammers were “using the uncertainty surrounding the COVID-19 pandemic” to conduct BEC scams. The FBI gave one example of an unnamed company, whose supposed supplier requested payments to a new account “due to the Coronavirus outbreak and quarantine processes and precautions.” Criminals will always seek to capitalize on chaos. In December 2020, Keeper reported that uncertainty caused by COVID-19, Brexit, and the move to remote working led to 70% of U.K. finance companies experiencing BEC attacks over the preceding year. Looking for more examples of scammers exploiting COVID-19 fears? We share four more and outline the red flags contained in each here. BONUS! There’s a downloadable guide at the bottom of the article. 3. $46.7m vendor fraud In August 2015, IT company Ubiquiti filed a report to the U.S. Securities and Exchange Commission revealing it was the victim of a $46.7 million “business fraud.” This attack was an example of a type of BEC, sometimes called Vendor Email Compromise (VEC). The scammers impersonated employees at a third-party company and targeted Ubiquiti’s finance department. We still don’t know precisely how the cybercriminals pulled off this massive scam. VEC attacks previously relied on domain impersonation and email spoofing techniques, but these days, scammers are increasingly turning to the more sophisticated account takeover method. 4. Snapchat payroll information breach Many high-profile BEC attacks target a company’s finance department and request payment of an invoice to a new account. But not all BEC scams involve wire transfer fraud. Here’s an example of how BEC scams can target data, as well as money. In February 2016, cybercriminals launched a BEC attack against social media firm Snapchat. 
Impersonating Snapchat’s CEO, the attackers obtained “payroll information about some current and former employees.” The scam resulted in a breach of some highly sensitive data, including employees’ Social Security Numbers, tax information, salaries, and healthcare plans. Snapchat offered each affected employee two years of free credit monitoring and up to $1 million in reimbursement. 5. The big one: $121m BEC scam targeting Facebook and Google  Last — but by no means least — let’s look at the biggest known BEC scam of all time: a VEC attack against tech giants Facebook and Google that resulted in around $121 million in collective losses. The scam took place between 2013 and 2015 — and the man at the center of this BEC attack, Evaldas Rimasauskas, was sentenced to five years in prison in 2019. So how did some of the world’s most tech-savvy employees fall for this elaborate hoax?  Rimasauskas and associates set up a fake company named “Quanta Computer”  — the same name as a real hardware supplier. The group then presented Facebook and Google with convincing-looking invoices, which they duly paid to bank accounts controlled by Rimasauskas. As well as fake invoices, the scammers prepared counterfeit lawyers’ letters and contracts to ensure their banks accepted the transfers. The Rimasauskas scam stands as a lesson to all organizations. If two of the world’s biggest tech companies lost millions to BEC over a two-year period — it could happen to any business. If you’re worried that your organization might be targeted by a BEC attack and are looking for a solution, click here. You can also explore other examples of email attacks in these articles: 6 Examples of Social Engineering Attacks COVID-19: Real-Life Examples of Opportunistic Phishing Emails  Phishing Statistics (Updated 2021)
Read Blog Post
Spear Phishing
What is Business Email Compromise (BEC)? How Does it Work?
Monday, January 25th, 2021
In this article, we’ll look at why cybercriminals use BEC, how it works, and why it remains a serious problem. Looking for examples of BEC attacks or information about how to prevent business email compromise? Check out these pages instead: How to overcome this multi-billion dollar threat Real-world examples of Business Email Compromise Why compromise a business email account? BEC is a tried-and-tested cyberattack method that costs consumers and businesses billions every year. So what makes BEC such a prevalent cybercrime technique? Simply put: cybercriminals use BEC as a way to make social engineering attacks more effective. A social engineering attack is any form of cybercrime involving impersonation. The attacker pretends to be a trusted person so that the target does what they’re told. Here are some examples of social engineering attacks that can involve BEC: Spear phishing: A social engineering attack conducted via email (smishing and vishing are social engineering attacks conducted via SMS and voice respectively) CEO fraud: A phishing attack where the attacker impersonates a company executive Whaling: A phishing attack targeting a corporate executive Wire transfer fraud: A phishing attack where the attacker persuades the target to transfer money to their account All these social engineering attacks involve some sort of impersonation. Fraudsters use every tool available to make their impersonation more convincing. And one of the best tools available is a genuine — or genuine-looking — business email address. BEC attacks target both individuals and businesses, and the attacker will (generally) use BEC to gain access to one of the following: Money: According to Verizon’s 2020 Data Breach Investigations Report, most BEC attacks now involve wire transfer fraud. Account credentials: A fraudulent email might contain a phishing link leading to a fake account login page. The FBI warns that this BEC variant is on the rise. 
Gift certificates: BEC attackers can persuade their target to purchase gift certificates rather than transferring them money. Now you know why cybercriminals launch BEC attacks, we’re going to look at how they do it. How does BEC work? There are various competing definitions of BEC — so before we explain the process, let’s clarify what we mean when we use this term. A BEC attack is any phishing attack where the target believes they have received an email from a genuine business. There are several methods that a cybercriminal can use to achieve this, including:  Email impersonation Email spoofing Email account takeover Let’s look at each of these techniques. Email impersonation is where the attacker sets up an email account that looks like a business email account. Here’s an example:
In this case, we can imagine Leon Green really is Tess’ boss and that an invoice for Amazon really is due to be paid. This information is easy enough to find online. But, note that the sender’s email address is “[email protected]”.  If you look carefully, you’ll see Microsoft is misspelled.  Many people miss small details like this. Worse still, mobile email clients typically only show the sender’s display name and hide their email address.
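One simple way to catch lookalike addresses like this is to compare the sender’s domain against a list of trusted domains with a string-similarity measure. The sketch below uses Python’s standard-library difflib; the trusted-domain list, the threshold, and the lookalike example are assumptions for illustration, not a production detection rule:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of domains this organization trusts.
TRUSTED_DOMAINS = {"microsoft.com", "amazon.com"}

def lookalike_of(sender_domain, threshold=0.85):
    """Return the trusted domain this one closely resembles, if any.
    An exact match is legitimate; a near-match is a red flag."""
    if sender_domain in TRUSTED_DOMAINS:
        return None
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold:
            return trusted
    return None

# "rn" mimics "m", a classic homoglyph trick:
print(lookalike_of("rnicrosoft.com"))  # -> microsoft.com
```

Real-world detection also has to handle homoglyphs in other alphabets, added or swapped hyphens, and different top-level domains, but the core idea is the same: near, but not equal, is suspicious.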
Email spoofing is where the attacker modifies an email’s envelope and header. The receiving mail server thinks the email came from a corporate domain and the recipient’s email client displays incorrect sender information.  You can read more about email spoofing – and see an example of a spoofed email header – in this article: What is Email Spoofing? How Does Email Spoofing Work? In account takeover (ATO), the attacker gains access to a corporate email account, whether via hacking or by using stolen account credentials. They gather information about the user’s contacts, email style, and personal data — then they use the account to send a phishing email.
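The header/envelope distinction can be seen in practice with Python’s standard email module. The heuristic below, comparing the From domain to the Return-Path domain, is only illustrative (real protection relies on SPF, DKIM, and DMARC verification by the receiving server), and the addresses are made up:

```python
from email import message_from_string
from email.utils import parseaddr

# A raw message where the visible From header and the envelope
# sender (recorded in Return-Path) point at different domains.
raw = """\
Return-Path: <bounce@attacker.example>
From: "CEO" <ceo@company.example>
Subject: Urgent wire transfer

Please process the attached invoice today.
"""

msg = message_from_string(raw)
_, header_from = parseaddr(msg["From"])          # what the recipient sees
_, envelope_from = parseaddr(msg["Return-Path"]) # where bounces actually go

header_domain = header_from.split("@")[-1]
envelope_domain = envelope_from.split("@")[-1]
print(header_domain != envelope_domain)  # a mismatch is one spoofing signal
```

Legitimate bulk mailers also produce mismatches (newsletters are often sent via third-party services), which is exactly why standards like SPF and DMARC exist: they let a domain owner declare which envelope senders are authorized.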
How serious is BEC? We know BEC is a common cyberattack method. But how many businesses are affected, and how badly? Because many BEC attacks go unnoticed — and because different organizations use different definitions of BEC — there’s no simple answer. So what do we know about the prevalence of BEC? The best source of cybercrime statistics is the FBI’s Internet Crime Complaint Center (IC3), which reports that:
Between 2016 and 2020, the IC3 recorded 185,718 BEC incidents worldwide, resulting in losses totaling over $28 billion.
In 2020, losses from BEC exceeded $1.8 billion — a fourfold increase since 2016.
The number of BEC incidents went up by 61% between 2016 and 2020.
Next steps We’ve looked at the different types of BEC, how a BEC attack works, and how serious and pervasive this form of cybercrime has become. Next, let’s look at examples of BEC attacks. This will help you learn from the experiences of other organizations.
Read Blog Post
Spear Phishing
What is Email Spoofing? How Does Email Spoofing Work?
Friday, January 22nd, 2021
Let’s start with a definition of email spoofing.
While email spoofing can have serious consequences, it’s not particularly difficult for a hacker to do. And, despite the fact that email filters and apps are getting better at detecting spoofed emails, they can still slip through. Keep reading to find out:
What motivates someone to spoof an email address
How email spoofing works
How common email spoofing is
If you’re here to learn how to prevent email spoofing, check out this article instead: How to Prevent Email Spoofing. Why do people spoof emails? You might be wondering why someone would want to spoof another person or company’s email address in the first place. It’s simple: they want the recipient to believe that the email came from a trusted person. Most commonly, it’s used for activities such as:
Spear phishing: A type of “social engineering” attack where the attacker impersonates a trusted person and targets a specific individual.
Business Email Compromise (BEC): A phishing attack involving a spoofed, impersonated, or hacked corporate email address.
CEO fraud: A BEC attack where the attacker impersonates a high-level company executive and targets an employee.
Vendor Email Compromise (VEC): A BEC attack where the attacker impersonates a vendor or another business in a company’s supply chain.
Spamming: Sending unsolicited commercial email to large numbers of people.
Now let’s look at the technical process behind email spoofing. How email spoofing works First, we need to distinguish between “email spoofing” and “domain impersonation.” Sometimes these two techniques get conflated. Here’s the difference: In an email spoofing attack, the sender’s email address looks identical to the genuine email address ([email protected]). In a domain impersonation attack, the fraudster uses an email address that is very similar to another email address ([email protected]). When you receive an email, your email client (e.g. Outlook or Gmail) tells you who the email is supposedly from.
When you click “reply,” your client automatically fills in the “to” field in your return email. It’s all done automatically and behind the scenes. But this information is not as reliable as you might think. An email consists of several parts:
Envelope: Tells the receiving server who sent the email and who will receive it. When you get an email, you don’t normally see the envelope.
Header: Contains metadata about the email, including the sender’s name and email address, send date, subject, and “reply-to” address. You can see this part.
Body: The content of the email itself.
Spoofing is so common because it’s surprisingly easy to forge the “from” elements of an email’s envelope and header, making it seem like someone else has sent it. Obviously, we’re not going to provide instructions on how to spoof an email. But we can break down a spoofed email to help you understand how the process works. Let’s take a look at the email header:
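To make the header’s untrustworthiness concrete, here’s a short sketch using Python’s standard email library. The message and all its addresses are invented for illustration (the reserved .example domain), echoing the Mickey Mouse example that follows. The point: the fields your mail client displays are just text supplied by whoever sent the message.

```python
from email import message_from_string
from email.utils import parseaddr

# A made-up raw message; every header here is attacker-controlled text.
raw = """\
Return-Path: <mickey@disney.example>
From: Mickey Mouse <mickey@disney.example>
Reply-To: Mickey Mouse <mickey@disney.example>
To: target@victim.example
Subject: Urgent invoice

Please pay the attached invoice today.
"""

msg = message_from_string(raw)

# parseaddr splits a header into (display name, address);
# this is all your client uses to tell you "who" sent the email.
name, addr = parseaddr(msg["From"])
print(name)  # Mickey Mouse
print(addr)  # mickey@disney.example

# None of these fields prove anything about the true origin:
for header in ("Return-Path", "From", "Reply-To"):
    print(header, "->", msg[header])
```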
First, look at the “Received From” header, highlighted in blue, which shows that the email came from the domain “cybercrime.org.” But now look at the parts highlighted in yellow — the “Return-Path,” “From,” and “Reply-To” headers — which all point to “Mickey Mouse,” or “[email protected]”. These headers dictate what the recipient sees in their inbox, and they’ve all been forged. The standard email protocol (SMTP) has no default way of authenticating an email. There are authentication checks, but they depend on the domain owner protecting its domain. In this case, the spoofed email failed three important authentication processes (also highlighted in blue, above):
SPF, short for Sender Policy Framework: Checks whether the sender’s IP address is associated with the domain specified in the envelope.
DKIM, short for DomainKeys Identified Mail: Designed to make sure messages aren’t altered in transit between the sending and recipient servers.
DMARC, short for Domain-based Message Authentication, Reporting, and Conformance: Verifies an email’s header information, building on SPF and DKIM.
As you can see, SPF, DKIM, and DMARC all show “none.” That means our spoofed email slipped right through. Here’s how the email looks in the recipient’s inbox:
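If you want to check these verdicts programmatically, the receiving server’s Authentication-Results header is the place to look. Below is a rough sketch; the header string is simplified and invented for illustration (real headers vary by provider), modeled on the spoofed example where all three checks came back “none”:

```python
import re

# Simplified, invented Authentication-Results value for illustration.
auth_results = (
    "mx.receiver.example; "
    "spf=none smtp.mailfrom=cybercrime.org; "
    "dkim=none; "
    "dmarc=none header.from=disney.example"
)

def auth_verdicts(header: str) -> dict:
    """Pull out the SPF, DKIM, and DMARC verdicts (pass/fail/none/...)."""
    return {
        match.group(1): match.group(2)
        for match in re.finditer(r"\b(spf|dkim|dmarc)=(\w+)", header)
    }

verdicts = auth_verdicts(auth_results)
print(verdicts)  # {'spf': 'none', 'dkim': 'none', 'dmarc': 'none'}

# A cautious policy: treat anything short of three passes as unauthenticated.
print(all(v == "pass" for v in verdicts.values()))  # False
```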
The email above appears to have been sent by Mickey Mouse, using the email address [email protected] But we know from the header that it actually came from cybercrime.org. This demonstrates the importance of setting up DMARC policies. You can learn more about how to do that here. Note: Disney does have DMARC enabled. This is a hypothetical example! Want to find out which companies don’t have DMARC set up? Check out this website. How common is spoofing? Measuring the precise number of spoofed emails sent and received every day is impossible. But we can look at how many cybercrime incidents involving spoofing get reported each year. A good place to start is the U.S. Federal Bureau of Investigation (FBI)’s Internet Crime Complaint Center (IC3) annual report. In 2020, the IC3 reported that:
28,218 of the 791,790 complaints the IC3 received related to spoofing
The losses associated with spoofing complaints totaled over $216 million
Spoofing was the sixth most costly type of cybercrime
The number of spoofing attacks rose 81% since 2018
The losses from spoofing have more than doubled since 2018
Note that the IC3’s definition of “spoofing” includes incidents involving spoofed phone numbers. But we already know that 96% of phishing attacks start with email. Now that you understand what email spoofing is, and how serious a threat it can be, it’s time to read our article on how to prevent email spoofing.
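For reference, the SPF and DMARC policies discussed above are published by the domain owner as DNS TXT records. Here is a minimal, illustrative zone-file fragment; all names and values are hypothetical:

```
; SPF: only the listed mail service may send as example.com
example.com.         IN TXT "v=spf1 include:_spf.mailprovider.example -all"

; DMARC: reject anything failing authentication, and send aggregate reports
_dmarc.example.com.  IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The "-all" mechanism tells receivers to fail mail from unlisted servers, and "p=reject" tells them to refuse messages that fail DMARC alignment.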
Read Blog Post
Spear Phishing
How to Prevent and Avoid Falling for Email Spoofing Attacks
By Maddie Rosenthal
Friday, January 22nd, 2021
Email spoofing is a common way for cybercriminals to launch phishing attacks — and just one successful phishing attack can devastate your business. That’s why every secure organization has a strategy for detecting and filtering out spoofed emails. Do you? This article will walk you through some of the best methods for preventing email spoofing. Want to learn more about email spoofing, how hackers do it, and how common these attacks are? Check out this article: What is Email Spoofing and How Does it Work? And if you’re wondering how to prevent your email address or domain from being spoofed… the first step is to enable DMARC. But even that isn’t enough. We explain why in this article: Why DMARC Isn’t Enough to Stop Impersonation Attacks. Security awareness training Email spoofing is a common tactic in social engineering attacks such as spear phishing, CEO fraud, and Business Email Compromise (BEC). Social engineering attacks exploit people’s trust to persuade them to click a phishing link, download a malicious file, or make a fraudulent payment. That means part of the solution lies in educating the people being targeted. It’s important to note that cyberattacks target employees at every level of a company — which means cybersecurity is everyone’s responsibility. Security awareness training can help employees recognize when such an attack is underway and understand how to respond. In this article – What Is Email Spoofing and How Does it Work? – we looked at how an email’s header can reveal that the sender address has been spoofed. Looking “under the hood” of an email’s header is a useful exercise to help employees understand how email spoofing works. You can see if the email failed authentication processes like SPF, DKIM, and DMARC, and check whether the “Received” and “From” headers point to different domains. But it’s not realistic to expect people to carefully inspect the header of every email they receive.
So what are some other giveaways that might suggest an email spoofing scam is underway?
The email doesn’t look how you expect. The sender might be “paypal.com.” But does the email really look like PayPal’s other emails? Most sophisticated cybercriminals use the spoofed company’s branding — but some make mistakes.
The email contains spelling and grammar errors. Again, these mistakes aren’t common among professional cybercriminals, but they can still occur.
The email uses an urgent tone. If the boss emails you, urgently requesting that you pay an invoice into an unrecognized account — take a moment. This could be CEO fraud.
You must get your whole team on board to defend against cybersecurity threats, and security awareness training can help you do this. However, Tessian research suggests that the effectiveness of security training is limited. Email provider warnings Your mail server is another line of defense against spoofing attacks. Email servers check whether incoming emails have failed authentication processes, such as SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting, and Conformance). Many email providers will warn the user if an email has failed authentication. Here’s an example of such a warning from Protonmail:
As part of your company’s security awareness training, you can urge employees to pay close attention to these warnings and report them to your IT or cybersecurity team. However, it’s not safe to rely on your email provider. A 2018 Virginia Tech study looked at how 35 popular email providers handled email spoofing. The study found:
All except one of the email providers allowed fraudulent emails to reach users’ inboxes.
Only eight of the providers displayed a warning about suspicious emails on their web apps.
Only four of the providers displayed such a warning on their mobile apps.
Authentication protocols As the Virginia Tech study showed, email providers often allow fraudulent emails through their filters — even when they fail authentication. But, perhaps more importantly, whether a fraudulent email fails authentication in the first place is out of your hands. For example, SPF lets a domain owner list which email servers are authorized to send emails from its domain. And DMARC enables domain owners to specify whether recipient mail servers should reject, quarantine, or allow emails that have failed SPF authentication. So, for domain owners, setting up SPF, DKIM, and DMARC records is an essential step to prevent cybercriminals and spammers from sending spoofed emails using their domain name. But as the recipient, you can’t control whether the domain owner has properly set up its authentication records. You certainly don’t want your cybersecurity strategy to depend on the actions of other organizations. Email security software Effective email spoofing attacks are very persuasive. The email arrives from a seemingly valid address — and it might contain the same branding, tone, and content you’d expect from the supposed sender. This makes email spoofing attacks one of the hardest cybercrimes to detect manually. Humans aren’t good at spotting the subtle and technical indicators of a well-planned email spoofing attack.
Legacy solutions like Secure Email Gateways and native tools like spam filters aren’t good at it either. The best approach to tackling spoofing — or any social engineering attack — is intelligent technology. Email security solutions powered by machine learning (ML) automate the process of detecting and flagging spoofed emails, making it easier, more consistent, and more effective. Here’s how Tessian Defender solves the problem of email spoofing:
Tessian’s machine learning algorithms analyze each employee’s email data. The software learns each employee’s email style and maps their trusted email relationships. It learns what “normal” looks like so it can spot suspicious email activity.
Tessian performs a deep inspection on inbound emails. By checking the sender’s IP address, email client, and other metadata, Tessian can detect indications of email spoofing and other threats.
If it suspects an email is malicious, Tessian alerts employees using easy-to-understand language.
Want to learn more? Here are some resources: the Tessian Defender Data Sheet, Customer Stories, and the report To Prevent Spear Phishing, Look for Impersonation. If you’d rather talk to someone about your specific challenges, you can talk to an expert at Tessian.
Read Blog Post
Human Layer Security, Podcast
Episode 4: The Fear Factor with Dr. Karen Renaud and Dr. Marc Dupuis
By Laura Brooks
Wednesday, January 20th, 2021
We have a fascinating episode lined up for you this week, as I’m delighted to be joined by Dr. Karen Renaud and Dr. Marc Dupuis. Dr. Renaud is an esteemed Professor and Computer Scientist from Abertay University, whose research focuses on all aspects of human-centred security and privacy. Through her work, she says, she wants to improve the boundary where humans and cybersecurity meet. And Dr. Dupuis is an Assistant Professor within the Computing and Software Systems division at the University of Washington Bothell. He also specializes in the human factors of cybersecurity, primarily examining psychological traits and their relationship to the cybersecurity and privacy behaviour of individuals. Together they are exploring the use of fear appeals in cybersecurity, answering questions like whether they work and whether there are more effective ways to drive behavioral change. They recently shared their findings in the Wall Street Journal, in a brilliant article titled Why Companies Should Stop Scaring Employees About Security. And they’re here today to shed some more light on the topic. Karen, Marc, welcome to the podcast! Tim Sadler: To kick things off, let’s discuss that Wall Street Journal article, in which you essentially concluded that fear and scaremongering just don’t work when it comes to encouraging people to practice safer cybersecurity behaviors. So why is this the case? Dr. Marc Dupuis: Well, I think one of the interesting things if we look at the use of fear: fear is an emotion. And emotions are inherently short-term types of effects. So in some research that I did about eight years ago, one thing I looked at was trait affect – which is a generally stable, lifelong type of affect. And I tried to understand how it relates to how individuals, whether in an organizational setting or a home setting, perceive a threat, that cybersecurity threat, as well as their belief in being able to take protective measures to try and address that threat.
And one of the interesting things from that research was how important the role of self-efficacy was, but perhaps more importantly, the relationship between trait positive affect and self-efficacy. Trait positive affect is, in one respect, a general tendency toward feelings of happiness and positivity. And so what this gets at is: the higher the levels of positivity we have with respect to trait affect, the more confident we feel in being able to take protective measures.
So how this relates to fear is: if we need people to take protective measures, and we know that their self-efficacy, their level of confidence, is related to positive affect, why then are we continually going down the road of using fear – a short-term emotion – to try and engender behavioral change? And so that was an interesting conversation that Karen and I had, and then we started thinking, well, let’s take a look at the role of fear specifically. TS: Karen, what would you add to that? Dr. Karen Renaud: Well, you know, I had seen Marc’s background, and I’d always wanted to look at fear because I don’t like to be scared into doing things, personally. And I suspect I’m not unusual in that. And when we started to look at the literature, we just confirmed that businesses were trying to use a short-term measure to solve a long-term problem. TS: And so, yeah, I was gonna say, why do you think that is? It almost seems using fear is just such a default approach in so many things. I’m thinking about how people sell insurance, you know, it’s the fear, to try and drive people to believe that, hey, your home’s gonna get burgled tomorrow, you better get insurance so you can protect against the bad thing happening. Why do you think companies actually just go to fear as this almost carrot to get people to do what they’re supposed to do? KR: It feels to me as if the thing that intuitively you think will work often doesn’t work. So, you know, the nasty pictures they put on the side of cigarette packets actually are not very effective in stopping heavy smokers. Whereas somebody who doesn’t smoke thinks, oh my gosh, this is definitely going to scare people, and we’re going to get behavioral change, it actually doesn’t work. So sometimes intuition is just wrong.
I think in this case, it’s a case of not really doing the research the way we did, to say, actually, this is probably not effective, but going, well, intuitively this is going to work. You know, when I was at school, they used to cane kids to get them to study. Now we know that that was really a bad thing to do. Children don’t learn when they’re afraid. So we should start taking those lessons from education and applying them in the rest of our lives as well.
TS: Yeah, I think it’s a really good call that we just generally, as a society, need to do better at understanding how these kinds of fear appeals work and engage with people. And then, maybe if we go a layer deeper into this concept of fear tactics: are people becoming immune to fear tactics? 2020 was a really bad year; a lot of people faced heightened levels of stress and anxiety as a result of the pandemic and all of that change. Do you think that this is playing a part in why fear appeals don’t work? KR: Well, yeah, I think you’re right. The literature tells us that when people are targeted by a fear appeal, they can respond in one of two ways. They can either engage in a danger control response, which is what the designer of the fear appeal intends. For example: if you don’t make backups, you can lose all your photos if you get attacked. So the person engaging in a danger control response will make the backup – they’ll do as they’re told. But they might also engage in a fear control response, which is the other option. In this case, they don’t like the feeling of fear, and so they act to stop feeling it. They attack the fear, rather than the danger. They might go into denial or get angry with you. The upshot is they will not take the recommended action. So if cybersecurity is all you have to worry about, you might say, “Okay, I’m going to engage in that danger control response.” But we have so many fear appeals to deal with anyway. And this year, it’s been over the top. So if you add fear appeals to that, folks will just say, “I can’t be doing with this. I’m not going to take this on board.” So I think you’re absolutely right. People are fearful about other things as well as COVID, and this adds another layer to that. But what we also thought about was how ethical it actually is to add to people’s existing levels of anxiety and fear…
TS: And do you think that this, sort of, compounds? Do you think there’s a correlation between people already feeling naturally anxious and stressed about a bunch of other stuff, and adding one more thing to feel scared about being even less likely to have the intended results on changing their behavior? MD: Yeah, I mean, I think so. I think it just burns people out. And you kind of get this repeated messaging. You know, one thing I think about, because we in the States just got through this whole election cycle (and maybe we’re still in this election cycle), is all these political ads using fear time and time and time again. And especially with political ads, I think people do start to tune out. They just want to be done with it. And so it’s one of these things that, I think, just loses its efficacy; people have just had enough. I have a three-and-a-half-year-old son. And you know, my daughter was very good at listening to us when we said, “This is dangerous, don’t do this.” But my son ignores me when I say, “Don’t get up there, you’re gonna crack your head open.” He does it anyway. And he doesn’t crack his head open. And he says, “See, Daddy, I didn’t crack my head open.” And I’m like, no. But it gets to another point: if we scare people to get them to do something, but they don’t do it and nothing bad happens, it only reinforces the idea that “Oh, it can’t be that bad anyway.” KR: Yeah, you’re right. Because the cause and the effect are so far apart. If you divulge your email address or your password somewhere, and the attack comes much later, a lot of the time you don’t even make that connection. But it’s really interesting. If you look way back to the Second World War, Germany decided to bomb the daylights out of London.
And the idea was to make the Londoners so afraid that the British would capitulate. But what happened was a really odd thing: they became more defiant. And so we need to look back at that sort of thing. Somebody called McCurdy, who wrote a book about this, said people got tired of being afraid. And so they just said, “No, I don’t care how many bombs you’re throwing on us. We’re just not going to be afraid.” Nowadays, people have so many fear appeals thrown at them that the appeals are losing their efficacy. TS: A very timely example, talking about the Blitz in World War II, as I just finished reading a book about exactly that: the resilience of the British people through that particular period of time. And as you say, Karen, I knew very little about this topic, but it absolutely had the unintended consequence of bringing people together. It was like a rallying cry for the country to say, “We’re not going to stand for this, we are going to fight it.” And I guess everything you’re saying is reinforced by the research you conducted as well, which completely makes sense. I’m going to read from some notes here. In the research paper, you surveyed CISOs about their own use of fear appeals in their organization, and how Chief Information Security Officers actually engage with their employees. It said 55% were against using fear appeals, with one saying fear is known to paralyse normal decision making and reactions. And 36% thought that fear appeals were acceptable, with one saying that fear is an excellent motivator. And not a single CISO ranked scary messages as the most effective technique. What were your thoughts on these findings? Were you surprised by them?
MD: We were, I think, surprised that so many were against the use of fear appeals. You look at these individuals who are the chief people responsible for the information security of the organization, and here they’re coming out and telling us, yeah, we don’t believe in using fear appeals. And there are multiple reasons for this. One: maybe they don’t believe in the efficacy of it. But I think it’s also because, while we don’t know how effective it’s going to be, we do know that it can damage the employee/employer relationship. Add in the ethical issues related to it, and you start to add up the possible negative ramifications of using fear appeals. And it was interesting, even going back to that example during World War II: you think about why what England was doing was effective. It’s because they were in this together; they had this communal response of, you know, we’re sick of being scared, we’re in this together, we’re gonna fight this together. And I think maybe CISOs are starting to see that, to try and help make the employee/employer relationship more positive and empower their employees rather than trying to scare them and hurt that relationship. TS: And there was one really interesting finding, which was that you found the longest-serving CISOs – i.e. those with more experience – were more likely to approve the use of cybersecurity fear appeals. Why do you think that is? Is fear maybe kind of an old-school way of thinking about cybersecurity? KR: I think as a CISO, it’s really difficult to stay up to date with the latest research, the latest way of thinking. They spend a lot of time keeping their finger on the pulse of cyber threat models, the compromises hackers are coming up with. But if you go and look at the research, the attitudes towards users are slowly changing. And maybe the people who approve of fear appeals aren’t aware of that.
Or it might be that they’ve become so exasperated by the behavior of their employees over the years that they just don’t have the appetite for slower behavioral change mechanisms. You know, and I understand that exasperation. But I was really quite heartened to see that the others said no, this is not working – especially the younger ones. So you feel that cultural change is happening. TS: One thing I was gonna ask: there’s this interesting concept of the CISOs themselves, and whether they use fear appeals in their organization. Do you think that’s somewhat a function of how fear appeals are used on them, if that makes sense? They have a board that they’re reporting to, they have a boss, they have stakeholders that they’ve got to deliver results for – namely, keep the organization secure, keep our data secure, keep our people secure. Do you think there’s a relationship between how fear appeals are used on them and how they then use them on others in their organization? MD: I think that’s an interesting question. I mean, I think that’s always possible. And, you know, I think a lot of times people default to what they know, what they’re comfortable with, and what they’ve experienced. And maybe that’s why we see some of the CISOs that have been in the role longer default to that. And some of it might be organizational and structural as well. Like I said, if they are constantly being bombarded with fear appeals by those that they report to, then maybe they are more likely to engage in fear appeals themselves. It’s hard to say for sure. But I do think it’s an interesting question because, again, intuitively it makes sense. I can have a conversation with someone and, you know, if I want to use fear appeals, I don’t have to make a case for them. The case is almost intuitively made in and of itself.
But trying to counter that and say, well, maybe fear appeals don’t work – it’s a much bigger leap to make that argument than to say, “Well, yeah, let’s scare someone into doing something; of course that’s gonna work, right?”
TS: I think it’s an interesting point. And I think it’s really important that we also remember, certainly in the context of using fear appeals, that there is a role beyond the CISO as well. It’s the role the board plays, it’s the culture of the organization, and how you set those individuals up for success. On one hand, as a CISO, the sky is always falling. There is always some piece of bad news, or something that’s going wrong, or something you’re defending. So maybe there’s something in that for thinking about how organizations can empower CISOs, so that they can then go on to empower their people. And so, shifting gears slightly, we’ve spoken a lot about why fear appeals are maybe not a good idea, and how they are limited in their effectiveness. But what is the alternative? What advice would you give to the listeners on this podcast about how they can improve employee cybersecurity behavior through other means, especially as so many are now working remotely? KR: Well, going back to what Marc was saying, we think the key really is self efficacy. You’ve got to build confidence in people, without making them afraid. A lot of the cybersecurity training that you get in organizations is a two-hour session that brings everyone into a room and we talk with them. Or maybe people are required to do this online. This is not self efficacy. This is awareness. And there’s a big difference. The thing is, you can’t deliver cybersecurity knowledge and self efficacy like a COVID vaccination. It’s a long-term process, and employers really have to engage with the fact that it is a long-term process, and just keep building people’s confidence. What you said earlier about the whole community effect: up to now, cybersecurity has been treated as a solo game. But it’s not tennis; it’s a team sport. And we need to get all the people in the organization helping each other to spot phishing messages or whatever.
But you know, make it a community sport, number one. And everybody supports each other in building that level of self efficacy that we all need. TS: I love that. And, yeah, I think we said it earlier, but this concept of teamwork, of coming together, is so, so important. Marc, would you add anything to that in terms of these alternative means to fear appeals that leaders and CISOs can think about using with their employees? MD: Yeah, I mean, it’s not gonna be one-size-fits-all. But whatever approach we use, as Karen said, we really do need to tap into that self efficacy. By doing that, people are going to feel confident and empowered to take action. And we need to think about how people are motivated to take action. Fear is scaring them personally about consequences they may face, like termination or fines or something else. But if you start developing this in-this-together feeling, as I mentioned before, you’re developing an intrinsic motivation: “I’m not doing this because I’m fearful of the consequences” so much as “I’m doing this because we’re all in this together.” We want to make this better for everyone. We want to have a good company, we want to be able to help each other. And we want people to take the actions that are necessary to make sure that we are secure, and to feel they can come and talk about it. TS: Yeah, it’s exactly what both of you are saying: if somebody doesn’t have that self efficacy, they’re not going to raise things, they’re not going to bring them forward. And ultimately, that’s when disasters happen, and things can go really bad. Now, it makes complete sense that striking fear into the hearts of people isn’t necessarily going to have the desired outcome 100% of the time. But isn’t a little bit of fear needed?
I mean, when I say this, of course, it has to be used ethically. But when I’m thinking about the nature of what organizations are facing today (we’ve just heard about the SolarWinds hack, and there are a number of others as well), these things are pretty scary, and the techniques that are being used are pretty scary. So isn’t a little bit of fear required here? And is there any merit to using that to make people understand the severity and the consequences of what’s at stake? MD: Yeah, I think there’s a difference between fear and providing people with information that might inherently have scary components to it. What I mean by that is: when people use fear appeals, they’re doing it to scare people into complying with some specific goal. Instead, we should provide information to people – we should let people know that there are some possible things that can happen, some possible consequences – but not with the goal of scaring them; rather, with the goal of empowering them by giving them information. That, again, taps into self efficacy more than anything else, because then they know that there’s some kind of threat out there. They’re not scared, but they know there’s a threat. And if they feel empowered through knowledge and through that self efficacy, then they’re more likely to take action, as opposed to receiving a message that’s just designed to scare them into compliance.
TS: From your experience, can either of you think of any really good examples of companies or campaigns that have built this kind of self-efficacy, that have empowered people without having to use fear as the motivating factor?

KR: I think I mentioned one of them in the paper. There's an organization that I'm familiar with that had a major problem with phishing. They appointed one person, and if anybody had a suspicious message, they'd send it to him, and he'd say, "You were quite right to report this to me, thank you so much for being part of the security perimeter of this organization. But the email looks fine, you can click." Over time, this actually built up efficacy. They don't have phishing problems anymore in that organization, because they have this person. It's almost an informal thing he does, but he's building up self-efficacy slowly but surely across the organization, because nobody ever gets made to feel small or humiliated by reporting. We're all participating, we're all part of this. That is the best example I've seen of how this has worked.

TS: Yeah, I really like that. When people do risk audits, they will say that the time the alarm should sound is when there's nothing on the risk register. When the risk register is getting maybe five to 10 entries every single week, people actually do have the confidence to come forward. And they're paying attention, right? They're actually aware of these things.

Where I want to go next is to talk about this side of things in the cybersecurity vendor world. Many companies that are trying to provide solutions to organizations rely quite heavily on this concept of fear, uncertainty, and doubt. It's even got its own acronym, right? FUD. And FUD is used so heavily. As the saying goes, "bad news sells" — we see scary headlines, and massive data breaches dominate the media landscape.
So I think it's fair to say eliminating FUD is going to be tough, and there is a lot of work to do here. In your opinion, who is responsible for changing the narrative? And what advice would you give to them for how they can start doing this?

MD: I think it definitely starts with having these conversations and trying to, I guess, place a little uncertainty and doubt into those decision makers and CISOs about how effective fear really is. It's flipping the script a little bit. And maybe part of it is we need a new acronym — to say, give this a try, or this is why we think this is going to work, or this is what the research shows. This is what your peer organizations are doing, and they find it very effective; their employees feel more empowered. So a lot of it is just beginning with those conversations and trying to flip the script to help CISOs understand. It's always easy to criticize something, but the bigger question is: if we've been taking the use of fear and its effectiveness for granted, what are we going to replace it with?

We know that self-efficacy is the major player there, but what's that going to look like? I think Karen gave a great example of what one organization is doing, which is improving levels of self-efficacy. It's creating that spirit of "we're all in this together," and it's less about a formalized, punitive type of system. So it's about looking for ways to tap into that. For one organization it might look slightly different than for another, but the concepts will be the same.
TS: Again, that ties into a really important point, which is that more understanding is needed by the lay person, and by the people who are putting this out. And, Marc, to your point about this being a collective responsibility — I see it as a great opportunity as well, because I think everyone would welcome some more positivity and optimism, right? If we can actually bring that to the security community — which is, you know, generally a fearful community, focused on defense and threat actors, where the language, the aesthetic, everything is generally negative, fearful, scary — I think there's a great opportunity here. It doesn't have to be that way. We can come together, and we can have a much more positive dialogue and a much more positive response around it.

There was something else I wanted to touch on. Karen, in your research you talk about this concept of "Cybersecurity Differently." And you explain — I'm going to quote you verbatim here — "It's so important that we change mindsets from the human-as-a-problem to human-as-solution in order to improve cybersecurity across the sociotechnical system." What do you mean by that? And what are the core principles of Cybersecurity Differently?

KR: When you treat your users as a problem, that informs the way you manage them. So what you see in a lot of organizations, because they see their employees' behaviors as a problem, is that they'll train them, they'll constrain them, and then they'll blame them when things go wrong. That's the paradigm. But what you're actually doing is excluding them from being part of the solution. So it creates the very problem you're trying to solve. What you want is for everyone to feel that they're part of the security defense of the organization. I did this research with Marina Timmerman, from the Technical University of Darmstadt.
And so the principles are as follows.

One we've been speaking about a lot: encourage collaboration and communication between colleagues, so that people can support each other. We want to encourage everyone to learn; it should be a lifelong learning thing, not just something that IT departments have to worry about. It isn't a solo effort.

As I've said before, you have to build resilience as well as resistance. Currently, a lot of the effort goes into resisting anything that somebody could do wrong. But then you don't have a way of bouncing back when things do go wrong, because all the focus is on resistance.

And, you know, a lot of the time we treat security awareness training and policies as one-size-fits-all. But that doesn't defer to people's expertise. It doesn't go to the people and say, "Okay, here's what we're proposing — is it going to be possible for you to do these things in a secure way?" And if not, how can we support you to make what you're doing more secure?

Then, you know, people make mistakes. When a phishing message comes into an organization, everyone focuses on the people who fell for it. But there were many, many more people who didn't fall for it. What we need to do is examine the successes. What can we learn from those people? Why did they spot that phishing message? Then we can encourage that in the people who did happen to make mistakes.

I didn't get these ideas just out of the air; I got them from some very insightful people. One of them was Sidney Dekker, who has applied this paradigm in the safety field. What's interesting is that he got Woolworths in Australia to allow him to apply the paradigm in some of their stores. They previously had signs up all over the store — "Don't leave a mess here," "Don't do this" — and they had weekly training on safety. He said, right, we're taking all the signs out. Instead, we're just going to say: you have one job, don't let anyone get hurt.
And the stores that applied this approach won the safety prize for Woolworths the next year. Everyone realized it was their responsibility, and it wasn't all about fear and rules and that sort of thing. So I thought, if he could do this in safety, where people can actually be harmed for life or killed, surely we can do this in cyber?!

And then I found a man who ran a nuclear submarine in the United States. His name is David Marquet. He applied the same thing on his nuclear submarine — and you would think, oh my goodness, a nuclear submarine, there's so much potential for really bad things to happen! But he applied the same sort of paradigm shift, and it worked. He won the prize for the best-run nuclear submarine in the US Navy. So it's about being brave enough to say: what we're doing is not working, and every year it's not working. Maybe it's time to ask whether we can do something different. But, like you said, Marc, we need a brave organization to say, okay, we're going to try this. We haven't managed to find one yet. But we will, we will!

TS: That's one of the things I wanted to close out on. As I said at the beginning of this podcast, I love the article in the Wall Street Journal, but also the mission that both of you are on — to improve what I really see as the relationship between people and the cybersecurity function. And my question to you again touches on that: how much progress have we actually made? And, to close, how optimistic are you that we can actually flip the script and stop using fear appeals?

MD: Yeah, I feel like we've made a lot of progress, but not nearly enough. And part of the challenge, too, is that none of this stuff is static, right?
All of this is constantly changing; the cybersecurity threats out there change. We're talking so much about phishing today, and social engineering is going to be something different next year. So it's always this idea of playing catch-up. But it's also about having the fortitude to take that step, to take that leap of faith that maybe we can do something else besides using fear.
MD: I think I am optimistic that it can be done and that we can make a lot of progress. For it to actually be done to, you know, 100%… I don't know that we'll ever get to that point. But I feel like we can make a lot of progress. Part of this is recognizing — you mentioned the sociotechnical side of this — that this isn't just a technical problem, right? A lot of the time, the people we put into cybersecurity positions have a very strong technical background, but they're not bringing in other disciplines. From the arts, from literature, from the humanities, from design, we can bring new perspectives and look at this as a holistic, multidisciplinary problem. If the problem is like that, then the solutions definitely have to be as well.

We have to acknowledge that and start getting creative with the solutions. And we need those brave organizations to try these different approaches. I think they'll be pleased with the results, because they're probably spending a lot of time and money right now trying to make the organization more secure. The CISOs are telling their bosses, well, this is what we're doing — we're scaring them. But the results don't always speak for themselves.

TS: And, Karen, what would you add to that?

KR: Well, I just totally concur with everything Marc said; I think he's rounded this off very nicely. I ran a study recently — it was a really unusual study — where we put old-fashioned typewriters in coffee shops and all over, and we put pieces of paper in. We typed something along the top that said, "When I think about cybersecurity, I feel…" and we got unbelievable responses back from people: "I don't understand it," "I'm uncertain." Lots and lots of negative responses — so there's a lot of negative emotion around cyber. And that's not good for cybersecurity. So I'd really like to see something different.
And, you know, there's the old saying: if you keep doing the same thing without getting results, something is wrong. We can see it's not working, and this might be the best way of changing and making it work.

TS: I completely agree. I completely agree. Thank you both so much for that great discussion. I really enjoyed digging deeper and hearing your thoughts on all of this. As you say, I think it's a win-win scenario on so many counts. More positivity means better outcomes for employees, and I think it means better outcomes for the security function.

If you enjoyed our show, please rate and review it on Apple, Spotify, Google, or wherever you get your podcasts. And remember, you can access all the RE:Human Layer Security podcasts here.