18 Actionable Insights From Tessian Human Layer Security Summit
By Maddie Rosenthal
09 September 2020
In case you missed it, Tessian hosted its third (and final) Human Layer Security Summit of 2020 on September 9. This time, we welcomed over a dozen security and business leaders from the world's top institutions to our virtual stage, including:

Jeff Hancock, Professor at Stanford University
David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec
Merritt Baer, Principal Security Architect at AWS
Rachel Beard, Principal Security Technical Architect at Salesforce
Tim Fitzgerald, CISO at Arm
Sandeep Amar, CPO at MSCI
Martyn Booth, CISO at Euromoney
Kevin Storli, Global CTO and UK CISO at PwC
Elvis M. Chan, Supervisory Special Agent at the FBI
Nina Schick, Author of "Deep Fakes and the Infocalypse: What You Urgently Need to Know"
Joseph Blankenship, VP Research, Security & Risk at Forrester
Howard Schultz, Former CEO of Starbucks

While you can watch the full event on YouTube below, we've identified 18 valuable insights that security, IT, compliance, and business leaders should apply to their strategies as they round out this year and look forward to the next.
Here's what we learned at Tessian's most recent Human Layer Security Summit. Not sure what Human Layer Security is? Check out this guide, which covers everything you need to know about this new category of protection.

1. Cybersecurity is mission-critical

Security incidents – whether it's a ransomware attack, brute force attack, or data leakage from an insider threat – have serious consequences. Not only can people lose their jobs, but businesses can lose customer trust, revenue, and momentum. While this may seem obvious to security leaders, it may not be so obvious to individual departments, teams, and stakeholders. But it's essential that this is communicated (and re-communicated). Why? Because a company that's breached cannot fulfill its mission. Keep reading for insights and advice on keeping your company secure, all directly from your peers in the security community.

2. Most breaches start with people

People control our most sensitive systems and data. It makes sense, then, that most data breaches start with people. But that doesn't mean employees are the weakest link. They're a business' strongest asset! So it's all about empowering them to make better security decisions. That's why organizations have to adopt people-centric security solutions and strategies.
The good news is, security leaders don't face an uphill battle when it comes to helping employees understand their responsibility for cybersecurity…

3. Yes, employees are aware of their duty to protect data

Whether it's because of compliance standards, cybersecurity headlines in mainstream media, or a larger focus on privacy and protection at work, Martyn Booth, CISO at Euromoney, reminded us that most employees are actually well aware of the responsibility they bear when it comes to safeguarding data. This is great news for security leaders. It means the average employee will be more likely to abide by policies and procedures, will pay closer attention during awareness training, and will therefore contribute to a more positive security culture company-wide. Win-win.

4. But employees are more vulnerable to phishing scams outside of their normal office environment

While – yes – employees are more conscious of cybersecurity, the shift to remote working has also left them more vulnerable to attacks like phishing scams.

"We have three 'places': home, work, and where we have fun. When we combine two places into one, it's difficult psychologically. When we're at home sitting at our coffee table, we don't have the same cues that remind us to think about security that we do in the office. This is a huge disruption," Jeff Hancock, Professor at Stanford University, explained.

Unfortunately, hackers are taking advantage of these psychological vulnerabilities. And, as David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec, pointed out, this isn't anything new. Cybercriminals have always been opportunistic in their attacks and therefore take advantage of chaos and emotional distress. To prevent successful opportunistic attacks, he recommends that you:

Reassess what the new baseline is for attacks
Educate employees on what threats look like today, given recent events
Identify which brands, organizations, people, and departments may be impersonated (and targeted) in relation to the pandemic

But it's not just inbound email attacks we need to be worried about.

5. They're more likely to make other mistakes that compromise cybersecurity, too

This change to our normal environment doesn't just affect our ability to spot phishing attacks. It also makes us more likely to make other mistakes that compromise cybersecurity. Across nearly every session, our guest speakers said they've seen more incidents involving human error, and that security leaders should expect this trend to continue. That's why training, policies, and technology are all essential components of any security strategy. More on this below.

6. Security awareness training has to be ongoing and ever-evolving

At our first Human Layer Security Summit back in March, Mark Logsdon, Head of Cyber Assurance and Oversight at Prudential, highlighted three key flaws in security awareness training:

It's boring
It's often irrelevant
It's expensive

What he said is still relevant six months on, and it's a bigger problem than ever, especially now that the perimeter has disappeared, security teams are short-handed, and individual employees are working at home and on their own devices. So, what can security leaders do? Kevin Storli, Global CTO and UK CISO at PwC, highlighted the importance of tailoring training to ensure it's always relevant.
That means that instead of just reminding employees about compliance standards and the importance of a strong password, we should also be focusing on educating employees about remote access, endpoints, and BYOD policies. But one training session isn't enough to make security best practice really stick. These lessons have to be constantly reinforced through gamification, campaigns, and technology.

Tim Fitzgerald, CISO at Arm, highlighted how Tessian's in-the-moment warnings have helped his employees make the right decisions at the right time. "Warnings help create that trigger in their brain. It makes them pause and gives them that extra breath before taking the next potentially unsafe step. This is especially important when they're dealing with data or money. Tessian ensures they question what they're doing," he said.
7. You have to combine human policies with technical controls to ensure security

It's clear that technology and training are both valuable. That means your best bet is to combine the two. In discussion with Ed Bishop, Tessian Co-Founder and CTO, Merritt Baer, Principal Security Architect at AWS, and Rachel Beard, Principal Security Technical Architect at Salesforce, both highlighted how important it is for organizations to combine policies with technical controls. But security teams don't have to shoulder the burden alone. When using tools like Salesforce, for example, organizations can really lean on the vendor to understand how to use the platform securely. Whether it's 2FA, customized policies, or data encryption, many security features will be built-in.

8. But… Zero Trust security models aren't always the answer

While – yes – it's up to security teams to ensure policies and controls are in place to safeguard data and systems, too many policies and controls could backfire. That means that "Zero Trust" security models aren't necessarily the best way to prevent breaches.
9. Security shouldn't distract people from their jobs

Security teams implement policies and procedures, introduce new software, and make training mandatory for good reason. But, if security becomes a distraction for employees, they won't exercise best practice. The truth is, they just want to do the job they were hired to do!

Top tip from the event: Whenever possible, make training and policies customized, succinct, and relevant to individual people or departments.

10. It also shouldn't prevent them from doing their jobs

This insight goes back to the idea that "Zero Trust" security models may not be the best way forward. Why? Because, as Rachel, Merritt, Sandeep, and Martyn all pointed out: if access controls or policies prevent an employee from doing their job, they'll find a workaround or a shortcut. But security should stop threats, not workflows. That's why the most secure path should also be the path of least resistance. Security strategies should find a balance between the right controls and the right environment.

This, of course, is a challenge, especially when it comes to rule-based solutions. "If-then" controls are blunt instruments. Solutions powered by machine learning, on the other hand, detect and prevent threats without getting in the way. You can learn more about the limitations of traditional data loss prevention solutions in our report, The State of Data Loss Prevention 2020.

11. Showing downtrending risks helps demonstrate the ROI of security solutions

Throughout the event, several speakers mentioned that preemptive controls are just as important as remediation. And it makes sense. Better to detect risky behavior before a security incident happens, especially given the time and resources required in the event of a data breach. But tracking risky behavior is also important. That way, security leaders can clearly demonstrate the ROI of security solutions. Martyn Booth, CISO at Euromoney, explained how he uses Tessian Human Layer Security Intelligence to monitor user behavior, influence safer behavior, and track risk over time.

"We record how many alerts are sent out and how employees interact with those alerts. Do they follow the acceptable use policy or not? Then, through our escalation workflows that ingest Tessian data, we can escalate or reinforce. From that, we've seen incidents involving data exfiltration trend downwards over time. This shows a really clear risk reduction," he said.

12. Targeted attacks are becoming more difficult to spot and hackers are using more sophisticated techniques

As we mentioned earlier, hackers take advantage of psychological vulnerabilities. But social media has turbo-charged cybercrime, enabling cybercriminals to create more sophisticated attacks that can be directed at larger organizations. Yes, even those with strong cybersecurity. Our speakers mentioned several examples, including Garmin and Twitter. So, how do they do it? Research! LinkedIn, company websites, out-of-office messages, press releases, and news articles all provide valuable information that a hacker could use to craft a believable email. But there are ways to limit open-source recon. See tips from David Kennedy, Co-Founder and Chief Hacking Officer at TrustedSec, below.
13. Deepfakes are a serious concern

Speaking of social media, Elvis M. Chan, Supervisory Special Agent at the FBI, and Nina Schick, Author of "Deep Fakes and the Infocalypse: What You Urgently Need to Know", took a deep dive into deepfakes. And, according to Nina, "This is not an emerging threat. This threat is here. Now." While we tend to associate deepfakes with election security, it's important to note that this is a threat that affects businesses, too.

In fact, Tim Fitzgerald, CISO at Arm, cited an incident in which his CEO was impersonated in a deepfake over WhatsApp. The ask? A request to move money. According to Tim, it was quite compelling. Unfortunately, deepfakes are surprisingly easy to make, and generation is outpacing detection. But clear policies and procedures around authenticating and approving requests can ensure these scams aren't successful. Not sure what a deepfake is? We cover everything you need to know in this article: Deepfakes: What Are They and Why Are They a Threat?

14. Supply chain attacks are, too

In conversation with Henry Trevelyan Thomas, Head of Customer Success at Tessian, Kevin Storli, Global CTO and UK CISO at PwC, discussed how organizations with large supply chains are especially vulnerable to advanced impersonation attacks like spear phishing. "It's one thing to ensure your own organization is secure. But, what about your supply chain? That's a big focus for us: ensuring our supply chain has adequate security controls," he said. Why is this so important? Because hackers know large organizations like PwC will have robust security strategies. So, they'll look for vulnerabilities elsewhere to gain a foothold. That's why strong cybersecurity can actually be a competitive differentiator and help businesses attract (and keep) more customers and clients.

15. People will generally make the right decisions if they're given the right information

88% of data breaches start with people. But that doesn't mean people are careless or malicious. They're just not security experts. That's why it's so important security leaders provide their employees with the right information at the right time. Both Sandeep Amar, CPO at MSCI, and Tim Fitzgerald, CISO at Arm, talked about this in detail. It could be a guide on how to spot spear phishing attacks or – as we mentioned in point #6 – in-the-moment warnings that reinforce training. Check out their sessions for more insights.

16. Success comes down to people

While we've talked a lot about human error and psychological vulnerabilities, one thing was made clear throughout the Human Layer Security Summit: a business's success is completely reliant on its people. And we don't just mean in terms of security. Howard Schultz, Former CEO of Starbucks, offered some incredible advice around leadership which we can all heed, regardless of our role. In particular, he recommended:

Creating company values that really guide your organization
Ensuring every single person understands how their role is tied to the goals of the organization
Leading with truth, transparency, and humility
17. But people are dealing with a lot of anxiety right now

Whether you're a CEO or a CISO, you have to be empathetic towards your employees. And the fact is, people are dealing with a lot of anxiety right now. Nearly every speaker mentioned this. We're not just talking about the global pandemic. We're talking about racial and social inequality. Political unrest. New working environments. Bigger workloads. Mass lay-offs.

Joseph Blankenship, VP Research, Security & Risk at Forrester, summed it up perfectly, saying, "We have an anxiety-ridden user base and an anxiety-ridden security base trying to work out how to secure these new environments. We call them users, but they're actually human beings and they're bringing all of that anxiety and stress to their work lives." That means we all have to be human first. And, with all of this in mind, it's clear that…

18. The role of the CISO has changed

Sure, CISOs are – as the name suggests – responsible for security. But, to maintain security company-wide, initiatives have to be perfectly aligned with business objectives, and every individual department, team, and person has to understand the role they play. Kevin Storli, Global CTO and UK CISO at PwC, touched on this in his session.

"To be successful in implementing security change, you have to bring the larger organization along on the journey. How do you get them to believe in the mission? How do you communicate the criticality? How do you win the hearts and minds of the people? CISOs no longer live in the back office and address just tech aspects. It's about being a leader and using security to drive value."

That's a tall order and means that CISOs have to wear many hats. They need to be technology experts while also being laser-focused on the larger business. And, to build a strong security culture, they have to borrow tactics from HR and marketing. The bottom line: the role of the CISO is more essential now than ever. It makes sense. Security is mission-critical, remember?

If you're looking for even more insights, make sure you watch the full event, which is available on-demand. You can also check out previous Human Layer Security Summits on YouTube.
Why We Click: The Psychology Behind Phishing Scams and How to Avoid Being Hacked
07 September 2020
We all know the feeling, that awful sinking in your stomach when you realize you've clicked a link that you shouldn't have. Maybe it was late at night, or you were in a hurry. Maybe you received an alarming email about a problem with your paycheck or your taxes. Whatever the reason, you reacted quickly and clicked a suspicious link or gave away personal information only to realize you made a dangerous mistake.

You're not alone. In a recent survey conducted by my company Tessian, two-fifths (43%) of people admitted to making a mistake at work that had security repercussions, while nearly half (47%) of people working in the tech industry said they've clicked on a phishing email at work. In fact, most data breaches occur because of human error. Hackers are well aware of this and know exactly how to manipulate people into slipping up. That's why email scams — also known as phishing — are so successful.

Phishing has been a persistent problem during the COVID-19 pandemic. In April, Google alone saw more than 18 million daily email scams related to COVID-19 in a single week. Hackers are taking advantage of psychological factors like stress, social relationships and uncertainty that affect people's decision-making. Here's a look at some of the psychological factors that make people vulnerable and what to look out for in a scam.
Stress and Anxiety Take A Toll

Hackers thrive during times of uncertainty and unrest, and 2020 has been a heyday for them. In the last few months they've posed as government officials, urging recipients to return stimulus checks or unemployment benefits that were "overpaid" and threatening jail time. They've also impersonated health officials, prompting the World Health Organization to issue an alert warning people not to fall for scams implying association with the organization. Other COVID scams have lured users by offering antibody tests, PPE and medical equipment. Where chaos leads, hackers follow.

The stressful events of this year mean that cybersecurity is not top-of-mind for many of us. But foundational principles of human psychology also suggest that these same events can easily lead to poor or impulsive decisions online. More than half (52%) of those in our survey said that stress causes them to make more mistakes. The reason for this has to do with how stress impacts our brains, specifically our ability to weigh risk and reward. Studies have shown that anxiety can disrupt neurons in the brain's prefrontal cortex that help us make smart decisions, while stress can cause people to weigh the potential reward of a decision over possible risks, to the point where they even ignore negative information.

When confronted with a potential scam, it's important to stop, take a breath, and weigh the potential risks and negative information like suspicious language or misspelled words. Urgency can also add stress to an otherwise normal situation — and hackers know to take advantage of this. Look out for emails, texts or phone calls that demand money or personal information within a very short window.

Hacking Your Network

Some of the most common phishing scams impersonate someone in your "known" network, but your "unknown" network can also be manipulated. Your known network consists of your friends, family and colleagues — people you know and trust. Hackers exploit these relationships, betting they can sway someone to click on a link if they think it's coming from someone they know. These impersonation scams can be quite effective because they introduce emotion to the decision-making process. If a phone call or email claims your family member needs money for a lawyer or a medical procedure, fear or worry replace logic. Online scams promising money add greed into the equation, while phishing emails impersonating someone in authority or someone you admire, like a boss or colleague, cloud deductive reasoning with our desire to be liked. The difference between clicking a dangerous link or deleting the email can involve simply recognizing the emotions being triggered and taking a second look with logic in mind.

Meanwhile, the rise of social media and the abundance of personal information online has allowed hackers to impersonate your "unknown" network as well — people you might know. Hackers can easily find out where you work or where you went to school and use that information to send an email posing as a college alumnus to seek money or personal information. An easy way to check a suspicious email is by looking beyond the display name to examine the full email address of the sender by clicking the name. Scammers will often change, delete or add on a letter to an email address.
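To make that last check concrete, here is a minimal sketch, in Python, of how a lookalike sender domain can be flagged by comparing it against domains you already trust. This is an illustration of the idea only, not a Tessian feature; the trusted-domain list and similarity threshold are hypothetical.

# Illustrative only: flag a sender domain that is suspiciously similar to (but not
# exactly) a domain you normally correspond with, e.g. "tesssian.com" vs "tessian.com".
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"tessian.com", "stanford.edu"}  # hypothetical allow-list

def sender_domain(address: str) -> str:
    # Return the domain part of an email address, lower-cased.
    return address.rsplit("@", 1)[-1].lower()

def looks_like_impersonation(address: str, threshold: float = 0.85) -> bool:
    # True if the domain is similar to, but not equal to, a trusted domain.
    domain = sender_domain(address)
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: the domain itself is legitimate
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_impersonation("ceo@tesssian.com"))  # True: one letter was added
print(looks_like_impersonation("ceo@tessian.com"))   # False: legitimate domain

A human can do the same thing by eye: slow down, read the full address character by character, and ask whether the domain really matches the organization it claims to come from.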
The Impact of Distraction and New Surroundings

The rise of remote work brought on by COVID-19 can also impact people's psychological states and make them vulnerable to scams. Remote work can bring an overwhelming combination of video call fatigue, an "always on" mentality and household responsibilities like childcare. In fact, 57% of those surveyed in our report said they feel more distracted when working from home. Why is this a problem from a cybersecurity standpoint? Distraction can impair our decision-making abilities. Forty-seven percent of employees cited distraction as the top reason for falling for a phishing scam. While many people tend to have their guard up in a physical office, we tend to relax at home and may let our guard down, even if we're working. With an estimated 70% of employees working from home part or full-time due to COVID-19, this creates an opportunity for hackers.

It's also more difficult to verify a legitimate request from an impersonation when you're not in the same office as a colleague. One common scam impersonates an HR staff member to request personal information from employees at home. When in doubt, don't click any links, download attachments or provide sensitive data like passwords, financial information or a social security number until you can confirm a request with a colleague directly.

Self-Care and Awareness

These scams will always be out there, but that doesn't mean people should constantly worry and keep their guard up — that would be exhausting. A simple combination of awareness and self-care when online can make a big difference. Once you know the tactics a hacker might use and the psychological factors like stress, emotions and distraction to look out for, it will be easier to spot an email scam without the anxiety. It's also important to take breaks and prioritize self-care when you're feeling stressed or tired. Step away from the computer when you can, and have a conversation with your manager about why the pressure to be "always-on" when working remotely can have a negative impact psychologically and create cybersecurity risks. By understanding why people fall for these scams, we can start to find ways to easily identify and avoid them.

This article was originally published in Fast Company and was co-authored by Tim Sadler, CEO of Tessian, and Jeff Hancock, Harry and Norman Chandler Professor of Communication at Stanford University.
How to Avoid Falling Victim to Voting Scams in the 2020 U.S. Election
By Laura Brooks
28 August 2020
Scammers thrive in times of crisis and confusion. This is perhaps why the controversy surrounding mail-in voting could prove to be another golden opportunity for cybercriminals. Throughout 2020, we've seen a surge of cybercriminals capitalizing on key and newsworthy moments in the COVID-19 crisis, creating scams that take advantage of the stimulus checks, the Paycheck Protection Program and students heading back to school. Knowing that people are seeking answers during uncertain times, hackers craft scams – usually in the form of phishing emails – that appear to provide the information people are looking for. Instead, victims are lured to fake websites that are designed to steal their valuable personal or financial information.

Hackers are creating websites related to mail-in voting

Given the uncertainties surrounding election security and voters' safety during the pandemic, fueled further by President Trump's recent attacks against the US Postal Service, it's highly likely that scammers could set their sights on creating scams associated with mail-in voting. In fact, our researchers discovered that around 75 domains spoofing websites related to mail-in voting were registered between July 2 and August 6. Some of these websites tout information about voting-by-mail, such as mymailinballot.com and mailinyourvote.com. Others encourage voters to request or track their ballot, such as requestmailinballot.com and myballotracking.com. Anyone accessing these websites should be wary, though. Keep reading to find out why.

What risks do these spoofed domains pose?

To understand the risks these spoofed domains pose, consider why hackers create them. They're after sensitive information like your name, address, and phone number, as well as financial information like your credit card details. For example, if a malicious website claims to offer visitors a way to register to vote or cast their vote – which several of these newly created domains did – there will be a form that collects personally identifiable information (PII). Likewise, if a malicious website is asking for donations, visitors will be asked to enter credit card details. If any of this information falls into the wrong hands, it could be sold on the dark web, resulting in identity theft or payment card fraud. Of course, not every domain that our researchers discovered can be deemed malicious. But it's important you stay vigilant and never provide personal information unless you trust the domain.
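To illustrate the kind of screening described above, here is a simplified sketch in Python. It is not the methodology our researchers used; the keyword list and "official" sites are illustrative assumptions only, and a real analysis would also consider registration dates, registrants, and page content.

# Simplified sketch: flag a newly registered domain that contains voting-related
# keywords but is not a known official election resource.
VOTING_KEYWORDS = ("mailin", "ballot", "vote", "voting")
OFFICIAL_SITES = {"usa.gov", "vote.gov"}  # illustrative, not an exhaustive list

def is_suspicious(domain: str) -> bool:
    domain = domain.lower().strip(".")
    if domain in OFFICIAL_SITES or domain.endswith(".gov"):
        return False  # government domains are out of scope for this check
    return any(keyword in domain for keyword in VOTING_KEYWORDS)

for d in ("mymailinballot.com", "requestmailinballot.com", "usa.gov", "example.com"):
    print(d, "->", "flag for review" if is_suspicious(d) else "ok")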
So, how can voters avoid falling for mail-in voting scams? Here are some tips to help you avoid falling victim to voting scams in the upcoming election:

1. Find answers online, but don't trust everything you read

It's perfectly reasonable to look online for answers about how to vote. There's a lot of useful information about ordering absentee ballots and locating local secure ballot boxes. However, be aware that there is a lot of misinformation online, particularly around this year's election. Source information from trusted websites like https://www.usa.gov/how-to-vote.

2. Think twice before sharing personal details

Before entering any personal or financial details, always check the URL of the domain and verify the legitimacy of the service by calling them directly. Question domains or pop-ups that request personal information from you, especially as it relates to your voting preference or other personal information.

3. Never share direct deposit details, credit card information, or your Social Security number on an unfamiliar website

This information should be kept private and confidential. If a website asks you to share details like this, walk away.

Keep up with our blog for more insights, analysis, and tips for staying safe online.
August Cybersecurity News Roundup
By Maddie Rosenthal
28 August 2020
The end of the month means another roundup of the top cybersecurity headlines. Keep reading for a summary of the top 12 stories from August. Bonus: We've included links to extra resources in case anything piques your interest and you want to take a deeper dive. Did we miss anything? Email [email protected]

Russian charged with trying to recruit Tesla employee to plant malware

Earlier this week, news broke that the FBI had arrested Egor Igorevich Kriuchkov – a 27-year-old Russian citizen – for trying to recruit a Tesla employee to plant malware inside the Gigafactory Nevada. The plan? Insert malware into the electric car maker's system, causing a distributed denial of service (DDoS) attack to occur. This would essentially give hackers free rein over the system. But instead of breaching the network, the Russian-speaking employee turned down Egor's million-dollar offer (to be paid in cash or bitcoin) and instead worked closely with the FBI to thwart the attack.

Feds warn election officials of potentially malicious 'typosquatting' websites

Stories of election fraud have dominated headlines over the last several months. The latest story involves suspicious "typosquatting" websites that may be used for credential harvesting, phishing, and influence operations.
While the FBI hasn't yet identified any malicious incidents, they have found dozens of illegitimate websites that could be used to interfere with the 2020 vote. To stay safe, make sure you double-check any URLs you've typed in and never input any personal information unless you trust the domain.

Former Google engineer sent to prison for stealing robocar secrets

An insider threat at Google who exfiltrated 14,000 files five years ago has been sentenced to 18 months in prison. The sentencing came four months after Anthony Levandowski pleaded guilty to stealing trade secrets, including diagrams and drawings related to simulations, radar technology, source code snippets, PDFs marked as confidential, and videos of test drives. He's also been ordered to pay more than $850,000. Looking for more information about the original incident? Check out this article: Insider Threats: Types and Real-World Examples. All the information you need is under Example #4.

For six months, security researchers have secretly distributed an Emotet vaccine across the world

Emotet – one of today's most skilled malware groups – has caused security and IT leaders headaches since 2014. But earlier this year, James Quinn, a malware analyst working for Binary Defense, discovered a bug in Emotet's code and was able to put together a PowerShell script that exploited the registry key mechanism to crash the malware. According to ZDNet, he essentially created "both an Emotet vaccine and killswitch at the same time." Working with Team CYMRU, Binary Defense handed over the "vaccine" to national Computer Emergency Response Teams (CERTs), which then spread it around the world to companies in their respective jurisdictions.

Online business fraud down, consumer fraud up

New research from TransUnion shows that between March and July, hackers have started to change their tactics. Instead of targeting businesses, they're now shifting their focus to consumers. Key findings include:

Consumer fraud has increased 10%, while business fraud has declined 9% since the beginning of the pandemic
Nearly one-third of consumers have been targeted by COVID-19 related fraud
Phishing is the most common method used in fraud schemes

You can read the full report here.

FBI and CISA issue warning over increase in vishing attacks

A joint warning from the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) was released in mid-August, cautioning the public that they've seen a spike in voice phishing attacks (known as vishing). They've attributed the increase in attacks to the shift to remote working. Why? Because people are no longer able to verify requests in-person. Not sure what vishing is? Check out this article, which outlines how hackers are able to pull off these attacks, how you can spot them, and what to do if you're targeted.

TikTok sues U.S. government over Trump ban

In last month's cybersecurity roundup, we outlined why India had banned TikTok and why America might be next. 30 days later, we have a few updates. On August 3, President Trump said TikTok would be banned in the U.S. unless it was bought by Microsoft (or another company) before September 15. Three days later, Trump signed an executive order barring US businesses from making transactions with TikTok's parent company, ByteDance. The order will go into effect 45 days after it was signed. A few weeks later, ByteDance filed a lawsuit against the U.S.
government, arguing the company was denied due process to argue that it isn't actually a national security threat. In the meantime, TikTok is continuing its sales conversations with Microsoft and Oracle. Stay tuned next month for an update on what happens in the next 30 days.

A Stanford deception expert and cybersecurity CEO explain why people fall for online scams

According to a new research report – The Psychology of Human Error – nearly half of employees have made a mistake at work that had security repercussions. But why? Employees say stress, distraction, and fatigue are part of the problem and drive them to make more mistakes at work, including sending emails to the wrong people and clicking on phishing emails. And, as you might expect, the sudden transition to remote work has only added fuel to the fire. 57% of employees say they're even more distracted when working from home. To avoid making costly mistakes, Jeff Hancock, a professor at Stanford, recommends taking breaks and prioritizing self-care. Of course, cybersecurity solutions will help prevent employees from causing a breach, too.

University of Utah pays $457,000 to ransomware gang

On August 21, the University of Utah posted a statement on its website saying that it was the victim of a ransomware attack and, to avoid hackers leaking sensitive student information, it paid $457,000. But, according to the statement, the hackers only managed to encrypt 0.02% of the data stored on its servers. While the University hasn't revealed which ransomware gang was behind the attack, it has confirmed that the attack took place on July 19, that it was the College of Social and Behavioral Sciences that was hacked, and that the university's cyber insurance policy paid for part of the ransom.

Verizon analyzed the COVID-19 data breach landscape

This month, Verizon updated its annual Data Breach Landscape Report to include new facts and figures related to COVID-19. Here are some of the trends to look out for based on their findings:

Breaches caused by human error will increase. Why? Many organizations are operating with fewer staff than before due to either illness or layoffs. Some staff may also have limitations because of new remote working set-ups. When you combine that with larger workloads and more distractions, we're bound to see more mistakes.
Organizations should be especially wary of stolen-credential related hacking, especially as many IT and security teams are working to lock down and maintain remote access.
Ransomware attacks will increase in the coming months.

SANS Institute phishing attack leads to theft of 28,000 records

The SANS Institute – a global cybersecurity training and certifications organization – revealed that nearly 30,000 accounts of PII were compromised in a phishing attack that convinced an end-user to install a self-hiding and malicious Office 365 add-on. While no passwords or financial information were compromised and all the affected individuals have been notified, the breach goes to show that anyone – even cybersecurity experts – can fall for phishing scams.

The cybersecurity skills shortage is getting worse

In March, Tessian released its Opportunity in Cybersecurity Report, which set out to answer one (not-so-simple) question: Why are there over 4 million unfilled positions in cybersecurity, and why is the workforce twice as likely to be male than female?
The answer is multi-faceted and has a lot to do with a lack of knowledge of the industry and inaccurate perceptions of what it means to work in cybersecurity.  The bad news is, it looks like the problem is getting worse. A recent report, The Life and Times of Cybersecurity Professionals 2020, shows that only 7% of cybersecurity professionals say their organization has improved its position relative to the cybersecurity skills shortage in the last several years. Another 58% say their organizations should be doing more to bridge the gap. What do you think will help encourage more people to join the industry?  That’s all for this month! Keep up with us on social media and check our blog for more updates.
Deepfakes: What are They and Why are They a Threat?
By Ed Bishop
21 August 2020
According to a recent Tessian survey, 74% of IT leaders think deepfakes are a threat to their organizations' and their employees' security*. Are they right to be worried? We take a look.

What is a deepfake?

A deepfake is a video or audio recording, generated using artificial intelligence, that convincingly impersonates a real person – putting words in their mouth they never said, or showing them doing things they never did.
How could deepfakes compromise security?

"Hacking humans" is a tried and tested method of attack used by cybercriminals to breach companies' security, access valuable information and systems, and steal huge sums of money. In the world of cybersecurity, attempts to "hack humans" are known as social engineering attacks. In layman's terms, social engineering is simply an attempt to trick people. These tactics and techniques have been around for years, and they are constantly evolving.

For example, cybercriminals have realized that the "spray-and-pray" phishing campaigns they previously used were losing their efficacy. Why? Because companies have strengthened their defenses against these bulk attacks and people have begun to recognize the cues that signalled a scam, such as poor grammar or typos. As a result, hackers have moved to crafting more sophisticated and targeted spear phishing attacks, impersonating senior executives, third party suppliers, or other trusted authorities in emails to deceive employees. Some even play the long game, building rapport with their targets over time before asking them to wire money or share credentials. Attackers will also directly spoof the sender's domain and add company logos to their messages to make them look more legitimate. It's working. Last year alone, scammers made nearly $1.8 billion through Business Email Compromise attacks. While spear phishing attacks take more time and effort to create, they are more effective and the ROI for an attacker is much higher.

So, what does this have to do with deepfakes? Deepfakes – either as videos or audio recordings – are the next iteration of advanced impersonation techniques malicious actors can use to abuse trust and manipulate people into complying with their requests. These attacks have proven even more effective than targeted email attacks. As the saying goes, seeing – or hearing – is believing. If an employee believes that the person on the video call in front of them is the real deal – or if the person calling them is their CEO – then it's unlikely that they would ignore the request. Why would they question it?
Examples of deepfakes

In 2019, cybercriminals mimicked the voice of a CEO at a large energy firm, demanding a fraudulent transfer of £220,000. And, just last month, Twitter also experienced a major security breach after employees were targeted by a "phone spear phishing attack" or "vishing" attack. Targeted employees received phone calls from hackers posing as IT staff, tricking them into sharing passwords for internal tools and systems. While it's still early days and, in some cases, the deepfake isn't that convincing, there's no denying that deepfake technology will continue to get better, faster, and cheaper in the near future. You just have to look at advanced algorithms like GPT-3 to see how quickly it can become a reality.

Earlier this year, OpenAI released GPT-3—an advanced natural language processing (NLP) algorithm that uses deep learning to produce human-like text. It's so convincing, in fact, that a student used the tool to produce a fake blog post that landed in the top spot on Hacker News—proving that AI-written content can pass as human-authored.
It's easy to see why the security community is scared about the potential impact of deepfakes. Gone are the days of hackers drafting poorly written emails, full of typos and grammatical errors. Using AI, they can craft highly convincing messages that actually look like they've been written by the people they're impersonating.

Who is most likely to be targeted by deepfake scams?

The truth is, anyone could be a target. There is no one group of people more likely than another to be targeted by deepfakes. Within your organization, though, it is important to identify who might be most vulnerable to these types of advanced impersonation scams and make them aware of how – and on what channels – they could be targeted. For example, a less senior employee may have no idea what their CEO sounds like or even looks like. That makes them a prime target.

It's a similar story for new joiners. Hackers will do their homework, trawl through LinkedIn, and prey on new members of staff, knowing that it's unlikely they would have met senior members of the organization. New joiners, therefore, would not recognize their voices if they receive a call from them. Attackers may also pretend to be someone from the IT team who's carrying out a routine set-up exercise. This would be an opportune time to ask their targets to share account credentials. As new joiners have no reference points to verify whether the person calling them is real or fake – or if the request they're being asked to carry out is even legitimate – it's likely that they'll fall for the scam.
How easy are deepfakes to make?

Researchers have shown that you only need about one minute of audio to create an audio deepfake, while "talking head" style fake videos require around 40 minutes of input data. If your CEO has spoken at an industry conference, and there's a recording of it online, hackers have the input data they need to train their algorithms and create a convincing deepfake. But crafting a deepfake can take hours or days, depending on the hacker's skill level. For reference, Timothy Lee, a senior tech reporter at Ars Technica, was able to create his own deepfake in two weeks, and he spent just $552 doing it. Deepfakes, then, are a relatively simple but effective way to hack an organization. Or even an election.
How could deepfakes compromise election security?

There's been a lot of talk about how deepfakes could be used to compromise the security of the 2020 U.S. presidential election. In fact, an overwhelming 76% of IT leaders believe deepfakes will be used as part of disinformation campaigns in the election*. Fake messages about polling site disruptions, opening hours, and voting methods could affect turnout or prevent groups of people from voting. Worse still, disinformation and deepfake campaigns – whereby criminals swap out the messages delivered by trusted voices like government officials or journalists – threaten to cause even more chaos and confusion among voters.

Elvis Chan, a Supervisory Special Agent assigned to the FBI, told us that people are right to be concerned. "Deepfakes may be able to elicit a range of responses which can compromise election security," he said. "On one end of the spectrum, deepfakes may erode the American public's confidence in election integrity. On the other end of the spectrum, deepfakes may promote violence or suppress turnout at polling locations." So, how can you spot a deepfake, and how can you protect your people from them?
How to protect yourself and your organization from deepfakes

Poorly-made video deepfakes are easy to spot – the lips are out of sync, the speaker isn't blinking, or there may be a flicker on the screen. But, as the technology improves over time and NLP algorithms become more advanced, it's going to be more difficult for people to spot deepfakes and other advanced impersonation scams. Ironically, AI is one of the most powerful tools we have to combat AI-generated attacks. AI can understand patterns and automatically detect unusual patterns and anomalies – like impersonations – faster and more accurately than a human can.

But we can't just rely on technology. Education and awareness amongst people is also incredibly important. It's therefore encouraging to see that 61% of IT leaders are already educating their employees on the threat of deepfakes and another 27% have plans to do so.* To help you out, we've put together some of our top tips which you and your employees can follow if you are being targeted by a deepfake or vishing attack:

Pause and question whether it seems right for a colleague – senior or otherwise – to ask you to carry out the request.
Verify the request with the person directly via another channel of communication, such as email or instant messaging. People will not mind if you ask.
Ask the person requesting an action something only you and they would know, to verify their identity. For example, ask them what their partner's name is or what the office dog is called.
Report incidents to the IT team. With this knowledge, they will be able to put in place measures to prevent similar attacks in the future.

*About the research: Tessian surveyed 250 IT decision makers using third-party research company OnePoll.
Is Your Office 365 Email Secure?
By Maddie Rosenthal
20 August 2020
In July this year, Microsoft took down a massive fraud campaign that used knock-off domains and malicious applications to scam its customers in 62 countries around the world. But this wasn't the first time a successful phishing attack was carried out against Office 365 (O365) customers. In December 2019, the same hackers gained unauthorized access to hundreds of Microsoft customers' business email accounts. According to Microsoft, this scheme "enabled unauthorized access without explicitly requiring the victims to directly give up their login credentials at a fake website…as they would in a more traditional phishing campaign."

Why are O365 accounts so vulnerable to attacks?

Exchange Online/Outlook – the cloud email application for O365 users – has always been a breeding ground for phishing, malware, and very targeted data breaches. Though Microsoft has been ramping up its O365 email security features with Advanced Threat Protection (ATP) as an additional layer to Exchange Online Protection (EOP), both tools have failed to meet expectations because of their inability to stop newer and more innovative social engineering attacks, business email compromise (BEC), and impersonations. One of the biggest challenges with ATP in particular is its time-of-click approach, which requires the user to click on URLs within emails to activate analysis and remediation.

Is O365 ATP enough to protect my email?

We believe that O365's native security controls do protect users against bulk phishing scams, spam, malware, and domain spoofing. And these tools are great when it comes to stopping broad-based, high-volume, low-effort attacks – they offer a baseline protection. For example, you don't need to add signature-based malware protection if you have EOP/ATP for your email, as these are proven to be quite efficient against such attacks. These tools employ the same approach used by network firewalls and email gateways – they rely on a repository of millions of signatures to identify 'known' malware.

But this is a big problem, because the threat landscape has changed in the last several years. Email attacks have mutated to become more sophisticated and targeted, and hackers exploit user behavior to launch surgical and highly damaging campaigns on people and organizations. Attackers use automation to make small, random modifications to existing malware signatures and use transformation techniques to bypass these native O365 security tools. Unsuspecting – and often untrained – users fall prey to socially engineered attacks that mimic O365 protocols, domains, notifications, and more. See below for a convincing example.
It is because such loopholes exist in O365 email security that Microsoft continues to be one of the most breached brands in the world.

What are the consequences of a compromised account?

There is a lot at stake if an account is compromised. With ~180 million active O365 email accounts, organizations could find themselves at risk of data loss or a breach, which means revenue loss, damaged reputation, customer churn, disrupted productivity, regulatory fines, and penalties for non-compliance. This means they need to quickly move beyond relying on largely rule- and reputation-based O365 email filters to more dynamic ways of detecting and mitigating email-originated risks. Enter machine learning and behavioral analysis.

There has been a surge in the availability of platforms that use machine learning algorithms. Why? Because these platforms detect and mitigate threats in ways other solutions can't and help enterprises improve their overall security posture. Instead of relying on static rules to predict human behavior, solutions powered by machine learning actually adapt and evolve in tandem with relationships and circumstances. Machine learning algorithms "study" the email behavior of users, learn from it, and – finally – draw conclusions from it. But not all ML platforms are created equal. There are varying levels of complexity (going beyond IP addresses and metadata to natural language processing); algorithms learn to detect behavior anomalies at different speeds (static vs. in real-time); and they can achieve different scales (the number of data points they can simultaneously study and analyze).

How does Tessian prevent threats that O365 security controls miss?

Tessian's Human Layer Security platform is designed to offset the rule-based and sandbox approaches of O365 ATP to detect and stop newer and previously unknown attacks from external sources, domain / brand / service impersonations, and data exfiltration by internal actors. Learn more about why rule-based approaches to spear phishing attacks fail. By dynamically analyzing current and historical data, communication styles, language patterns, and employee project relationships both within and outside the organization, Tessian generates contextual employee relationship graphs to establish a baseline of normal behavior. By doing this, Tessian turns both your employees and their email data into an organization's biggest defenses against inbound and outbound email threats. Conventional tools focus on just securing the machine layer – the network, applications, and devices. By uniquely focusing on the human layer, Tessian can make clear distinctions between legitimate and malicious email interactions and warn users in real-time to reinforce training and policies and promote safer behavior.

How can O365 ATP and Tessian work together?

Often, customers ask us which approach is better: the conventional, rule-based approach of the O365 native tools, or Tessian's, powered by machine learning? The answer is, each has its unique place in building a comprehensive email security strategy for O365. But no organization that deals with sensitive, critical, and personal data can afford to overlook the benefits of an approach based on machine learning and behavioral analysis. A layered approach that leverages the tools offered by O365 for high-volume attacks, reinforced with next-gen tools for detecting the unknown and evasive ones, would be your best bet.
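To illustrate the behavioral-analysis idea in the simplest possible terms, here is a toy sketch of a sender-to-recipient baseline in Python. It is not Tessian's algorithm; a production system combines far more signals (content, language patterns, timing, project relationships, and more), and all names below are hypothetical.

# Toy sketch of a behavioral baseline: learn each sender's historical recipients,
# then score a new email by how unusual that recipient is for that sender.
from collections import defaultdict

class RecipientBaseline:
    def __init__(self):
        # sender -> recipient -> number of historical emails observed
        self.history = defaultdict(lambda: defaultdict(int))

    def observe(self, sender: str, recipient: str) -> None:
        self.history[sender][recipient] += 1

    def anomaly_score(self, sender: str, recipient: str) -> float:
        # 0.0 means the recipient is routine for this sender, 1.0 means never seen.
        sent = self.history[sender]
        total = sum(sent.values())
        if total == 0:
            return 1.0  # no history at all: maximally unfamiliar
        return 1.0 - (sent[recipient] / total)

baseline = RecipientBaseline()
for _ in range(50):
    baseline.observe("alice@acme.com", "bob@acme.com")
baseline.observe("alice@acme.com", "legal@partner.com")

print(baseline.anomaly_score("alice@acme.com", "bob@acme.com"))        # low: routine
print(baseline.anomaly_score("alice@acme.com", "rnd@competitor.com"))  # 1.0: brand new

The same principle, scaled up across an organization and enriched with many more features, is what lets behavioral systems flag an unusual recipient or an unusual sender without a pre-written rule.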
A very short implementation time coupled with the algorithm’s ability to ‘learn’ from historical email data over the last year – all within 24 hours of deployment – means Tessian could give O365 users just the edge they need to combat modern day email threats. 
What is Social Engineering? 4 Types of Attacks
13 August 2020
You may have heard of social engineering, but do you know what it is?
Social engineering basics

The key difference between social engineering attacks and brute force attacks is the techniques that hackers employ. Instead of trying to exploit weaknesses in security software, a social engineer will use coercive language, a sense of urgency, and even details about the target's personal or work life to influence them to hand over information or access to other accounts or systems.
How does social engineering work?

There is no set (or foolproof) 'method' that cybercriminals use to carry out social engineering attacks. But the goal is generally the same: they want to take advantage of people in order to obtain personal information or get access to other systems or accounts. Why? Personal data and intellectual property are incredibly valuable. While you can read more about the "types" of data that are compromised in this blog: Phishing Statistics 2020, you can learn more about the different types of social engineering attacks below.

Types of social engineering attacks

When we say "social engineering", we're talking about the exploitation of human psychology. But hackers can trick people in a few different ways and are always working hard to evade security solutions.

Phishing and spear phishing scams

Phishing is one of the most common types of social engineering attacks and is generally delivered via email. But, more and more often, we're seeing attacks delivered via SMS, phone, and even social media. Here are three hallmarks of phishing attacks:

An attempt to obtain personal information such as names, dates of birth, addresses, passwords, etc.
Wording that evokes fear, a sense of urgency, or makes threats in an attempt to persuade the recipient to respond quickly.
The use of shortened or misleading domains, links, buttons, or attachments.

Spear phishing attacks are similar, but are much more targeted. Whereas phishing attacks are sent in bulk, spear phishing attacks are sent to a single person or small group of people and require a lot more forethought. For example, hackers will research targets on LinkedIn to find out who they work with and who they report to. This way, they can craft a more believable email. Want to learn more? We've covered phishing and spear phishing in more detail in these blogs:

How to Identify and Prevent Phishing Attacks
How to Catch a Phish: A Closer Look at Email Impersonation
Phishing vs. Spear Phishing: Differences and Defense Strategies
COVID-19: Real-Life Examples of Opportunistic Phishing Emails

Pretexting

While pretexting and phishing are categorized separately, they actually go hand-in-hand. In fact, pretexting is a tactic used in many phishing, spear phishing, vishing, and smishing attacks. Here's how it works: hackers create a strong, fabricated rapport with the victim. After establishing legitimacy and building trust, the hacker will either blatantly ask for or trick the victim into handing over personal information.

While there is an infinite number of examples we could give, ranging from BEC scams to CEO Fraud, we'll use a consumer-focused example. Imagine you receive a call from someone who says they work at your bank. The person on the other end of the phone (the scammer) tells you they've seen unusual transactions on your account and that, in order to review the transactions and pause activity, you need to confirm your full name, address, and credit or debit card number. If you do share the information, the scammer will have everything they need to access your bank account and even carry out secondary attacks with the information they've learned. Together with phishing, pretexting represents 98% of social engineering incidents and 93% of breaches, according to Verizon's 2018 Data Breach Investigations Report.

Physical and virtual baiting

Like all other types of social engineering, baiting takes advantage of human nature. In particular: curiosity.
Scammers will lure the target in (examples below) before stealing their personal data, usually by infecting their computer with some type of malware. The most common type of baiting attack involves the use of physical media – like a USB drive – to disperse malware. These malware-infected USB drives are left in conspicuous areas (like a bathroom, for example) where they are likely to be seen by potential victims. To really drive interest, hackers will sometimes even label the device with curious notes like "confidential" or logos from the target's organization to make it seem more legitimate. In an effort to identify the owner of the USB (or simply because they can't help themselves), employees often plug the USB into their computer. Harmless, right? Unfortunately not. Once inserted, the USB deploys malware.

Baiting doesn't necessarily have to take place in the physical world, though. After the outbreak of COVID-19, several new bait sites were set up. These sites feature fraudulent offers for special COVID-19 discounts, lure people into signing up for free testing, or claim to sell face masks and hand sanitizer.

Whaling attack

'Whaling' is a more sophisticated evolution of the phishing attack. In these attacks, hackers use very refined social engineering techniques to steal confidential information, trade secrets, personal data, and access credentials to restricted services, resources, or anything with economic or commercial value. While this sounds similar to phishing and spear phishing, it's different. How? Whaling tends to target business managers and executives (the 'bigger fish') who are likely to have access to higher-level data. But, it's not just their access to data. Whaling is also seen as an effective attack vector because senior leaders themselves are perceived to be "easy targets". Leaders tend to be extremely busy, too, and are therefore more likely to make mistakes and fall for scams. Perhaps that's why senior executives are 12x more likely to be the target of social engineering attacks compared to other employees.

How to defend against social engineering attacks

According to Verizon's 2020 Data Breach Investigations Report (DBIR), 22% of breaches in 2019 involved phishing and other types of social engineering attacks. And, when you consider the cost of the average breach ($3.92 million), it's absolutely essential that IT and security teams do everything they can to protect their employees. Here's how:

1. Put strict policies in place

The best place to start is by ensuring that you've got strong policies in place that govern the use of company IT systems, including work phones, email accounts, and cloud storage. For example, you could ban the use of IT systems for personal reasons like accessing personal email accounts, social media, and non-work-related websites. You can learn more about why accessing personal email accounts and social media on work devices is dangerous in this blog: Remote Worker's Guide to: Preventing Data Loss.

2. Educate your workforce

Awareness training is key to help employees understand social engineering risks, learn how to spot these types of attacks, and know what to do if and when they are targeted. In addition to quarterly training sessions, either online or in-person, organizations can also invest in phishing simulations. This way, employees get some "real-world" experience without the risk of compromising data. But, it's important to note that training alone isn't enough.
We explore this in detail in this blog: Pros and Cons of Phishing Awareness Training. 3. Filter inbound emails 90% of all data breaches begin with email. It's one of the most common attack vectors hackers use for social engineering and other attacks. But, with the right threat management tools, IT and security teams can mitigate the risk associated with social engineering attacks by monitoring and filtering inbound emails. It's important that solutions don't impede employee productivity, though. For example, if a solution issues false positives, employees may become desensitized to warnings and end up ignoring them instead of heeding the advice. Tessian protects employees from inbound email threats without getting in the way. How does Tessian detect and prevent social engineering? Powered by machine learning, Tessian Defender analyzes and learns from an organization's current and historical email data and protects employees against inbound email security threats, including whaling, CEO Fraud, BEC, spear phishing, and other targeted social engineering attacks. Best of all, it does all of this silently in the background in real time, and in-the-moment warnings help bolster training and reinforce policies. That means employee productivity isn't affected and security reflexes improve over time. To learn more about how Tessian can protect your people and data against social engineering attacks on email, book a demo today.
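To make the idea of inbound filtering more concrete, here is a minimal sketch of one signal such tools commonly weigh: whether a sender's domain is a near-miss of a domain you already trust (a classic impersonation technique). This is an illustration only, not Tessian's actual detection logic, which is machine-learning based; the TRUSTED_DOMAINS set, the 0.8 threshold, and the example addresses are all assumptions made for the sketch.

```python
import difflib
from email.utils import parseaddr

# Hypothetical set of domains this organization and its partners legitimately send from.
TRUSTED_DOMAINS = {"example.com", "partner-example.org"}

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between two domains (1.0 means identical)."""
    return difflib.SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def flag_suspicious_sender(from_header: str, threshold: float = 0.8) -> bool:
    """Flag senders whose domain closely resembles, but does not match, a trusted domain."""
    _, address = parseaddr(from_header)
    if "@" not in address:
        return True  # a malformed From header is itself suspicious
    domain = address.split("@", 1)[1]
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: treat as legitimate for this simple check
    # A near-miss (e.g. "examp1e.com" vs "example.com") suggests a lookalike domain.
    return any(lookalike_score(domain, t) >= threshold for t in TRUSTED_DOMAINS)

if __name__ == "__main__":
    print(flag_suspicious_sender("CEO <ceo@examp1e.com>"))          # True: lookalike domain
    print(flag_suspicious_sender("CEO <ceo@example.com>"))          # False: trusted domain
    print(flag_suspicious_sender("News <no-reply@unrelated.net>"))  # False: unrelated domain
```

In practice, a lookalike-domain check is only one of many signals (display-name spoofing, reply-to mismatches, unusual sending patterns) that a mature filter would combine.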
Human Layer Security Spear Phishing DLP Data Exfiltration
Research Shows Employee Burnout Could Cause Your Next Data Breach
By Laura Brooks
12 August 2020
Understanding how stress impacts your employees' cybersecurity behaviors could significantly reduce the chances of people's mistakes compromising your company's security, our latest research reveals. Consider this. A shocking 93% of US and UK employees told us they feel tired and stressed at some point during their working week, with one in 10 feeling tired every day. And perhaps more worryingly, nearly half (46%) said they have experienced burnout in their career. Then consider that nearly two-thirds of employees feel chained to their desks, as 61% of respondents in our report said there is a culture of presenteeism in their organization that makes them work longer hours than they need to. Nearly 70% of employees also agreed that there is an expectation within their company to respond to emails quickly. Employees are overwhelmed, overworked, and feeling the pressure to keep pace with their organization's demands.
The effects of the pandemic  The events of 2020 haven't helped matters either. In the wake of the global pandemic, people have experienced extremely stressful situations that affected their health and finances, against a backdrop of political uncertainty and social unrest, while simultaneously juggling the demands of their jobs. The sudden shift to remote working also meant that people were surrounded by new distractions, and over half of respondents (57%) told us they felt more distracted when working from home. According to Jeff Hancock, a professor at Stanford University who collaborated with us on this report, people tend to make mistakes or decisions they later regret when they are stressed and distracted. This is because when our cognitive load is overwhelmed, and when our attention is split between multiple tasks, we aren't able to fully concentrate on the task in front of us. What does this mean for security? Not only are these findings incredibly concerning for employees' health and wellbeing, but they could also explain why mistakes that compromise cybersecurity are happening more than ever. The majority of employees (52%) we surveyed said they make more mistakes at work when they are stressed. Younger employees seem to be more affected by stress than their older co-workers, though. Nearly two-thirds of workers aged 18-30 years old (62%) said they make more mistakes when they are stressed, compared to 45% of workers over 51 years old. Our research also revealed that 43% and 41% of employees believe they are more error-prone when tired and distracted, respectively. In fact, people cited distraction as the top reason why they fell for a phishing scam at work, while 44% said they had accidentally sent an email to the wrong person because they were tired. While these mistakes may seem trivial on the surface, phishing is the number one threat vector used by hackers today, and one in five companies told us they have lost customers as a result of an employee sending an email to the wrong person. Far from red-faced embarrassment, these mistakes are compromising businesses' cybersecurity.
The other problem is that hackers are preying on our vulnerable states, and using them to their advantage. Cybercriminals know people are stressed and looking for information about the pandemic and remote working. They know that some individuals are struggling financially and others have lost their jobs. The lure of a ‘too-good-to-be-true’ deal or ‘get a new job fast’ offer may suddenly look very appealing, especially if the email appears to have come from a trusted source, and cause people to click.  So what can businesses do to protect employees from mistakes caused by burnout?  Business and security leaders need to realise that it’s unrealistic for employees to act as the company’s first line of defence. You cannot expect every employee to spot every scam or make the right cybersecurity decision 100% of the time, particularly when they’re dealing with stressful situations and working in environments filled with distractions. When faced with never-ending to-do lists and back-to-back Zoom calls, cybersecurity is the last thing on people’s minds. In fact, a third of respondents told us they “rarely” or “never” think about security when at work.  Businesses, therefore, need to create a culture that doesn’t blame people for their mistakes and, instead, empowers them to do great work without security getting in the way. Understand how stress impacts people’s cybersecurity behaviors and tailor security policies and training so that they truly resonate for every employee.
Educating people on how hackers might take advantage of their stress and explaining the types of scams that people could be susceptible to is an important first step. For example, a hacker could impersonate a senior IT director, supposedly communicating the implementation of new software to accommodate the move back into the office, and ask employees to share their account credentials. Or a hacker may pose as a trusted government agency requesting personal information in relation to a new financial relief scheme. Businesses should also implement solutions that can help employees make good cybersecurity decisions and reduce risk over time. Security solutions like Tessian use machine learning to understand employee behaviors and alert people to risks on email as and when they arise. By warning individuals in real time, we can educate them as to why the email they were about to send, or have received, is a threat to company security. It helps to make people think twice before they do something they might regret. With remote working here to stay, and with hackers continually finding ways to capitalize on people's stress in order to manipulate them, businesses must prioritize cybersecurity at the human layer. Only by understanding why people make mistakes that compromise cybersecurity can you begin to prevent burnout from causing your next data breach.
Spear Phishing
Smishing and Vishing: What You Need to Know About These Phishing Attacks
10 August 2020
Whether or not you're familiar with the terms "smishing" and "vishing," you may have been targeted by these attacks. This article will: Explain what smishing and vishing attacks are, and how they relate to phishing Provide examples of each type of attack alongside tips on how to identify them Discuss what you should do if you're targeted by a smishing or vishing attack Smishing, Vishing, and Phishing Smishing and vishing are two types of phishing attacks, sometimes called "social engineering attacks." While 96% of phishing attacks arrive via email, hackers also deliver attacks via text message, phone call, and social media. Regardless of how the attack is delivered, the message will appear to come from a trusted sender and may ask the recipient to: Follow a link, either to download a file or to submit personal information Reply to the message with personal or sensitive information Carry out an action such as purchasing vouchers or transferring funds Types of phishing include "spear phishing," where specific individuals are targeted by name, and "whaling," where high-profile individuals such as CEOs or public officials are targeted. All these hallmarks of phishing can also be present in smishing and vishing attacks. What Is Smishing?
Smishing is a type of phishing attack delivered via SMS (text message) to the target's mobile phone. These messages often contain a link (generally a shortened URL) and, like other phishing attacks, they'll encourage the recipient to take some "urgent" action, for example: Claiming a prize Claiming a tax refund Unlocking their online banking account Example of a Smishing Attack Just like phishing via email, the rates of smishing continue to rise year-on-year. According to Consumer Reports, the Federal Trade Commission (FTC) received 93,331 complaints about spam or fraudulent text messages in 2018 — an increase of 30% from 2017. Here's an example of a smishing message:
The message above appears to be from the Driver and Vehicle Licensing Agency (DVLA) and invites the recipient to visit a link. Note that the link appears to lead to a legitimate website — gov.uk is a UK government-owned domain. The use of a legitimate-looking URL is an excellent example of the increasingly sophisticated methods that smishing attackers use to trick unsuspecting people into falling for their scams. How to Identify a Smishing Attack As we've said, cybercriminals are using increasingly sophisticated methods to make their messages as believable as possible. That's why many thousands of people fall for smishing scams every year. In fact, according to a study carried out by Lloyds TSB, participants were shown 20 emails and texts, half of which were inauthentic. Only 18% of participants correctly identified all of the fakes. So, what should you look for? Just like a phishing attack via email, a smishing message will generally: Convey a sense of urgency Contain a link (even if the link appears legitimate, like in the example above) Contain a request for personal information Other clues that a message might be from a hacker include the phone number it comes from (large institutions like banks will generally send text messages from short-code numbers, while smishing texts often come from "regular" 11-digit mobile numbers) and the presence of typos. If you're looking for more examples of phishing attacks (which might help you spot attacks delivered via text message) check out these articles: How to Identify and Prevent Phishing Attacks How to Catch a Phish: A Closer Look at Email Impersonation Phishing vs. Spear Phishing: Differences and Defense Strategies COVID-19: Real-Life Examples of Opportunistic Phishing Emails What Is Vishing?
Like targets of other types of phishing attacks, the victim of a vishing attack will receive a phone call (or a voicemail) from a scammer, pretending to be a trusted person who's attempting to elicit personal information such as credit card or login details. So, how do hackers pull this off? They use a range of advanced techniques, including: Faking caller ID, so it appears that the call is coming from a trusted number Utilizing "war dialers" to call large numbers of people en masse Using synthetic speech and automated call processes A vishing scam often starts with an automated message, telling the recipient that they are the victim of identity fraud. The message requests that the recipient calls a specific number. When doing so, they are asked to disclose personal information. Hackers may then use the information themselves to gain access to other accounts or sell the information on the Dark Web. The Latest Vishing News: Updated August 2020 On August 20, 2020, the Federal Bureau of Investigation (FBI) and Cybersecurity and Infrastructure Security Agency (CISA) issued a joint statement warning businesses about an ongoing vishing campaign. The agencies warn that cybercriminals have been exploiting remote-working arrangements throughout the COVID-19 pandemic. The scam involves spoofing login pages for corporate Virtual Private Networks (VPNs) in order to steal employees' credentials. These credentials can be used to obtain additional personal information about the employee. The attackers then use unattributed VoIP numbers to call employees on their personal mobile phones. The attackers pose as IT helpdesk agents and use a fake verification process, involving the stolen credentials, to earn the employee's trust. The FBI and CISA recommend several steps to help avoid falling victim to this scam, including restricting VPN connections to managed devices, improving 2-Step Authentication processes, and using an authentication process for employee-to-employee phone communications. Example of a Vishing Attack Again, just like phishing via email and smishing, the rates of vishing attacks are continually rising. According to one report, 49% of organizations surveyed were victims of a vishing attack in 2018. Vishing made headlines most recently in July 2020 with the Twitter scam. Following a vishing attack, high-profile users had their accounts hijacked and used to send tweets encouraging their followers to donate Bitcoin to a specific cryptocurrency wallet, supposedly in the name of charitable giving or COVID-19 relief. This vishing attack involved Twitter employees being manipulated, via phone, into providing access to internal tools that allowed the attackers to gain control over Twitter accounts, including those of Bill Gates, Joe Biden, and Kanye West. This is an example of spear phishing, conducted using vishing as an entry-point. It's believed that the perpetrators earned at least $100,000 in Bitcoin before Twitter could contain the attack. You can read more cybersecurity headlines from the last month here. How to Identify a Vishing Attack Vishing attacks share many of the same hallmarks as smishing attacks. In addition to these indicators, we can categorize vishing attacks according to the person the attacker is impersonating: Businesses or charities — Such scam calls may inform you that you have won a prize, present you with an investment opportunity, or attempt to elicit a charitable donation. If it sounds too good to be true, it probably is.
Banks — Banking phone scams will usually incite alarm by informing you about suspicious activity on your account. Always remember that banks will never ask you to confirm your full card number over the phone. Government institutions — These calls may claim that you are owed a tax refund or required to pay a fine. They may even threaten legal action if you do not respond. Tech support — Posing as an IT technician, an attacker may claim your computer is infected with a virus. You may be asked to download software (which will usually be some form of malware or spyware) or allow the attacker to take remote control of your computer. How to Prevent Smishing and Vishing Attacks The key to preventing smishing and vishing attacks is security training. While individuals can find resources online, employers should be providing all employees with IT security training. It's actually a requirement of data security laws, such as the General Data Protection Regulation (GDPR) and the New York SHIELD Act. You can read more about how compliance standards affect cybersecurity on our compliance hub. Training can help ensure all employees are familiar with the common signs of smishing and vishing attacks, which could reduce the possibility that they will fall victim to such an attack. But, what do you do if you receive a suspicious message? The first rule is: don't respond. If you receive a text requesting that you follow a link, or a phone message requesting that you call a number or divulge personal information — ignore it, at least until you've confirmed whether or not it's legitimate. The message itself can't hurt you, but acting on it can. If the message appears to be from a trusted institution, search for their phone number and call the institution directly. For example, if a message appears to be from your phone provider, search for your phone provider's customer service number and discuss the request directly with the operator. If you receive a vishing or smishing message at work or on a work device, make sure you report it to your IT or security team. If you're on a personal device, you should report significant smishing and vishing attacks to the relevant authorities in your country, such as the Federal Communications Commission (FCC) or Information Commissioner's Office (ICO). For more tips on how to identify and prevent phishing attacks, including vishing and smishing, follow Tessian on LinkedIn or subscribe to our monthly newsletter.
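The red flags listed above lend themselves to simple automation. Below is a minimal, illustrative sketch of how an SMS could be scored against them; the keyword lists, the short-code assumption, and the sample message are assumptions made for the example, not a production filter.

```python
import re

# Illustrative signal lists; real filters would use far richer data.
URGENCY_PHRASES = ["urgent", "immediately", "act now", "account locked", "final notice"]
URL_SHORTENERS = ["bit.ly", "tinyurl.com", "goo.gl", "t.co", "ow.ly"]
INFO_REQUESTS = ["password", "social security", "card number", "date of birth", "pin"]

def smishing_signals(message: str, sender: str) -> list:
    """Return the red flags (described above) that appear in an SMS."""
    text = message.lower()
    signals = []
    if any(phrase in text for phrase in URGENCY_PHRASES):
        signals.append("urgent or threatening language")
    if any(shortener in text for shortener in URL_SHORTENERS):
        signals.append("shortened link")
    if any(request in text for request in INFO_REQUESTS):
        signals.append("request for personal information")
    # Large institutions usually text from short codes; a full-length mobile
    # number is a weak signal on its own, but it adds to the picture.
    digits = re.sub(r"\D", "", sender)
    if len(digits) >= 10:
        signals.append("sent from a regular mobile number rather than a short code")
    return signals

if __name__ == "__main__":
    sms = "URGENT: your account locked. Verify your card number at http://bit.ly/x1y2z3"
    print(smishing_signals(sms, "+44 7700 900123"))
```

A message that trips several of these signals at once is worth treating with suspicion and reporting, exactly as the advice above suggests.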
Human Layer Security Spear Phishing DLP Data Exfiltration
Research Shows How To Prevent Mistakes Before They Become Breaches
By Maddie Rosenthal
22 July 2020
We all make mistakes. But with over two-fifths of employees saying they've made mistakes at work that have had security repercussions, businesses need to find a way to stop mistakes from happening before they compromise cybersecurity. That's why we developed our report The Psychology of Human Error, with the help of Jeff Hancock, a professor at Stanford University and expert in social dynamics online. We wanted to understand why these mistakes are happening, rather than simply dismissing incidents of human error as people acting carelessly or labeling people the 'weakest link' when it comes to security. By doing so, we hope businesses can better understand how to protect their people, and the data they control. Key findings: 43% of employees have made mistakes that have compromised cybersecurity A third of workers (33%) rarely or never think about cybersecurity when at work 52% of employees make more mistakes when they're stressed, while 43% are more error-prone when tired 58% have sent an email to the wrong person at work and 1 in 5 companies lost customers after an employee sent a misdirected email Read on to learn why this matters. You can also register for our webinar on August 19 here. We'll be exploring key findings from the report with Jeff Hancock. You'll walk away with a better understanding of how hackers are manipulating employees and what you can do to stop them. What mistakes are people making? The majority of our survey respondents said they had sent an email to the wrong person, with nearly one-fifth of these misdirected emails ending up in the wrong external person's inbox. Far from just causing red-faced embarrassment, this simple mistake can have devastating consequences. Not only do companies face the wrath of data protection regulators for flouting the rules of regulations like GDPR, but our research reveals that one in five companies lost customers as a result of a misdirected email, because the trust they once had with their clients was broken. What's more, one in 10 workers said they lost their job. Another mistake was clicking on links in phishing emails, something a quarter of respondents (25%) said they had done at work. This figure was significantly higher in the Technology industry, however, with 47% of workers in this sector saying they'd fallen for phishing scams. It goes to show that even the most cybersecurity savvy people can make mistakes. Interestingly, men were twice as likely as women to fall for phishing scams. While researchers aren't 100% sure why gender differences play a factor in phishing susceptibility, our report does show that demographics play a role in people's cybersecurity behaviors at work. What's causing these mistakes to happen? 1. Younger employees are 5x more likely to make mistakes 50% of 18-30 year olds said they had made mistakes with security repercussions for themselves or their organization. Just 10% of workers over 51 said the same. This disparity, our report suggests, is not because younger workers are more careless. Rather, it may be because younger workers are actually more aware that they have made a mistake and are also more willing to admit their errors.
For older generations, Professor Hancock explains, self-presentation and respect in the workplace are hugely important. They may be more reluctant to admit they've made a mistake because they feel ashamed due to preconceived notions about their generation and technology. Businesses, therefore, need to not only acknowledge how age affects cybersecurity behaviors but also find ways to remove the stigma around reporting mistakes in their organization. 2. 93% of employees are stressed and tired Employees told us they make more mistakes at work when they are stressed (52%), tired (43%), distracted (41%) and working quickly (36%). This is concerning when you consider that an overwhelming 93% of employees surveyed said they were either tired or stressed at some point during the working week. This isn't helped by the fact that nearly two-thirds of employees feel chained to their desks, with 61% saying there is a culture of presenteeism in their organization that makes them work longer hours than they need to. The Covid-19 pandemic has put people under huge amounts of stress and through enormous change. In light of the events of 2020, our findings call for businesses to empathize with people's positions and understand the impact stress and working cultures have on cybersecurity.
3. 57% of employees are being driven to distraction 47% of employees surveyed cited distraction as a top reason for falling for a phishing scam, while two-fifths said they sent an email to the wrong person because they were distracted. With over half of workers (57%) admitting they're more distracted when working from home, the sudden shift to remote working could open businesses up to even more risks caused by human error. It's hardly surprising. We suddenly had to set up offices in the homes we share with our young children, pets, and housemates. There's a lot going on, and mistakes are likely to happen.
4. 41% thought phishing emails were from someone they trusted Over two-fifths of people (43%) mistakenly clicked on phishing emails because they thought the request was legitimate, while 41% said the email appeared to have come from either a senior executive or a well-known brand. Over the past few months, we've seen hackers impersonating well-known brands and trusted authorities in their phishing scams, taking advantage of people's desire to seek guidance and information on the pandemic. Impersonating someone in a position of trust or authority is a common and effective tactic used by hackers in phishing campaigns. Why? Because they know how difficult or unlikely it is to ignore a request from someone you like, respect, or report to. Businesses need to protect their people from these phishing scams. Educate staff on the ways hackers could take advantage of their circumstances and invest in solutions that can detect these impersonations when your distracted and overworked employees can't. But how can businesses prevent these mistakes from happening in the first place? To successfully prevent mistakes from turning into serious security incidents, businesses have to take a more human approach. It's all too easy to place the blame for data breaches on people's mistakes. But businesses have to remember that not every employee is an expert in cybersecurity. In fact, a third of our survey respondents (33%) said they rarely or never think about cybersecurity when at work. They are focused on getting the jobs they were hired to do, done. Training and policies help. However, combining this with machine intelligent security solutions – like Tessian – that automatically alert individuals to potential threats in real time is a much more powerful tool in preventing mistakes before they turn into breaches. Alerting employees to the threat in-the-moment helps override impulsive and dangerous decision-making that could compromise cybersecurity. By using explainable machine learning, we arm employees with the information they need to apply conscious reasoning to their actions over email, making them think twice before doing something they might regret.
And with greater visibility into the behaviors of your riskiest and most at-risk employees, your teams can tailor security training and policies to influence and improve staff’s cybersecurity behaviors. Only by protecting people and preventing their mistakes can you ensure data and systems remain secure, and help your people do their best work. Read the full Psychology of Human Error report here.
Spear Phishing DLP
Why Political Campaigns Need Chief Information Security Officers
20 July 2020
On July 10th, Joe Biden's US presidential campaign announced it was hiring a Chief Information Security Officer (CISO) and a Chief Technology Officer (CTO). Biden's campaign team told The Hill that these security professionals would help "mitigate cyber threats, bolster… voter protection efforts, and enhance the overall efficiency and security of the entire campaign." This development confirms what cybersecurity experts have long understood — that, just like businesses, political campaigns require a CISO. We'll tell you why. Are political campaigns likely targets of cybercrime? Rates of cybercrime — and the sophistication of cybercriminals — continue to increase across all sectors. Whether it's phishing attacks, malware, ransomware, or brute force attacks, incidents are on the rise. And, when you consider which industries are the most targeted (Healthcare, Financial Services, Manufacturing), it's easy to understand why political campaigns are also targets of hackers and scammers: Political campaigns are a cornerstone of the democratic process They process the personal information of thousands of voters They handle confidential and security-sensitive information These aren't anecdotal reasons. Political campaigns have been targeted by cybercriminals before. For example, in 2016, Hillary Clinton's campaign manager, John Podesta, received a spear phishing email disguised as a Google security alert. Podesta followed a link, entered his login credentials, and exposed over 50,000 emails to malicious actors. This is a great example of how human error can lead to data breaches and goes to show that anyone can make a mistake. That's why cybersecurity is so important. Learn how Tessian prevents spear phishing attacks. How can a CISO help a political campaign? Hiring a CISO — and thus improving the cybersecurity of political campaigns — has three main benefits: Safeguarding the democratic process Protecting voter privacy Maintaining national security Let's explore each of these in a bit more detail. You can also check out our CISO Spotlight Series to get a better idea of what role a CISO plays across different sectors. Safeguarding the Democratic Process Whatever your political persuasion, it's hard to ignore headlines that detail the role cybercriminals played in the 2016 US election, including: Cyberattacks occurred against politicians Electoral meddling undermined voters' faith in the democratic process Better cybersecurity could have mitigated the impact of electoral cyberattacks A CISO ensures better coordination of a political campaign's IT security program. This can involve: Mandating security software on all campaign devices Setting up DMARC records for domains used in campaigning Assessing risk and responding to threats Increasing staff awareness of good cybersecurity practices Of course, these functions aren't specific to political campaigns. A CISO's job, whether at a big bank or a law firm, is to safeguard systems, data, and devices by implementing policies, procedures, and technology and to help build a positive security culture. The difference, though, is that while a CISO at your "average" organization helps prevent data breaches and other security incidents, the CISO of a political campaign does all of this while also helping maintain faith in the process among voters. Keep reading to find out how. Protecting voter privacy Political campaigns must communicate directly with individual voters, which means those working on the campaign have access to highly sensitive information.
And, we’re not just talking about names and addresses. Even a person’s intention to vote is highly sensitive personal information.  While – yes – many people publicly proclaim their ideology and voting intention via social media, those people don’t expect their information to be mined by data-harvesting software, combined with other personal information, and shared with unauthorized third parties. They simply want to share their views with friends, family, and followers.
Like hacking, data mining operations can affect the outcome of elections. They also represent a gross invasion of individual privacy.  How valuable an asset is voter data? A few recent high-profile examples will give you an idea. (Click the links to learn more about each individual incident.) The UK pro-Brexit Vote Leave campaign’s involvement in the Cambridge Analytica scandal Rand Paul and Ted Cruz’s campaigns allegedly selling their voters’ contact information to the Trump campaign Rick Santorum’s campaign selling voters’ data to a “doomsday prepper” firm These examples prove that voter data can be used to raise funds or create a political advantage. But what are the consequences? To start, voter trust is lost which – as we’ve discussed – can impact the democratic process. Beyond that, there are also legal ramifications. Under state and federal privacy laws, selling personal information is a legally-regulated activity. Any allegation that a campaign has violated privacy law would be extremely damaging not just reputationally, but financially.  A CISO can help ensure that a political campaign is less likely to engage in risky behavior with voters’ personal information and assist the campaign to comply with privacy law.  But it’s not just personal information that political campaigns handle. Maintaining National Security Political campaigns also handle security-sensitive information which must be carefully safeguarded. Robert Deitz, former senior counselor to the CIA, told Washington Post that a Russian cyberattack on the Trump campaign could reveal information about Trump’s foreign investments and negotiating style. Having access to this data could help Russia understand “where it can get away with foreign adventurism.” A CISO has overall responsibility for information safeguarding within an organization. They understand:  What types of data exist about the candidate  How and where the information is processed, stored, and transferred Who can access the data All of this information helps CISOs implement data loss prevention (DLP) strategies in order to keep sensitive information out of the hands of bad actors.  Why does this matter?  Data privacy – and therefore cybersecurity – is essential for the modern world.  In fact, in business, a strong security posture fosters trust with customers and prospects and is therefore considered a competitive edge. Why? Because data is valuable currency. Customers and prospects expect the organizations they interact with to safeguard the information shared with them. Shouldn’t politicians foster trust with voters in the same way? 
Spear Phishing
Look Out for “Back to School” Scams
By Maddie Rosenthal
08 July 2020
It's the time of year when universities are sending more emails than normal as they make preparations to welcome students back in the fall and relay updates on their plans to transition to remote learning. Staff and students need to be aware, though; hackers will use this 'back to school' momentum and will likely be impersonating trusted universities in phishing attacks to try and steal intellectual property as well as students' valuable personal and financial information. It is, therefore, worrying that nearly all of the top 20 US universities are potentially at risk of having their institution's domain impersonated by scammers in phishing emails.
In fact, Tessian's researchers reveal that 40% of the top 20 US universities are not using Domain-based Message Authentication, Reporting & Conformance (DMARC) records. And while the other universities we analyzed have published a DMARC record, their DMARC policies have not been set to 'quarantine' or 'reject' emails from unauthorized senders using their domains. Why does this matter? Without DMARC records in place, or without having DMARC policies set at the strictest settings, hackers can easily impersonate a university's email domain in phishing campaigns, convincing their targets that they are opening a legitimate email from a fellow student, professor or administrator at their university. From that phishing email, hackers could lure staff or students to a fake website that has been set up to steal account credentials or request that their targets send personal or financial information. Against the backdrop of "back to school" and the shift to hybrid learning environments (with some universities restricting access to campuses), it wouldn't seem out of the ordinary for a university to request this information. Students, therefore, may not realize they are being scammed – especially if the email domain looks legitimate. Configuring email authentication records like DMARC, and setting policies to the strictest settings, are necessary measures for preventing attackers from directly impersonating your company's email domain. However, organizations also need to be aware that DMARC is not a silver bullet and hackers will find ways around it.
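To make this concrete: a DMARC record is simply a DNS TXT record published at _dmarc.<your-domain>, with a p= tag of none (monitor only), quarantine, or reject. Below is a hedged sketch, assuming the third-party dnspython package and a hypothetical domain name, of how you might check which policy a domain publishes.

```python
# A DMARC record is published as a DNS TXT record at _dmarc.<domain>, for example:
#   "v=DMARC1; p=reject; rua=mailto:dmarc-reports@university.example"
# p=none only monitors; p=quarantine or p=reject tells receiving mail servers
# to junk or refuse unauthenticated mail that claims to come from the domain.

from typing import Optional

import dns.resolver  # assumes the third-party "dnspython" package is installed


def dmarc_policy(domain: str) -> Optional[str]:
    """Return the p= policy of a domain's DMARC record, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.strip().lower() == "p":
                    return value.strip().lower()
    return None


if __name__ == "__main__":
    policy = dmarc_policy("university.example")  # hypothetical domain
    if policy is None:
        print("No DMARC record published: the domain can be directly spoofed.")
    elif policy == "none":
        print("DMARC is monitor-only: spoofed mail is still delivered.")
    else:
        print(f"DMARC policy is '{policy}': receivers are told to {policy} unauthenticated mail.")
```

A policy of 'reject' (or at least 'quarantine') is what the article means by the "strictest settings"; 'none' leaves the domain open to direct impersonation.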
Why isn't DMARC enough to prevent impersonation? Firstly, DMARC records are inherently public, and an attacker can use this information to select their targets and attack methods, simply by identifying organizations without an effective DMARC record. Even if your company has a strict DMARC policy in place, the attacker can still carry out an advanced spear phishing attack by registering look-a-like domains, betting on the fact that a busy employee or distracted student may miss the slight deviation from the original domain. Secondly, while your organization might have DMARC in place, your external contacts may not. This means that while your company domain is protected against direct impersonation, your employees may be vulnerable to impersonation of external contacts like partners, suppliers or government bodies. What can you do to avoid being targeted by these scams? As universities plan to welcome students back next month – and inundate inboxes with updates between now and then – it's critical that they take action to build robust security measures that can protect their staff and students against email scams. Here are some top tips to help you avoid the back to school scams. Cybersecurity tips for universities: Assess email security policies and solutions: Are they robust enough to spot sophisticated spear phishing attacks? Enable multi-factor authentication: This easy-to-implement security precaution helps prevent unauthorized individuals from accessing systems and data in the event a password is compromised. Increase awareness: Make staff and students aware of potential scams and provide advice on what they should look out for (for example, carefully inspect deviations in the email domain and inspect URLs). Ask staff and students to report incidents: Security and IT teams have a better chance of remediating new threats and preventing future ones. Cybersecurity tips for faculty staff and students: Think before you share: Never share direct deposit details or personal information like your Social Security number on an unfamiliar website. Think before you click: If anything seems unusual, do not follow or click links or download attachments. Verify the request: If you receive an email from your university asking for urgent action, question its legitimacy and, if you're not sure, contact the university directly to verify the request. Report threats to the university: Security and IT teams will be able to investigate incidents and take action to prevent similar threats in the future.