Tim Fitzgerald is the Chief Information Security Officer (CISO) at ARM, and former CISO at Symantec.
What are some of the biggest challenges that you face, and how does that make you think about your security strategy?
Our challenges are—not to be trite, but they’re sort of opportunities as well. By far the biggest single challenge we have is ARM’s defaults around information sharing. We have a belief—and I think it has proven to be true over the 30-plus years that ARM has been in business—that the level of information sharing has allowed ARM to be extraordinarily successful and innovative. There’s no backing up from that, as an ethos of the company.
But that represents a huge amount of challenge, because we give a tremendous amount of personal freedom for how people can access our information and our systems, as well as how they use our data internally—with our peers—but also externally, with our customers, who we’re very deeply embedded with.
We don’t sell a traditional product where they buy it, then we deliver it to them, and then we’re done. The vast majority of our customers spend years with us developing their own product, based on their own intellectual property.
So the level of information sharing that happens in a relationship like that is quite difficult to manage, to be candid.
Has human layer security been part of your strategy at ARM, or even your career before ARM?
My career before ARM was at Symantec. Symantec was a very different company—you know, more of a traditional software company. It also had 25,000 people who thought they knew more about security than I did. So that presented a unique challenge in terms of how we worked with that community.
But even at Symantec, I was thinking quite hard about how we influence behavior. And ultimately, what it comes down to for me is that I view my job in information security as something between a sociologist and a marketing expert. We’re really trying to change people’s behavior in a moment. Not universally, not their personal ethos—but will they make the right decision in this moment, to do something that won’t create a security risk for us?
I label that “microtransactions.” We get these small moments in time where we have an opportunity to interact with and to influence behavior.
And I’ve been evolving that strategy with ARM in a very different place, in some respects—but trying to think about not just how we influence their behavior in that moment in time, but actually—can we change their ethos? Can we make responsible security decision-making part of everyone’s job?
That turns out to be a very hard problem. And the way we think about that at ARM—we have a centralized security team, and ultimately security is my responsibility at ARM, but we very much rely on what we consider to be our “extended” security team, which is all of our employees.
Essentially, our view is that they can undo all of the good work we do behind the scenes to try and compensate for all the risk that a normal human being creates.
But I think that one of the ways we look at this that is unique at ARM is that we very much take the “people are people” view on this. Not that they’re the weakest link, not that they don’t come with good intent, or they don’t want to be good at their job, or that they’re going to take that shortcut just to get that extra moment of productivity.
But actually, that everyone wants to do a good job, and our job is to arm them with both the knowledge and the tools to be able to keep themselves secure, rather than trying to secure around them.
At Tessian, we think that technology should not only keep people safe, but it should do it in a way that empowers them to do their best work. What did Tessian address for you that you couldn’t quite address with other platforms?
Coming from Symantec, I used all their technology extensively, and one of the best products Symantec has to offer is their DLP solution. I’m very familiar with that, and I would argue we had one of the more advanced installations in the world running internally at Symantec. So, I’m extremely familiar with the capability of those technologies.
What I learned in my time doing that, is that when used correctly in a finite environment, on a finite data set, that sort of solution can be very effective at keeping that data where it’s supposed to be and understanding movement in that ecosystem.
When you try to apply that broadly, it has all the same problems as everything else. You start to run into the inability of the DLP system to understand where that data is supposed to be—is this person supposed to have it, based on their role and their function? It’s not a smart technology like that, so you end up having to write these very complex rules that are hard to manage.
What I liked about Tessian is that it gave us an opportunity to use the machine learning in the background, to try and develop context about whether something that somebody was doing was either atypical—or maybe it’s not atypical, it’s part of a bad process, but by the very nature of the type of information they’re sending around and the characteristics of that information—we can get a sense of what they’re doing and whether it’s causing us risk.
So, it doesn’t require us to be completely prescriptive about what we’re doing. It allows us to learn, with the technology and with the people, about what normal patterns of behavior look like—and, therefore, intervene when it matters, and not every time another bell goes off.