Tessian’s mission is to secure the human layer by empowering people to do their best work, without security getting in their way.
We have been using OKRs (Objectives and Key Results) at Tessian for over 18 months now, including in the Engineering team. They’ve grown into an essential part of the organizational fabric of the department, but it wasn’t always this way. In this article I will share a few of the challenges we’ve faced, lessons we’ve learned and some of the solutions that have worked for us. I won’t try to sell you on OKRs or explain what an OKR is or how they work; there’s plenty of great content that already does that!
When we introduced OKRs, there were about 30 people in the Engineering department. The complexity of the team was just reaching the tipping point where planning becomes necessary to operate effectively. We had never really needed to plan before, so we found OKR setting quite challenging, and we found ourselves taking a long time to set what turned out to be bad OKRs. It was tempting to think that this pain was caused by OKRs themselves. On reflection today, however, it’s clear that OKRs were merely surfacing an existing pain that would have emerged at some point anyway. If teams can’t agree on an OKR, they’re probably not aligned about what they are working on. OKRs surfaced this misalignment early: a little pain during the setting process that prevented a much larger pain later in the quarter, when the misalignment would have had a far greater impact.
The Key Result part of an OKR is supposed to describe the intended outcome in a specific and measurable way. This is sometimes straightforward, typically when a very clear metric is available, such as revenue, latency or uptime. However, in Engineering there are often KRs that are very hard to write well. It’s too easy to end up with a set of KRs that drive us to ship features on time but say nothing about quality or impact. The other pitfall is aiming for a very measurable outcome that is based on a guess, which is what happens when there is no baseline to work from. Again, these challenges exist without OKRs, but without OKRs to force the issue, the conversation about what a good outcome looks like for a particular deliverable may never happen. Unfortunately we haven’t found the magic wand that makes this easy, and we still have some binary “deliver the feature” key results every quarter, but these are less frequent now. We will often set a KR to ship a feature and set up a metric in Q1, then set a target for that metric in Q2 once we have a baseline. Or, if we have a lot of delivery KRs, we’ll pull them out of individual OKRs altogether and zoom out to set a single KR around their overall impact.
An eternal debate in the OKR world is whether to set OKRs top-down (leadership dictates the OKRs and teams/individuals fill out the details), bottom-up (leadership aggregates the OKRs of teams and individuals into something coherent) or some mixture of the two. We use a blend: we draft department OKRs as a leadership team and then iterate a lot with teams, sometimes changing them entirely. This takes time, though. Every iteration uncovers misalignment, capacity, stakeholder or research issues that need to be addressed. We’ve sometimes grown frustrated and rushed the process, as it can feel like a waste of time, but whenever we’ve done this, we’ve just ended up with bigger problems further down the road that are harder to solve than setting decent OKRs in the first place. The lesson we’ve learned is that effort, engagement with teams and old-fashioned rigor are required when setting OKRs, so we budget 3-4 weeks for the whole process.
The last three points have all been about setting OKRs, but what about actually using them day to day? We’ve learned two things:
First, flex. Our OKRs are quarterly, but sometimes we need to set a 6-month OKR because it just makes more sense, and we encourage that. We don’t obsess about making OKRs ladder up perfectly to higher-level OKRs. It’s nice when they do, but if this is a strict requirement, we find it’s hard to write OKRs that actually reflect the priorities of the quarter. Sometimes a month into the quarter, we realize we set a bad OKR or wrote it the wrong way. A bit of flexibility here is important, but not too much. It’s important to learn from planning failures, but it’s probably more important that OKRs reflect teams’ actual priorities and goals, or nobody is going to take them seriously. So tweak that metric or cancel that OKR if you really need to, but don’t go wild.
Finally, process. If we don’t actively check in on OKRs weekly, we find that much of the value we get from them is diluted: course corrections come too late and concerns go unresolved for too long. To keep this sustainable, we do it very quickly. I have an OKR check-in on the agenda for all my 1-1s with direct reports, and we run a 15-minute group meeting every week with the Product team where each OKR owner flags any OKRs that are off track, and we work out what we need to do to get them back on track. Often this means opening a Slack channel or drafting a document to solve the issue outside of the meeting, so that we stick to the strict 15-minute time slot.
Many of these lessons have come from suggestions from the team, so my final tip is this: if you’re embarking on using OKRs in your Engineering team, or if you need to get them back on track, make sure you set aside some time to run a retrospective. This invites your leaders and managers to think about the mechanics of OKRs and planning, and they usually have the best ideas on how to improve things.