The previous post in this series describes the current state of the peer review program. But perhaps more important than what the system looks like today is how it evolved. We made a conscious decision to apply many of the same principles our development teams use. Instead of designing an entirely new program in a conference room and then rolling it out to the entire company, we would proceed incrementally. It’s served us well so far and we don’t plan on stopping now.
We expect our development teams to be customer-oriented. They should understand the effect their work has on customers and always try to give them a great experience.
We’ve tried to operate this process the same way. We provide this program as a service to many employees, so it can be easy to focus only on the process. But for each person who uses this service, it’s intensely personal. It’s important to keep that in mind by establishing feedback loops and engaging with employees to understand how we’re meeting their needs.
As part of this, we also need to maintain a system our employees can trust. For us, that means upholding a few core pillars that can be relied upon never to change, even as the details of the program evolve. The pillars are: You will be reviewed by a group of your peers. You will be treated fairly. Private information will be handled appropriately. We will avoid moving the finish line.
Use data to learn and make decisions
A development team adding a major new feature would be expected to understand its impact on customers: How many people are engaging with it? Are they getting a good experience (fast, without errors)? Are they converting? We try to take a similar approach to this program, gathering the data we need to understand how employees are using the system and to make the next decision about its evolution.
So far, we’ve tried to gather roughly two kinds of data: data about the flow of people through the system (mostly quantitative) and data about people’s experiences with the system (quantitative and qualitative).
The flow data includes things like how long the process takes from end to end, how many people are participating at once, how long each individual step in the process takes, and the number of promotions that are happening. We primarily use it to identify bottlenecks and understand the cost of running the program.
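As a sketch of how flow data like this might be used to find a bottleneck, here is a minimal example. The step names, timestamps, and record shape are hypothetical illustrations, not the program’s actual schema:

```python
from datetime import datetime

# Hypothetical review records: each maps a process step to
# (start, end) ISO dates. Step names are illustrative only.
reviews = [
    {
        "solicit_feedback": ("2024-01-02", "2024-01-16"),
        "reflect": ("2024-01-16", "2024-01-20"),
        "panel_review": ("2024-01-20", "2024-02-10"),
    },
    {
        "solicit_feedback": ("2024-02-01", "2024-02-12"),
        "reflect": ("2024-02-12", "2024-02-15"),
        "panel_review": ("2024-02-15", "2024-03-05"),
    },
]

def step_durations_days(records):
    """Average duration of each step, in days, across all reviews."""
    totals, counts = {}, {}
    for record in records:
        for step, (start, end) in record.items():
            days = (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days
            totals[step] = totals.get(step, 0) + days
            counts[step] = counts.get(step, 0) + 1
    return {step: totals[step] / counts[step] for step in totals}

averages = step_durations_days(reviews)
# The step with the largest average duration is the likeliest bottleneck.
bottleneck = max(averages, key=averages.get)
```

With data in this shape, end-to-end time, per-step time, and throughput all fall out of the same records, which is what makes the bottleneck comparison cheap to run each quarter.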
The experience data includes things you might find in a customer satisfaction survey, such as: Did you find the process to be fair? How valuable was this for you? We primarily use it to gauge employee perception of the program.
Constant delivery of value
Another aspect of the agile mindset we incorporated here was a focus on constant delivery of value. In this space, that principle has three major implications.
The first is that it forces a recognition of what constitutes value. By far the biggest thing we do to create value in this program is ensure that peer reviews are being completed. With our customer-oriented mindset, we have to keep in mind that this is a highly personal and meaningful experience for anyone who goes through it. Every single review that completes represents value for us and our employees.
The second implication is that it forces a balance between delivering value and continuous improvement. As much as we might like to focus all our efforts on making the system better (and it can always be better), we can’t actually spend all our time that way. We create value by completing reviews, and we have to be sure we’re balancing our time appropriately.
The third implication comes in understanding how our employees get value from this process. It isn’t merely when the process is done and they’ve received a recommendation. They’re getting value when soliciting feedback from their peers and when they’re reflecting on that, too. Understanding the flow of value over time is important in knowing where to prioritize improvements.
Continuous improvement has been a huge part of this process from day one.
A team of people is always focused (part-time) on the program. We have many of the same habits as software development teams. We maintain a backlog and have rituals for grooming and prioritizing it. We hold a retro after every single review completes, where we identify opportunities for improvement. We set goals and measure ourselves against them. When we make changes, we pursue them incrementally.
This all came together during a major evolution we undertook several months ago.
In the previous iteration of the system, candidates didn’t solicit their own feedback; panels collected feedback about them. Besides being culturally misaligned, this was, according to our data, the biggest bottleneck in the system. Panels were spending so much time gathering feedback about candidates that it severely limited the number of reviews we could complete in a given quarter.
The team directing the program came up with a potential improvement (the state we’re in now). We refined the idea with input from some of our employees. We defined success criteria and tweaked our customer satisfaction surveys to gather the data we’d need to measure against them. A few employees volunteered to test the evolution and helped us put it through its paces.
The results were great. We didn’t quite meet all our goals, but it was a huge improvement. Candidates are finding the new process more valuable, and they’re getting that value sooner. It’s taking less time overall, enabling more people to move through the system.
And those goals we didn’t quite meet? There are already new cards in our backlog for making more improvements.
One very important takeaway
As part of operating this program, we collect data from candidates in the form of a satisfaction survey. While there are always areas for improvement, marks are high across the board. However, one piece of data has been particularly interesting and could be a useful takeaway even outside the confines of this process.
After the process is complete, candidates are asked to stack rank the different parts of the process in order of value (i.e., which parts of the process did you find more valuable than others?). The parts of the process include soliciting feedback, reflecting on that feedback, receiving net new feedback from the panel, and hearing the panel’s recommendation.
Almost without exception, every candidate has found the most valuable parts of the process to be gathering feedback from their peers and reflecting on that feedback.
Regardless of whether an employee is ultimately promoted, the same trend holds.
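A stack-rank survey like this can be summarized by averaging each part’s position across respondents. The sketch below assumes made-up rankings and shortened part names; it is an illustration of the aggregation, not our actual survey data:

```python
from collections import defaultdict

# Hypothetical responses: each candidate lists process parts from
# most valuable (first) to least valuable (last).
rankings = [
    ["solicit", "reflect", "panel_feedback", "recommendation"],
    ["reflect", "solicit", "panel_feedback", "recommendation"],
    ["solicit", "reflect", "recommendation", "panel_feedback"],
]

def mean_rank(all_rankings):
    """Mean position of each part across all rankings (1 = most valuable)."""
    totals = defaultdict(int)
    for ranking in all_rankings:
        for position, part in enumerate(ranking, start=1):
            totals[part] += position
    return {part: total / len(all_rankings) for part, total in totals.items()}

scores = mean_rank(rankings)
# The part with the lowest mean rank was found most valuable overall.
most_valuable = min(scores, key=scores.get)
```

Mean rank is the simplest aggregate; with more respondents you might also look at the distribution of positions per part, since a part that is polarizing can have a middling mean.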
If you’re looking to make one improvement to your organization: bring in training programs to help everyone share feedback effectively. Create systems that let them solicit feedback in a structured way. Give them the space to think about what they heard and how they need to grow. You will have to push through an initial comfort barrier, and it isn’t for everyone. But the majority of skeptics are won over once they simply try it for themselves.