
Individuals matter


One of the most common mistakes I see people make when looking at data is incorrectly using an overly simplified model. A specific variant of this that has derailed the majority of work roadmaps I’ve looked at is treating people as interchangeable, as if it doesn’t matter who is doing what, as if individuals don’t matter.


Individuals matter.

A pattern I’ve repeatedly seen during the roadmap creation and review process is that people will plan out the next few quarters of work and then assign some number of people to it, one person for one quarter to a project, two people for three quarters to another, etc. Nominally, this process enables teams to understand what other teams are doing and plan appropriately. I’ve never worked in an organization where this actually worked, where this actually enabled teams to effectively execute with dependencies on other teams.

What I’ve seen happen instead is, when work starts on the projects, people will ask who’s working the project and then will make a guess at whether or not the project will be completed on time or in an effective way or even be completed at all based on who ends up working on the project. “Oh, Joe is taking feature X? He never ships anything reasonable. Looks like we can’t depend on it because that’s never going to work. Let’s do Y instead of Z since that won’t require X to actually work”. The roadmap creation and review process maintains the polite fiction that people are interchangeable, but everyone knows this isn’t true and teams that are effective and want to ship on time can’t play along when the rubber hits the road even if they play along with the managers, directors, and VPs, who create roadmaps as if people can be generically abstracted over.

Another place the non-fungibility of people causes predictable problems is with how managers operate teams. Managers who want to create effective teams end up fighting the system in order to do so. Non-engineering orgs mostly treat people as fungible, and the finance org at a number of companies I’ve worked for forces the engineering org to treat people as fungible by requiring the org to budget in terms of headcount. The company, of course, spends money and not “heads”, but internal bookkeeping is done in terms of “heads”, so $X of budget will be, for some team, translated into something like “three staff-level heads”. There’s no way to convert that into “two more effective and better-paid staff-level heads”. If you hire two staff engineers and not a third, the “head” and the associated budget will eventually get moved somewhere else.

One thing I’ve repeatedly seen is that a hiring manager will want to hire someone who they think will be highly effective, or even just someone who has specialized skills, and then not be able to make the hire because the company has translated budget into “heads” at a rate that doesn’t allow for hiring that kind of person. There will be a “comp team” or other group in HR that will object because the comp team has no concept of “an effective engineer” or “a specialty that’s hard to hire for”; to the comp team, a person is defined by role, level, and location, and someone who’s paid too much for their role and level is therefore a bad hire. If anyone reasonable had power over the process and was willing to use it, this wouldn’t happen but, by design, the bureaucracy is set up so that few people have power.

A similar thing happens with retention. A great engineer I know, who was regularly creating $x0M/yr of additional profit for the company, wanted to move to Portugal, so the company cut his cash comp by a factor of four. The company offered to cut his cash comp by only a factor of two if he moved to Spain instead of Portugal. He left for a company that doesn’t have location-based pay. This was escalated up to the director level, but that wasn’t sufficient to override HR, so he left. HR didn’t care that the person made the company more money than HR saves by doing location adjustments for all international employees combined, because HR at the company had no notion of the value of an employee, only the cost, title, level, and location.

Relatedly, a “move” I’ve seen twice, once from a distance and once from up close, is when HR decides attrition is too low. In one case, the head of HR thought that the company’s ~5% attrition was “unhealthy” because it was too low and in another, HR thought that the company’s attrition sitting at a bit under 10% was too low. In both cases, the company made some moves that resulted in attrition moving up to what HR thought was a “healthy” level. In the case I saw from a distance, folks I know at the company agree that the majority of the company’s best engineers left over the next year, many after only a few months. In the case I saw up close, I made a list of the most effective engineers I was aware of (like the person mentioned above who increased the company’s revenue by 0.7% on his paternity leave) and, when the company successfully pushed attrition to over 10% overall, the most effective engineers left at over double that rate (which understates the impact of this because they tended to be long-tenured and senior engineers, where the normal expected attrition would be less than half the average company attrition).

Some people seem to view companies like a game of SimCity, where if you want more money, you can turn a knob, increase taxes, and get more money, uniformly impacting the city. But companies are not a game of SimCity. If you want more attrition and turn a knob that cranks that up, you don’t get additional attrition that’s sampled uniformly at random. People, as a whole, cannot be treated as an abstraction where the actions company leadership takes impacts everyone in the same way. The people who are most effective will be disproportionately likely to leave if you turn a knob that leads to increased attrition.

So far, we’ve talked about how treating individual people as fungible doesn’t work for corporations but, of course, it also doesn’t work in general. For example, a complaint from a friend of mine who’s done a fair amount of “on the ground” development work in Africa is that a lot of people who are looking to donate want clear, simple criteria to guide their donations (e.g., an RCT showed that the intervention was highly effective). But many effective interventions cannot have their impact demonstrated ex ante in any simple way because, among other reasons, the composition of the team implementing the intervention matters, so a randomized trial or other experiment doesn’t tell you much about any team other than the teams from the trial, operating in the context they were operating in during the trial.

An example of this would be an intervention they worked on that, among other things, helped wipe out guinea worm in a country. Ex post, we can say that was a highly effective intervention since it was a team of three people operating on a budget of $12/(person-day) for a relatively short time period, making it a high ROI intervention, but there was no way to make a quantitative case for the intervention ex ante, nor does it seem plausible that there could’ve been a set of randomized trials or experiments that would’ve justified the intervention.

Their intervention wasn’t wiping out guinea worm; that was just a side effect. The intervention was, basically, travelling around the country and embedding in regional government offices in order to understand their problems and then advise/facilitate better decision making. In the course of talking to people and suggesting improvements/changes, they realized that guinea worm could be wiped out with better distribution of clean water (guinea worm can come from drinking unfiltered water; giving people clean water can solve that problem) and that aid money flowing into the country specifically for water-related projects, like building wells, was already sufficient if it was distributed to the places in the country that had high rates of guinea worm due to contaminated water, rather than to the places it was actually flowing to (locations that attracted a lot of aid money for a variety of reasons, such as being near a local “office” that was doing a lot of charity work). The specific thing this team did to help wipe out guinea worm was to give PowerPoint presentations to government officials on how the government could advise organizations receiving aid money on how those organizations could more efficiently place wells. At the margin, wiping out guinea worm in a country would probably be sufficient for the intervention to be high ROI, but that’s a very small fraction of the “return” from this three-person team. I only mention it because it’s a self-contained, easily quantifiable change. Most of the value of “leveling up” decision making in regional government offices is very difficult to quantify (and, to the extent that it can be quantified, will still have very large error bars).

Many interventions that seem the same ex ante, probably even most, produce little to no impact. My friend has a lot of comments on organizations that send a lot of people around to do similar sounding work but that produce little value, such as the Peace Corps.

A major difference between my friend’s team and most teams is that my friend’s team was composed of people who had a track record of being highly effective across a variety of contexts. In an earlier job, my friend joined a large-ish ($5B/yr revenue) government-run utility company and was immediately assigned a problem that, unbeknownst to her, had been open for years and was considered unsolvable. No one was willing to touch the problem; they hired her because they wanted a scapegoat to blame and fire when the problem blew up. Instead, she solved the problem she was assigned as well as a number of other problems that were considered unsolvable. A team of three such people will be able to get a lot of mileage out of potentially high ROI interventions that most teams would not succeed at, such as going to a foreign country and improving governmental decision making in regional offices across the country enough that the government is able to solve serious open problems that had been plaguing the country for decades.

Many of the highest ROI interventions are similarly skill intensive and not amenable to simple back-of-the-envelope calculations, but most discussions I see on the topic, both in person and online, rely heavily on simplistic but irrelevant back-of-the-envelope calculations. This problem is not limited to cocktail-party conversations. My friend’s intervention was almost killed by the organization she worked for because the organization was infested with what she thinks of as “overly simplistic EA thinking”, which caused leadership in the organization to try to redirect resources to projects where the computation of expected return was simpler, because those projects were thought to be higher impact even though they were, ex post, lower impact. Of course, we shouldn’t judge interventions on how they performed ex post, since that will overly favor high-variance interventions, but I think that someone thinking it through, who was willing to exercise their judgement instead of outsourcing their judgement to a simple metric, could and should say that the intervention in question was a good choice ex ante.

This issue of more legible projects getting more funding is an issue across organizations as well as within them. For example, my friend says that, back when GiveWell was mainly or only recommending charities that had easily quantifiable returns, she basically couldn’t get her friends who worked in other fields to put resources towards efforts that weren’t endorsed by GiveWell. People who didn’t know about her aid background would say things like “haven’t you heard of GiveWell?” when she suggested putting resources towards any particular cause, project, or organization.

I talked to a friend of mine who worked at GiveWell during that time period about this and, according to him, the reason GiveWell initially focused on charities that had easily quantifiable value wasn’t that they thought those were the highest impact charities. Instead, it was because, as a young organization, they needed to be credible and it’s easier to make a credible case for charities whose value is easily quantifiable. He would not, and he thinks GiveWell would not, endorse donors funnelling all resources into charities endorsed by GiveWell and neglecting other ways to improve the world. But many people want the world to be simple and apply the algorithm “charity on GiveWell list = good; not on GiveWell list = bad” because it makes the world simple for them.

Unfortunately for those people, as well as for the world, the world is not simple.

Coming back to the tech company examples, Laurence Tratt notes something that I’ve also observed:

One thing I’ve found very interesting in large organisations is when they realise that they need to do something different (i.e. they’re slowly failing and want to turn the ship around). The obvious thing is to let a small team take risks on the basis that they might win big. Instead they tend to form endless committees which just perpetuate the drift that caused the committees to be formed in the first place! I think this is because they really struggle to see people as anything other than fungible, even if they really want to: it’s almost beyond their ability to break out of their organisational mould, even when it spells long-term doom.

One lens we can use to look at what’s going on is legibility. When you have a complex system, whether that’s a company with thousands of engineers or a world with many billions of dollars going to aid work, the system is too complex for any decision maker to really understand, whether that’s an exec at a company or a potential donor trying to understand where their money should go. One way to address this problem is to reduce the perceived complexity by imagining that individuals are fungible, making the system more legible. That produces relatively inefficient outcomes but, unlike trying to understand the issues at hand, it’s highly scalable, and if there’s one thing that tech companies like, it’s doing things that scale; treating a complex system like it’s SimCity or Civilization scales very well. When returns are relatively evenly distributed, losing out on potential outlier returns in the name of legibility is a good trade-off. But when ROI is heavy-tailed, when the right person can, on their paternity leave, increase the revenue of a giant tech company by 0.7% and then much more when they work on that full-time, then severely tamping down on the right side of the curve to improve legibility is very costly and can cost you the majority of your potential returns.

Thanks to Laurence Tratt, Pam Wolf, Ben Kuhn, Peter Bhat Harkins, John Hergenroeder, Andrey Mishchenko, Joseph Kaptur, and Sophia Wisdom for comments/corrections/discussion.

Appendix: re-orgs

A friend of mine recently told me a story about a trendy tech company where they tried to move six people to another project, one that the people didn’t want to work on that they thought didn’t really make sense. The result was that two senior devs quit, the EM retired, one PM was fired (long story), and three people left the team. The team for both the old project and the new project had to be re-created from scratch.

It could be much worse. In that case, at least there were some people who didn’t leave the company. I once asked someone why feature X, which had been publicly promised, hadn’t been implemented, and why the entire sub-product was broken. The answer was that, after about a year of work, when shipping the feature was thought to be weeks away, leadership decided that the feature, which was previously considered a top priority, was no longer a priority and should be abandoned. The team argued that the feature was very close to being done and they just wanted enough runway to finish it. When that was denied, the entire team quit, and the sub-product has slowly decayed since then. After many years, there was one attempted reboot of the team but, for reasons beyond the scope of this story, it was done with a new manager managing new grads and didn’t really re-create what the old team was capable of.

As we’ve previously seen, an effective team is difficult to create, due to the institutional knowledge that exists on a team, as well as the team’s culture, but destroying a team is very easy.

I find it interesting that so many people in senior management roles persist in thinking that they can re-direct people as easily as opening up the city view in Civilization and assigning workers to switch from one task to another when the senior ICs I talk to have high accuracy in predicting when these kinds of moves won’t work out.