When I was five years old, I experienced a first lesson in the Second Law of Thermodynamics. I had been given eight jars of brightly colored clay. Fancying myself an artist, I mixed colors together to make new ones, like the great painters did. And then I made newer colors out of the new ones. Within an afternoon, all of the clay was the same unappealing brownish-gray. I realized that I wanted the brighter, prettier colors back and tried to find an adult who knew how to separate them. Of course, that wasn’t possible. It had become a rather ugly high-entropy color, a “grey goo” of sorts.
Physically speaking, the overwhelming tendency of a closed system is toward increasing disorder: metal rusts, heat dissipates, and organisms die of starvation, unless more energy is put into the system. The opposite is so extremely improbable that, for practical purposes, it can be ruled out. Applied as a metaphor to corporate existence, this is quite useful. A corporation is not a closed system, any more than a machine like a computer or an automobile is. It can be repaired. Decay is common, but not inexorable. Still, the pattern is common enough that the concept of corporate entropy doesn’t stretch the imagination beyond typical experience.
Why does organizational decay happen? I think that it’s obvious: it happens a little bit at a time, as people pursue actions that benefit them personally, at a cost (usually small) to the organization. Often this cost is imperceptible: the company becomes a little more “political”, and the people in it become more defensive and less trusting. For example, an employee wishing to outcompete another for a promotion, and unable to win on merit, might draw attention to the latter’s tendency to work odd hours. A year later, probably beyond anything he intended, programmers are getting screamed at for showing up at 9:15.
Every machine has a fuel that it requires to maintain operations, and secondary “wear and tear” that is the corrosion resulting from consuming that fuel. For example, a car’s fuel is gasoline, and a full tank will take it only a few hundred miles. Every few thousand miles, moreover, the driver must have the machine serviced to compensate for the physical damage done by combustion. An organization’s fuel is the human energy put into it as its various players seek some sort of yield or advantage from it: possibly, they’d like higher salaries or better expense accounts; alternatively, they might wish to use its resources and relationships in a different way. To put it succinctly, its fuel is incentive. The secondary corrosion consists of all the side effects of the (often unintentionally) harmful actions that people take because of those incentives.
One model of an organization that, while quite limited in general, might prove useful here is that of the organization as a knowledge base. A knowledge base is nothing more than a set of “facts” about the world, and I use quotation marks because we are dealing in the real world and not all players are trustworthy, so it’s quite normal that some of those “facts” are untrue. False facts, however, are rarely retracted; indeed, the high-entropy state is not an empty knowledge base, but an over-constrained, complex, and self-contradictory one from which everything is arguably true. For a concrete example, consider the above scenario in which one contender for a leadership role uses the other’s odd hours to put him out of contention. This adds, to the organizational knowledge base, the false fact that people who work odd hours are unreliable and unfit for leadership positions. It’s not actually true, and it may be harmful to the organization in the future, but it becomes a fixture of the working culture. An enabling force behind the generation of false facts is detrimental consistency: the organization’s desire (a) to infer facts (often false) from previous decisions, facts that may not reflect the actual reasons why those decisions were made (organizations eager to believe in their own meritocracy often ignore the political inputs to decisions, and thereby cement politics into their own fabric), and (b) to adhere to that false knowledge even in the face of contrary evidence, judging it better (more masculine, more “strong”) to be consistently wrong than to change its mind and be inconsistent. Over time, the organization builds up patterns of behavior that render it unfit to succeed on its own terms, and it begins to fail.
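The claim that a self-contradictory knowledge base makes “everything arguably true” is, incidentally, a real theorem of classical logic: the principle of explosion, under which a contradiction entails any conclusion whatsoever. A toy sketch (the function and fact names are hypothetical illustrations of my own, not anything from the essay) makes the mechanism concrete:

```python
def entails(kb, query):
    """Return True if `query` follows from the knowledge base `kb`.

    Facts are plain strings; a negated fact is written "not " + fact.
    This toy checker knows only two rules:
      1. a fact present in the KB is entailed;
      2. an inconsistent KB (one containing both X and "not X")
         entails *everything* -- the principle of explosion.
    """
    def negation(fact):
        return fact[4:] if fact.startswith("not ") else "not " + fact

    inconsistent = any(negation(fact) in kb for fact in kb)
    return inconsistent or query in kb


# A healthy KB only entails what it actually contains.
healthy = {"odd hours correlate with focus"}

# One political maneuver later, the contradictory "fact" is added
# and never retracted; now any claim at all can be "justified".
poisoned = {"odd hours correlate with focus",
            "not odd hours correlate with focus"}
```

With `poisoned`, `entails(poisoned, "anyone can be fired for anything")` returns `True`: once the contradiction is in, the KB can be used to argue for any decision, which is the essay’s point about political “facts” that never get retracted.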
Some people can see this process as it happens. So why is it so rare that anyone fixes it? The proximate answer is the obvious one: it’s politically dangerous. There’s minimal upside in fixing one’s organization, since leadership positions are often allocated to uncontroversial people (preferably external celebrity-manager types upon whom the board can pin its hopes) rather than to those who made enemies by solving real problems. On the other hand, the mere attempt to solve an organizational problem can get a person fired. In other words, the best-case scenario is making someone else’s job easier: that of the person who actually gets the leadership position one had hoped to earn by spotting, and possibly fixing, the problem. When an organization begins to fail, it no longer trusts its own people to solve its problems. It can’t tell whether a would-be fixer is genuinely trying to help out, or is just another political player who’ll rack up more organizational entropy while accomplishing little. By this point, the organization can’t self-repair. In desperation, it often gives absolute power to externally hired executives, a “Hail Mary” pass that might work one time out of five, and throws its working class into the churn of re-org after re-org.
To be honest about the entropic metaphor, of course, organizations are far from closed systems. They take in new people constantly (although they often type-cast them into old roles) and, without a continuing infusion of fuel (money), they die. Organizational self-repair could be possible. A major reason why it doesn’t happen in (for one example) the software industry is that most of these companies aren’t built to last, but to be sold quickly and make their founders and investors rich. If a company will be dead or sold for billions within five years, then no one cares what will happen in six. Of course, organizational decay often happens more rapidly than even the founders expected, and intractable “technical debt” can pile up in a codebase in a matter of weeks, but that’s a risk that founders and investors tolerate. Only 1 in 10 of these disposable, VC-funded companies is intended to last anyway, so a culture of brutal recklessness that leaves 9 in 10 of them blowing up on the launch pad, due to extreme technical debt and rapid organizational corrosion, is considered tolerable. For investors and founders, it works very well, because it enables them to dissociate themselves from specific companies and efforts, while making boatloads of money when one effort happens to strike the (short-lived) gold vein of an emerging natural monopoly. It’s the perfect “I don’t know what I’m doing but have loads of money” strategy. For employees and for society, it’s pretty terrible, but we’ve already been over that topic.
Not all businesses are VC-funded, of course, and many would like to be competent at organizational self-repair. So how does that work? I think that there are a few obvious statements to make about it. First, self-repair has to happen at all levels of the organization. If the worker bees are incentivized to move fast and never fix anything, then no amount of executive banner-waving can get necessary repairs made, especially since executives are often the least informed, at a fine level, about where repairs are needed. The prospect of a $10,000 good-citizen “spot bonus” for taking initiative on organizational or technical repairs isn’t going to work when the downside (getting fired because your boss suspects that you are chasing these bonuses and blowing off your regular work) is so much greater. Second, as somewhat of a corollary, self-repair at all levels of the organization requires trust density. Individual contributors won’t repair the organization (often at a cost to individual performance “metrics”, if those exist) if they don’t trust it to evaluate their performance fairly. On the other side, management won’t relax hard expectations in favor of soft, repair-oriented contributions unless it trusts those being managed. Thus, a trust-sparse organization is probably incapable of self-repair. Third, an organization needs a culture designed to enable self-repair, and that comes down to a rather simple notion: A Minus B.
Immediate organizational health is often measured in terms of profits: revenue minus cost, “the bottom line”. This is one way to quantify a certain type of economic health, but not the only one. A Minus B is more abstract, but works on the same principle. “A” is benefit, and “B” is cost. These may or may not be directly measurable: for example, “A” might be technical excellence, whose influence over revenue will be massive in the long term but which is difficult to measure in the short term, and “B” need not be dollar cost but could represent a risk budget or some other finite resource (morale, attention). “A” is what is being achieved; “B” is what must be committed, sometimes quantifiable and sometimes more abstract. Organizations can, of course, improve their health by maximizing “A”, or by minimizing “B”. In practice, both of these capacities are necessary in order to run a business.
Programmers often bemoan the B-centric focus of most employers. We want to make more A (that is, build cooler systems, solve harder problems) but we often get stuck reducing B: that is, helping businessmen unemploy people. This isn’t giving us the 21st century that we want, and I hope that, here, I can begin to explain why this misuse of talent is happening.
B-ists (that is, B minimizers) are good at finding a local optimum and staying put there, while A-ists (A maximizers) excel at non-local explorations. Or, to put it another way, A-ists are the creative visionaries who risk divergence, and B-ists are the constraint cops who mandate convergence. Over time, most good A-ists learn that they also need to become aware of B-ist concerns, which I might call self-curtailing. If nothing else, they need to be able to actually finish their own projects, and to work together with other A-ists who might have differing visions. Necessity pushes the natural A-ists in the B-ward direction. On the other hand, B-ists are never pushed to become A-ists, because their organizational or career survival simply doesn’t require it. A reflexive, mindless, ultimately harmful cost-cutter can always find a home in Corporate America.
Why is this B-minimizing bias a problem? First of all, B-ism is limited in its possible yield. An organization needs enough B-ism to keep costs from spiraling out of control, but reducing costs to zero isn’t possible or desirable. The amount of cost that one can cut before doing severe harm is limited. On the other hand, A-ism has nearly limitless potential. Secondly, I think that the damage done by incompetent B-ists is, often, what kills a company’s culture. For every genuinely capable cost-cutting B-ist who has the company’s interests at heart, there are ten or twenty who just externalize costs, either to society or elsewhere in the company where they are harder to measure: for example, by imposing open-plan offices on programmers, demolishing productivity and worker health in order to save a measly few thousand dollars per head on office space.
Incompetent A-ists don’t survive, because even competent A-ists have a hard time keeping afloat in the corporate world, with their ideas constantly under attack. Incompetent B-ists, however, thrive and often become feared. Most of what makes corporations so awful is the work of these incompetent B-ists. They don’t have the intelligence or creativity necessary to make actual improvements in efficiency, so they “cut costs” in a way that is actually more expensive in the long term. The galling thing is that they never get called out for doing so. Why? I think that most business people (and almost all of the private-sector social climbers whom we call executives) have a disproportionate fetish for false objectivity. They believe that more is measurable than actually is, to their detriment. This creates a common language between them, one that produces false facts at an alarming rate. You can’t get 10 executives to agree on one A-ist vision without accusations of bike-shedding, empire-building, and loyalty testing flying toward the person proposing it. On the other hand, it’s relatively easy to get them to agree on a relatively innocuous claim like, “We need to cut travel expenses.”
That cutting travel expenses is sometimes a good idea isn’t controversial. The best way to accomplish this is not to make travel more unpleasant for the employees by being stingy, but to cut unnecessary and unwanted travel outright, such as by cancelling the trips that are glorified status meetings that no one really enjoys. The false fact that gets introduced isn’t that there’s a benefit in reducing unnecessary travel; it’s the idea that cutting travel expenses is an important goal, to such an extent that it merits all-fronts stinginess. It isn’t an important goal. Companies don’t die because, against the will of their B-minimizers, they spend too much on travel; they die because they stop listening to their A-maximizers and fail to excel.
In fact, the fundamental problem with B-ist cost-cutters is that they spend far too much time treating symptoms rather than the underlying condition: complexity accrued from false facts in the knowledge base, detrimental power relationships and outdated reporting topologies, and organizational overreach. Just as layoffs rarely improve a company (since many organizations seek to cut people labelled as non-performers rather than cutting organizational complexity, the end result being that the post-layoff company is still over-complex and simply more understaffed), so too are cost-cutters rarely able to remove complexity.
Let’s say, on the travel example, that one team in a company, during a time of plenty, begins flying all of its remote engineers out to California for their weekly status meeting. Since people have flown in for the meeting, it ought to justify the flight time, and so it expands into a day-long affair. Managers of other teams notice what this team is doing and follow suit, because nothing says Important Work quite like an all-day status meeting every Monday and a $5,000-a-pop travel hit. Now you have a culture in which a significant proportion of the workforce is flying out to California on a weekly basis for a status meeting that most of them would prefer not to attend. The employees are “abusing” expense accounts against their will. That’s one form of complexity: a perceived and recurring obligation that drains the company and its people. Unfortunately, those “essential” status meetings are unlikely to be cut before non-executives’ relocation budgets and conference allowances get hit, even though those sources of travel costs are minimal in comparison to the status meetings. Why? Because it’s more politically expedient for a bad-faith, B-ist cost-cutter to slash perks that the proles care about than to take on a bunch of powerful managers over the actual complexity and waste (and to raise the deeper question of whether these status meetings are necessary at all).
The corporate world’s false objectivity and foolish consistency cultures, of course, enable the sort of senseless cost cutting that, in the long term, renders the organization brittle. Stop offering relocation packages, and you won’t be able to compete for non-local talent. Stop paying for conferences, and your best people will start to lose faith in their management’s investment in their continued progress, and they’ll start looking elsewhere. Cutting essential costs is devastating, reducing “A” far more than it reduces “B”. Unfortunately, once a cost-cutting priority enters the corporate knowledge base, it tends to stick around forever. Those who challenge it are, often, themselves derided as “weak”, “unable to make hard choices”, or “not a team player”, when they point out the negative effects that these mistakes have had on their own teams and organizations.
In the long term, the organization’s A-ist concerns (excellence maximization) dissipate, and the legacy complexity left by B-ists is what remains of the increasingly stingy, controlling, distrustful organization. The knowledge base becomes full of “grey goo”, and the sense that the corporation is no different from any other large, bureaucratic, just-slightly-malevolent company increases. This rarely changes, and often it can’t, because those who propose alternatives to the “grey goo” (whether they’re A-ists or just a more thoughtful brand of B-ist) tend to fall under personal attack. Their motives are questioned, and what is vision to them (and may be objectively beneficial) is derided as “bike-shedding” by their enemies. If they point out the losses incurred by mindless cost-cutting, others get defensive: “Everyone else is taking hits under The New Policy, why can’t you?” Over time, the people with the vision either leave or fail out, and what’s left is a company whose purpose is self-referential: its reason for existing is to do what it does, slightly more cheaply (and, if it can get away with it, more crappily) than yesterday. It’s not surprising that such a company would quickly devolve into the default (Gervais / MacLeod) model of an organization: a pile of resources and the cloud of people around it, trying to take what they can out of it. (See also: my exploration of the topic.) Then we arrive at the cubicles (or open-plan offices) and TPS reports for which the corporate world is so well known. What vision used to drive the organization is forgotten, overwritten, and mixed in with the rest of the muck.