The unbearable B-ness of software

I’m not Jack Welch’s biggest fan. For one thing, he invented the “rank-and-yank” HR policies that literally decimate companies. I don’t disagree with the idea that companies would improve their health by letting go of 5-10 percent of their people per year, but I think the discovery process involved is impossible and often politically toxic. There is, however, one thing he’s said that I think has a lot of value: “A players hire A players; B players hire C players.” His point is that if you have mediocre management, you’ll actually end up with terrible employees. I would say it’s not limited to hiring. A players make more A players. They teach each other how to be better. Not only that, but they raise the potential for what an A player can do. B players lack the foresight and “ownership” mentality to mentor others, and so they produce unproductive C players.

The insight I had recently is that this applies to software as well. “B” architectural decisions or tooling choices, which seem mediocre but tolerable at the time, create “C” (ugly, defective, or unusable) software. Software contributions often have, not always intentionally, a multiplicative (or divisive) effect across a company, in which case a single programmer can become “manager-equivalent” in impact. This is something that most companies, especially outside of technology, fail to realize about software, and they miss it at their peril.

I’m increasingly convinced that it’s hard to be a great programmer and not be, at least, a half-decent leader. This doesn’t mean that one needs to be a “people manager” or even have any interest in taking that direction. However, a great software engineer is:

  • a decision-maker, because engineers frequently have to choose technologies and make infrastructural choices under conditions of incomplete information.
  • a creative problem-solver, because for the hardest problems there is no “canned” solution. In fact, for some problems (as in machine learning) the best solution may not be known, and simply designing the experiments is a non-trivial problem.
  • a designer, because the ability to create attractive, robust and maintainable software systems is uncommon, and the work is non-trivial.
  • a teacher, because good engineers don’t just leave a wad of undocumented code lying around, but take responsibility for making sure that other people can use it.

How is all of this, when done right, not a leadership role?

Of course, software engineers are not in general treated like leaders in any large company that I know of, and a fair proportion of the people who are hired into software positions just aren’t capable of being leaders. Still, there’s an overwhelming and self-perpetuating culture of B-ness in software, with engineers not trusted to pick their projects and choose their own tools. This culture of mediocrity is the one from which what I called “Java Shop Politics” emerges. I regret the name, however. It’s not fair to single out Java, especially when it was Microsoft, with Visual Basic and the first scratches at IDE-culture, that first attempted to create the commodity programmer world. A better name would be “Big Software Politics”.

I would replace Welch’s A-, B-, and C-player language with a four-class system of dividers, subtracters, adders, and multipliers. I’ve split the “C” category into the truly toxic people who hurt others’ productivity (dividers) and the more harmless people who just don’t get much done (subtracters). Dividers, I think, should be fired quickly if they don’t improve. The only crime of subtracters is to draw more in salary than they produce, but firing them is worse (for morale), so they should be mentored into adders and (eventually) multipliers whenever possible, and gently let go when that proves impossible. Ultimately, no company should retain an employee who doesn’t have the drive and capability to become a multiplier, but it takes time for a person to get there, and firing is an extremely blunt instrument. In general, I wouldn’t fire anyone but a divider.

“B-player” and “adder” seem to correspond neatly, as do “A-player” and “multiplier”. People in the first category can crank out CRUD apps just fine, and write software to spec, but lack the architectural or design skill to build much on their own. Adders are the workhorses who are capable of implementing others’ decisions but unready to make their own, while multipliers deliver growth by making others (and themselves) more productive through their (often far-reaching) contributions.

Management is especially challenging and dangerous because it becomes impossible, almost by definition, for a manager to be a mere adder. A manager’s job is to alter the way people work, and as with stock traders, “random” moves have negative expectancy. The percentage of people who have multiplier-level knowledge, talent, or skill is small– maybe 10 to 15 percent, in a typical company. Managers who don’t have that capacity become dividers who add noise and entropy.

Programming, the art of managing machines, is much the same way. There are always a few junior-level, self-contained additive projects in every company, but the core infrastructural work that will be used by a large number of people is multiplicative– if done well. Done poorly, it reduces capability and has a dividing effect. How is it that software– typically construed as an asset– can have such a divisive effect? The problem is management. When people can’t freely choose which tools they use, or which software they rewrite rather than deal with “as-is”, low-quality software combined with managerial blessing leads to unproductive and unhappy programmers.

At the beginning of this year, I developed a scale for assessing the capability of a software engineer, and I’d formalize it a bit more with a model that first separates software work into three levels:

Level 1: Additive work, such as scripts to generate reports or CRUD business apps. This will typically be written once and read only by the original code author. Relevant question: can you code?

Level 2: Multiplicative work, such as tool development and critical production infrastructure in which performance, scalability, design and code quality matter, because large numbers of people rely on the work. Most of the “sexy” problems fit in this category. Relevant questions: does your work make others more productive? Can they use it? Do they enjoy using it?

Level 3: Globally multiplicative work, such as the design of new general-purpose languages. A level-3 accomplishment needs to be “best in class”, in some way, on a worldwide basis, because its purpose is to push forward the state of the entire industry. Linux and Clojure are examples of level-3 achievements. Most of this work is R&D that few companies are willing to pay for these days. Relevant question: are you doing original work that increases capability globally?

As with any model, this is chock full of simplifications. In reality, there are hard L1 tasks that might be rated above 1.5, and easy L2 jobs that might be “only” 1.7 in difficulty, but for simplicity I’ll assume that tasks can neatly be “bucketed” into one of these three categories. The going assumption is that a programmer’s level represents the level at which she will make the right calls 95% of the time. For a level-2 task, the 2.0 programmer will succeed 95% of the time, the 1.5 programmer will get 50%, and the 1.0 will get 5%, with an “S-shaped” logistic interpolation giving meaning to the fractional levels (e.g. 1.1, 1.9); a small sketch of that interpolation follows the list below. In practice, these concepts are too difficult to define for formal measurement (making it useless even to attempt to measure beyond one decimal place) and the bulk of professional software engineers are between 1.0 and 2.0. While it’s difficult to apply percentiles to software engineering, the population being ill-defined, I’d estimate that:

  • the median full-time professional software engineer is about 1.1. Many senior (20+ years of experience) engineers never crack 1.2.
  • graduates of elite computer science programs are about 1.3.
  • about 1 in 10 professional software engineers are at 1.5 or above.
  • about 1 in 100 software engineers are 2.0 or above.
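
To make that “S-shaped” interpolation concrete, here is a minimal sketch, assuming a standard logistic curve calibrated to the three data points above (at the task’s level: 95%; half a level below: 50%; a full level below: 5%). The function name p_success and the steepness constant k are illustrative choices, nothing more:

    import math

    def p_success(engineer_level, task_level):
        """Estimated probability of making the right call on a task.

        Calibrated so that working exactly at the task's level gives ~95%,
        half a level below gives 50%, and a full level below gives ~5%.
        """
        # Steepness chosen so that a 0.5-level gap spans 95% -> 50%:
        # 1 / (1 + e^(-k * 0.5)) = 0.95  =>  k = 2 * ln(19) ~= 5.89
        k = 2 * math.log(19)
        # Shift by 0.5 so that engineer_level == task_level maps to ~0.95.
        gap = engineer_level - task_level + 0.5
        return 1.0 / (1.0 + math.exp(-k * gap))

    # Matches the figures in the text for a level-2 task:
    # p_success(2.0, 2.0) ~= 0.95, p_success(1.5, 2.0) == 0.50, p_success(1.0, 2.0) ~= 0.05

Nothing in the argument depends on the exact constant; the point is only that capability relative to task difficulty behaves like a steep S-curve rather than a straight line.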

In the corporate world, level-3 software considerations are generally irrelevant. Such efforts tend to have an R&D flavor, and there’s rarely the budget (the limiting resource being the time of extremely high-powered people) or the risk-tolerance for a company to attempt them, so levels 1 and 2 are where the action is. You could safely say that level-1 work can usually be done by an adder or “B player”, while level-2 projects require an “A player”, “10X-er”, or multiplier.

Companies and software managers know, from extremely painful (read: expensive) experience, that level-2 work is hard and risky, that few professional software engineers are competent to do it well, and that even fewer are capable of supervising such work. The result is that they try to minimize the amount of such work, and the degree to which they’re willing to rely on it. If one “unit” of level-2 work can be replaced with four “units” of level-1 work, that seems like the prudent choice, because it’s astronomically easier to hire 1.0-1.3 programmers than to vie for the 1.8-2.1 range, who can only be detected and assessed by other great programmers. This is the essence of the “commodity programmer” culture: create a world in which as much of the work as possible is level 1, and allocate the level-2 work only to people with a “track record”, an assessment that often has more to do with politics and social position than capability.

What goes wrong? Well, the first problem with commodity developer culture is that the bulk of engineers living within it never improve. They stop learning, because there’s no need for them to progress. When companies staff people on subordinate, bite-sized work below their level of capability, they get bored and often decline as time goes on. If you don’t have 1.5+ level work, you’re not going to have many 1.5+ engineers, and you won’t keep any of them for very long. If what high-level work you have is jealously guarded and allocated politically, the best engineers won’t stick around for years to prove themselves adequate for it. The ones without other options will.

The second problem is that projects often become level-2 by accident, and that level-2 needs tend to emerge once the complexity load of all the level-1 work reaches a critical mass. This is akin to Greenspun’s Tenth Rule, which essentially states that when low-level languages are applied to complex problems that require a more “high-level” approach, people end up reimplementing, badly and ad hoc, the features that already exist in high-level languages like Lisp. This shouldn’t be taken as a slight against C: for low-level problems (here, “low-level” pertains to the degree of abstraction involved, and is non-pejorative) where memory management is critical (yes, plenty of those still exist) it is often the best language, but you wouldn’t want to write a complex, modern web application entirely in C. In any case, what this “Rule” is really about is the emergence of complexity, driven by need. Lisp is a well-thought-out high-level language (a level-3 accomplishment) and almost guaranteed to be superior to the domain-specific language (DSL) that a typical corporate programmer, constrained by deadlines, managers, and low autonomy, would “inject” into a C project to add high-level functionality to it on an ad-hoc basis.

I think Greenspun’s Tenth Rule can be generalized. At some point, the complexity load induced by level-1 requirements and work demands level-2 insights and capability. The problem with the commodity developer world is that, because level-2 work is effectively not allowed, the work that does happen at that level occurs accidentally, in an ad-hoc way. A manager might see a neat script (a level-1 accomplishment) written by a junior developer and say, “This whole company should have that. By next Friday.” However, the developer lacks both the engineering skills and the political clout to recognize bad requirements and reject them, the result being an overblown, late, and unmaintainable system that serves many needs poorly instead of doing a few things well. Such are the good intentions that pave the road to hell.

All of this, I think, explains the sorry state of the software industry today. The business world understands software’s problems at a high level– most software is shitty, late, and over budget– and (correctly) concludes that the majority of software developers lack the skills to attack problems at the multiplicative (level-2) scope, while most executives are incapable of assessing software talent at the individual level. The result is a bureaucratic, creativity-killing culture that is tailored to software engineers in the 1.1-1.3 range, a world designed to protect itself against B players. The long-term problem is that this makes it nearly impossible for most software engineers to become 1.5+ A-players. One almost certainly won’t achieve that during one’s day job. Mediocrity self-perpetuates. What companies really need to do is give software engineers a creative “sandbox” (an R&D-like environment) in which they can attempt 1.5+ projects and, if they fail, no one gets hurt. However, I wouldn’t hold my breath, because most managers will see such a sandbox as “not doing real work”.

The result is a culture of “B”-ness in software. That would be acceptable, except that level-2, “A player” needs emerge anyway, out of the need to manage (and, preferably, reduce) the accumulating complexity that additive software generates. Unfortunately, the commodity developer culture is utterly unprepared to approach such problems, and fails miserably at them. The consequence is the slew of “C” (failed) software projects that will litter any corporate environment given enough time.

