No one becomes what they expect to be, or even what they expect to want to be, but people often evolve into what they actually want or have to be. People who want to become jerks become jerks as they get older. People who want to become smarter get smarter. I can say with confidence, however, that when I was 20 years old I didn’t see myself becoming an expert on software politics. It’s better to be aware of office politics than not to be aware, but it’s not the kind of knowledge that most people aspire to acquire when they grow up. It’s not exactly opera or higher mathematics. It’s a dirty trade, and while I’m willing to share my knowledge, I’m not superlatively proud of having it. I share it because it’s important, not because I like the light it puts me in. Machiavelli saw himself as a more general “man of letters” and held strong republican sentiments despite writing a how-to book for despots; he would have been horrified to learn that The Prince would become his legacy and reputation. Like him, I don’t want to become “software politics guy”. I’d prefer that my legacy not be “he who deconstructed Silicon Valley, possibly contributing in a diffuse way to its demise”. I’m far more interested in what can be created after it’s gone. At any rate, software politics is a topic that I know well, and I’m not afraid to share that knowledge with others in order to better the state of our industry.
Career-wise, I’ve done alright. I’m more successful than one might think from what I let on. That’s because most of my failures became public (in some cases, against my will) while I’ve been extremely private about my career movement since mid-2012. I made most of my mistakes when I was young and hadn’t yet mastered the management of a rather awful biological health problem, and I’ve made relatively few errors in the last few years. Still, I spent more time fighting political battles and less time learning than I would have liked, and I’m not exactly taking calls from Facebook’s AI lab or Google X about whether I’d like to work on the next generation of deep neural networks. There are people my age who are 3 to 6 years ahead of me (and, probably, a very small number of people even further ahead than 6 years, starting around $750 per hour) in terms of machine learning knowledge and, as they should, the AI labs would rather hire them. In general, though, that 3-to-6-year gap shouldn’t matter, because I’ve got about five decades of life left, but in the ageist world of private-sector software engineering, it’s a big deal. There are perceptions that I have to fight because I’m 32 and haven’t yet hit a home run.
I haven’t learned, let’s say, how to train a neural network in parallel over 20 petabytes of data. I’ve never needed to do it, and I’ve never had the opportunity to take on a challenge that would require me to get to that scale. I know that I could do it. I could learn any of the things that I don’t know. What I have learned about, through experience, is a large menagerie of organizational hurdles that stand in the way of excellence. Why is there so much mediocrity in the corporate world? What is the cause of terrible source code and uncontrollable technical debt? Why are the career goals of venture capitalists so out of whack with the needs of the companies they fund? Okay, those questions I can answer, because I’m really good at observing a system and taking it apart. Many people won’t like my answers, because I’m relatively immune to the optimistic bias that plagues most people. I’m a hard-line cynic. Excellence is possible, but only if it’s given such a high priority that more pedestrian business goals (risk minimization, control, mandatory social pleasantness, subordination) fall by the wayside. You don’t have to hire PhDs from MIT to have excellence in your organization. You have to hire smart people (and there are more of those than many think) and not just “not get in their way”, but actively protect them from the things that will get in their way. It’s theoretically very easy to get people to excel, but it’s practically very difficult because of the incoherence brought on by organizational demands.
I might not fit the traditional image, but I’m arguably a second-career programmer. I left graduate school after one year, seeing the writing on the wall about the future of the academic career: it wasn’t going to get better. Academia had already sold out a generation and a half, so who could trust it? Not me. I’ll admit that, at age 23 in 2006, being a professional programmer seemed like a distinctly undesirable career. Don’t get me wrong: I loved writing code and solving mathematical problems, but most people who had been in professional software told me that it was a low-status career and that management and “PMs” would make all of the interesting decisions and kick the ugly details down to “code monkeys”.
The consensus, from 2002 to 2007, was that “startups” were a one-time thing that happened in the ’90s and that the gold rush was over and not coming back. College graduates who had the most options went to Wall Street, usually after a one- to eight-year detour through graduate school, but software wasn’t seen as an attractive field unless one could start as a founder (and, for 22-year-olds, that was rare). So my “first career” (if we don’t count the year in graduate school) was in finance. Then 2008 happened and, with the financial world going to shit and bankers getting fired left and right, software began to look a lot more attractive (to me, and to many people) in comparison. This had given the major players of Sand Hill Road just enough time to figure out how to commoditize and profit from “the startup experience”, an effort led by Paul Graham with his incubator, Y Combinator. The engine of young, privileged male quixotry that had emerged by accident in the 1990s could be deliberately and explicitly reassembled in the late 2000s.
Where did I fit into all of this? Well, my experience in finance had convinced me that technology was actually the most important piece of the business, even if undervalued. (Actually, I spent most of my time at a very successful firm that didn’t undervalue technology.) Domain knowledge and split-second decision-making and relationships matter in trading, but the machines were already taking over that game. Finance had (and, in much of the industry, still seems to have) a bizarre anti-technical culture, outside of a few firms (that happen to be taking over the world). I know quants who deliberately hide how much they know about programming because they fear being typecast as programmers and ending up in the IT bonus pool. Nonetheless, it’s fairly obvious in finance and in most of the economy that technology’s role is only going to be more important as time goes on. So, in the late 2000s and 2010s, I’ve taken the time to learn a lot about how computers and software actually work. I could have gone back and become a quant trader, and I may still do that in the future, but I’m glad to have taken the time to learn so much about the machinery that the whole world, at this point, pretty much relies on.
I entered Google at age 27, after three years in a failed startup, and it’s become clear on reflection that I have more in common with the “second-career programmers” who start later in life than with the “natives”. I’ve learned, especially as my role has shifted toward leadership, that second-career programmers are often far better than the more respected “natives” who manage to crack $250,000 at Google before age 30. The second-career programmers, regardless of whether they come from banking or the service industries or even law, are often just as smart, harder working, and (by definition) willing to learn new things. On the other hand, the natives seem to be curious and capable in inverse relationship to their early success. Large tech companies are full of people who climbed the ladder quickly, never bothered to learn anything but Java, and became managers before they gained a mature understanding of technology. These ex-technical types are often the worst, because they attribute their rapid rises to power to merit rather than luck, and that, combined with not having had the time to attain technical maturity, is a devastating combination. I tend to think of second-career programmers as comparable to adult (25+) students in undergraduate college: harder-working, more focused, but unable to fit fully into a social scene that they’ve outgrown. If someone was an attorney or a restaurant owner in a previous career, she’s not going to believe that nothing will get done unless the cargo-cult gods of “Scr(ot)um” are appeased, because she knows better.
At any rate, one thing that I’ve learned about programming is that almost every organization has an “A Team” and a “B Team”. There are at least three sociological levels within the B Team (B1: requirements set by the business, but new code; B2: feature-level work at best, mostly maintenance; B3: support) but, given that I do whatever I can to avoid landing at any level of the B Team, and leave companies if that ever happens, I haven’t inspected its stratification fully. In some cases, the A Team and management overlap entirely, but that’s actually rare because, while I wouldn’t call typical software management progressive, even they understand that highly competent individuals exist who don’t want to be, or don’t have the capability to be, managers. (“Data science” is a designation invented to fast-track mathematically capable engineers to an A Team.) Managers have the intelligence to understand, broadly, the “protect, direct, and eject” notion of management and put the perceived high performers (the “protect” category) in the A Team and the middle (the “direct” category) in the typically much larger B bucket.
Most often, A-Team engineers are shifted toward a highly autonomous, protected R&D group that gets insulated from the sorts of things that made the software career seem, from the outside, so undesirable. They get to work on whatever they want and generally view themselves as entrepreneurs or principal engineers in training. Companies need these A Teams to keep talent, and much more importantly to keep the B1 and B2 engineers motivated because there is something to aspire to, but this structure doesn’t always work out very well. The B-Team engineers (many of whom are just as competent as the A Teamers, because the B Team is typically much larger) eventually figure out that they’re doing most of the work that the business cares about, while A Team engineers tend to do the work that other engineers care about, and the A Team often ends up being resented. The A Team will often face declining relevance and political adversity within the organization, because A Teamers tend to do “the fun stuff” and toss the remainder of the project “over the wall”, and that becomes their reputation. The worst time for the A Team is during a cutback: its people are rarely fired, but they’re often thrown onto the B Team. Worse yet, they often land in non-managerial roles– your typical A Team engineer would rather be on the A Team than manage B-Team work, but would much rather be a manager than actually do line-of-business grunt work– because any open management jobs on the B Team are going to be given to the stand-out B Teamers (with years of institutional knowledge) that the company can’t afford to lose.
It is unstable, internally, to be on a company’s A Team. People who want to be highly compensated and don’t care if they lose their technical edge would do better to become B-Team managers than A-Team engineers. The transition is much easier, too; it’s much easier to keep your head down and wait for a vacancy in B-Team management than to overperform and send out signals that you think you’re better than your assigned work, which is what can happen if you vie for an A-Team position. That said, there’s a long-term career edge in having A-Team experience. You learn a lot more in terms of core computer science skills, you get to pick your technologies, it carries prestige, and it looks good on CVs. If you don’t want to become a manager by age 35, you need a string of A-Team jobs, because the presumption is that B-Team direct (i.e. non-managerial) work experience rots the brain.
I have a non-conventional pedigree (I went to a Midwestern liberal arts college because, at 17, I saw myself as a creative writer rather than a programmer) and had an unusual entry into the software industry, and I’ve been all over the place. Most engineers are typecast as lifelong A- or B-Teamers by age 25, but I’ve been on both. I’ve seen a diversity of political environments, in software, that most people haven’t. I’ve also seen the political forces that prevent excellence, both in individuals and in organizations. I know why most projects and people and organizations in software fail, and I know how to prevent many classes of failures.
So, why is it that I write about organizational politics instead of gradient-boosting algorithms or deep neural networks? It’s not that I view software politics as a high art of computing, because it’s not. I’d much rather be an expert in machine learning than in this stuff, but I never got the early work experience that would have set me up for the former… and I’m only starting to be okay with that. At any rate, I write what I know about, not what I would like to know more about. I’ve been on A Teams and B Teams; I’ve seen A Teams crumble into irrelevance and organizations commit suicide; and I’ve seen A-Team engineers in B-Team environments and vice versa. I’ve encountered an ungodly number of obstacles and a ridiculous number of merit inversions, and what I’ve learned from that experience is that software politics really matters. It doesn’t just stand in the way of excellence for individuals or organizations. Society is held back by the egregious mismanagement of one of its most important industries.
Sure, it’s a lot more fun to learn Haskell and assembly languages and machine learning algorithms than it is to learn about organizational politics, the last of these being hard to learn except through painful experience. Being a programmer myself, I understand the software engineer’s desire to “just write more code” as if that will be enough to solve the problem. On the whole, though, I’m not convinced. The biggest problem in the software industry isn’t cache coherence or race conditions or whether P = NP. It’s the fact that, after sixty years, we still haven’t figured out how to manage our own affairs. We’ve created an industry where there’s a cruel confluence of factors: what we do is so technical and detail-oriented and intellectually deep that it takes many, many years to become any good at it; but at the same time, we’ve inherited the age discrimination and anti-intellectualism of our MBA-culture colonizers.
One of the biggest problems, I’ve come to realize, is that we’ve gone beyond a healthy self-reliance, toward radical other-rejection. We accept exclusionary cultural artifacts like open-plan offices and “Agile” micromanagement (whose negative effects are universal, but afflict privileged young males the least) because we fear competition from “others”– minorities, women, older programmers and second-career programmers. We’ve also allowed our colonial overseers to infect us with a one-sided professional omertà; employers can damage employee reputations with reckless firings and bad references, but employees are expected not to “bad mouth” any employer, no matter how badly it damaged that employee. None of the negativity that I’ve experienced after exposing Google’s use of stack ranking to the press has come from Google’s upper management– much of it has come from engineers and ex-Googlers. A lot of it has nothing to do with what I gave up when I talked to reporters; the fact that I have talked to reporters at all has been used as a justification to deny jobs. Professional omertà may have been proper during the Cravath era of professional employment, when firms went far out of their way to ensure stellar reputations even for the people they let go. Now that it’s one-sided– so-called “back channel” references are standard operating procedure in the incestuous community of Silicon Valley, but people who speak unpleasant truths about previous employers are rapidly blacklisted– it’s time to let it go. I’ve come to realize, as well, that professional omertà is a cultural subproblem living within the macho-subordinate culture of private-sector programming.
Omertà makes sense in communities where the external world is a threat. For example, within organized crime, calling in the authorities (i.e. powerful external agents) can shut business down for everyone. It’s “the nuclear option”. Criminal organizations need a culture of omertà that treats such an action as so offensive and disgusting as to justify severe retribution (against the individual and, often, the family). Programmers aren’t criminals, though. We don’t need professional omertà, and with all the back-channel reference-checking games being played against us, we seem to be hurt by it rather than protected from the outside world. What exactly happens to “us” if we stop being so exclusionary? To be honest, I think it would leave us much better off.
Conventional wisdom is that prices are set by supply and demand, and that’s true but also irrelevant when it comes to human labor, because hardly anyone can define what a worker in a complex job does. Our contributions are diffuse and impossible to measure at an individual level. I would argue that our product isn’t “code”, because source code varies wildly in quality and because much technical output is negative in economic value. Our real product is solving problems in an unforgiving environment. That’s what sets us apart. We solve problems in a world where there are right and wrong answers. Demand for this is undefined but theoretically unlimited, and while demand for technical excellence is low, that has more to do with the wrong people being in charge (i.e. on the demand side) than any inflexible limit. So if we got a more diverse mix of people into software engineering, which is more likely: (a) depression of wages, or (b) improvement of conditions? I would argue for the latter. If we let more women and older people and second- (or fifth-) career engineers in, we’d increase our collective social and organizational skills, and be more able to fight for ourselves. Fighting for ourselves will enable us to market ourselves, increase demand for technical excellence, and bring our compensation up to what we’re actually worth. Programmers like to think of “paper belt” leaders as out-of-touch intellectual lightweights and dinosaurs, but that’s stupid on our part. We need people with those organizational and leadership skills within our ranks. If we keep driving out that diversity and talent, we’ll continue to be led from without, and that will doom us to bad software and worse products and terrible management and nonexistent career support.
It’s the boneheaded macho-subordinate mentality behind the one-sided professional omertà that leaves programmers expected to defend the reputations of employers who’d never defend theirs. It keeps our industry exclusive and, in a world that defines “cool” in terms of youth and arrogance, insular and superficially attractive. It also renders us easy to exploit: underpaid and astronomically undervalued. It hurts us greatly, though. We don’t need professional omertà, because the outside world (the “paper belt”) isn’t our enemy and we don’t need to hide from it. We don’t need an exclusionary culture, because the superior social and organizational skills of those “other types” (e.g. women and minorities and older people) would help us far more than any increase in supply could possibly reduce wages.
These aren’t problems that you’ll find on Kaggle, but they are important.