
The problem with “one thing only”


As an antidote to the mediocrity and lack of clarity induced by corporate multitasking, Peter Thiel developed an alternative management approach while at PayPal: each person focuses on a single contribution. This approach has a lot of benefits. It keeps people focused, takes advantage of convexity (accelerating returns) in difficult work, and removes excuses. It will also never work in the modern corporation.

“Why? Because Reality, that’s why.”

Here are three major problems with “one thing only” in a traditional corporate environment:

  1. There's a lot of work that is important, but there isn't enough of it to make up anyone's "one thing".
  2. The question of who gets to set a person’s “one thing”– immediate management? upper management? the employee?– becomes extremely political.
  3. It eliminates the possibility for the employee to create redundancy in his “support network”, which is a necessity to keep the company stable.

Sporadic work

A lot of work is sporadic. It needs to be done, but there isn't enough of it to justify a full-time, 2000-hour-per-year effort. This is especially true in startups. The CEO can't afford to neglect nasty paperwork because it's not part of his "one thing". Engineers shouldn't take that attitude either. There's a lot of sporadic work that's extremely important, even if it fits poorly into a "one thing only" performance review process. For example, in a small company, recruiting is sporadic work, because there just isn't enough activity to justify a full-time effort. Production support often becomes another category of such work, because most companies don't want a dedicated person on it: (a) it's usually unpleasant, and people will expect very high compensation to do it full time, and (b) that person can easily become a "single point of failure" for the business. When support duties rotate, they become another category of sporadic work. It's not part of anyone's "one thing", but it needs to be done.

Under a “one thing only” regime, sporadic work inflicts a performance penalty, because it means that less time can be dedicated to the assigned task. People who take it on tend to fall into two categories: (a) those who neglect it, because they won’t be rewarded for it, and (b) those who take it on eagerly and do it well, because they’ll be using that experience in a near-term job search. In other words, sporadic work is either done by people who are blowing it off, or people who are boosting their careers and likely looking for an external move– in part because a “one thing only” company makes it hard to move internally.

Who sets the “one thing”?

This is the most obvious problem with a "one thing only" management system. Who decides on a person's "one thing"? Is it the immediate manager? If so, you now have a dictatorial company where a single manager dominates all of a person's work. People are still willing to show up in the same place for nine hours per day, five days a week, but very few desirable employees are going to tolerate that sort of unilateral control over their time. Most employees are happy to follow orders from a manager for some of their working time, but sneak away 15 to 25 hours per week to support their careers through (a) side projects, (b) cross-hierarchical work that leads to internal mobility, and (c) online courses. The "fog of war" induced by corporate multitasking, incoherent direction, and convoluted command topologies is what enables them to get away with this self-support; remove it, and you have unhappy employees.

On the other hand, if the person chooses the "one thing", then you have open allocation, whereby employees have the right to direct their own work. Open allocation means that employees are trusted to change their "one thing" as needed, if (a) they believe a different "one thing" will add more value or be more likely to succeed, or (b) they believe that they can effectively focus on two or more things and that it would be beneficial for them to do so. I like all of this, but it's not consistent with a "one thing only" policy.

Under an open allocation policy where people’s creative energies are encouraged rather than stifled, one tends to observe an alternating pattern between divergent and convergent creativity. Divergent phases are characterized by searching and a process that appears disorganized. From a typical micromanager’s perspective, the person doesn’t seem to be producing anything, but is just having a bunch of conversations and reading books. This is an exploratory phase in which interest is often scattered as the person tries to figure out what to create. After finding something, a convergent phase sets in where the person shuts out all distractions and gets to work on that project. This is when the person is behind a closed office door for 12 hours per day, focused singularly on “one thing only”. Complex projects will usually involve several divergent and convergent phases in sequence.

Creative people know the often brutal process that’s required if one wants to achieve anything great. It starts with free expansion (divergence) and ends with brutal cutting (convergence). The problem with typical closed-allocation micromanagement is that it allows only for the latter, and the result is that workers end up “converging” on a product that no one (except the manager, who is often not a direct user of the work) really wants.

Redundancy and support networks

What keeps people in a company? Most HR theorists believe that it’s money, but there are a few problems with that explanation. No one has any real idea what technological or creative work is actually worth, but there is a fairly consistent market salary at any given time. Companies rarely depart from the market figure, the result being that most people can get a comparable job, if not a slight pay raise and a promotion, by changing employers. Money is a part of this, but what really keeps people working at the same place for years is differential social status (DSS): the difference in standing that exists between where a person is, socially, and where he or she would be in another company. This is strongly relevant because a person’s happiness while actually at work is much more strongly correlated to that person’s social status than compensation (which pertains to happiness outside of work).

A company owner has a great deal of DSS. He’s the boss, and if the company fails, he’ll be an employee somewhere else. People with a lot of connections and credibility within the company have high DSS. Most entry-level employees start with their DSS at zero. They probably have some social status (enough to justify having a job there) but no more than they would in another company. Incidentally, most employees find that, after a few years, their DSS at a given company is negative. They might have had a couple promotions, but their next big promotion is external. They leave.

What does this have to do with “one thing only”? Well, most employees in a company are going to put some effort into internal networking. They’ll take interest in cross-hierarchical work to make themselves visible to other teams. In part, this is about building up credibility, or DSS. More to the point, they also do this so that, if things go sour where they are, they can transfer. They’ll look for a “backup boss”– someone with managerial credibility, but outside of their direct reporting line, who can vouch for them– or five. In a traditional managed setting, the employee has serial dependencies: he depends on his relationship with his boss, who depends on the relationship with her boss, and so on, up to the company’s relationship with the market. Any one of these links failing can mean The End. It’s like ice climbing, in which you can do everything right, technically speaking, but still die if the ice calves because of an earthquake. Smart employees are going to build parallel support into their networks, building up a positive DSS and a web of connections to protect themselves if things get hairy.

In a “one thing only” company, this cross-hierarchical work is discouraged, at least for junior people. Their only allowed source of social status is the relationship with the manager. If the employee is tapped as a “protege”, then there is positive DSS, but managers rarely do that for obvious reasons (the other reports will call it “playing favorites”, which it is). Thus, the relationship with the manager does not provide positive DSS, unless the manager has unusually high clout. Typically, the manager/subordinate relationship delivers zero or negative DSS.

Solution

I don’t think a “one thing only” policy works, as much as I admire the endeavor to eliminate the mediocrity inherent in corporate multitasking. If I were running a company, I would adapt it as follows:

  1. Your job is to excel at (at least) one thing. You’re allowed to work on multiple things, but if you’re junior, you should have one main target (because excellence may be harder than you expect). You might also be mediocre at your assigned work, and that’s okay. The question I care about is not, “What do you do?” but “What do you do extremely well?” If you can’t answer that, the next question is “What will you do extremely well in the future, and what’s your plan to get there?” It might be something different from what you were hired for. That’s also fine. The first casualty of battle is the plan.
  2. Make your work useful. Finding something you excel at is important, but you also need to make your work useful to the company. If you excel at something that nobody wants or needs, then you’ll have to find something else.
  3. Excellence takes time, and that’s okay. No one’s expected to deliver meaningful weekly results. When you start, your job is to find something you can excel at, not show “contribution” from the first week, because everyone knows that real contributions start in the mid-single-digit months. Your task is to start looking. Talk to people. Figure out what you might be interested in, and what you might do well at, and go from there.
  4. If you try to excel and you fail, that’s okay. What you want to do may be impossible, or unaffordable. It may take five times as much time as you or anyone else thought. You may not have the ability or experience that’s necessary. Or you might produce something excellent that no one wants. If you do all this in good faith, you’re not fired. You shouldn’t even be embarrassed. You should be proud of having had the courage to try. Learn from the experience, write a couple internal papers to share what you learned, and move on.
  5. Impact reviews instead of performance reviews. You will never have a "performance" review. Why? Because the language of "performance" is fucking insulting. It means an employee isn't getting the benefit of the doubt. Unless I know you extremely well (and if I'm running a company, I probably have too much on my mind), I am just not qualified to evaluate you for "performance". Performance is personal and pejorative; impact is observable and objective. We will discuss impact. What is the actual effect of your work on the larger organism? The goal is to improve motivation (by reminding you that your work matters) and align interests, not to put you on trial. Also, whatever is discussed will never be used for personnel action (favorable or adverse). Its purpose is to remind you of the larger system in which you work.
  6. I will fire you only if you can’t excel. Low impact isn’t personal, and it’s not a problem if it happens in good faith. Most companies play “heads, I win; tails, you lose” by firing low-impact (or politically unsuccessful, which is more often the state of things) personnel while keeping over 99% of the proceeds (with the piddling “performance bonus” that is a tiny fraction of that work’s value) when their employees generate a high impact. That’s pretty goddamn scummy. Now, if I’m running a company, I am in the business of capturing the proceeds of creative, excellent work so I will be capturing and reinvesting the bulk of the profit. An employee who saves the company $10 million might get (in addition to a career-altering promotion) a $250,000 bonus– better than most companies give, but only 2.5%. However, if I’m taking the bulk of an employee’s upside in good, high-impact years, then I will also be taking on some of the downside in bad years. It’s only fair. So the only people I’ll fire are people for whom I cannot see any possibility that they excel within the context of the company. They will have three options. The first is a severance package, calculated based on the expected length of a job search, in exchange for non-litigation and non-disparagement. They can take that, resign that day, and leave in two weeks (to avoid the embarrassment of an abrupt termination). They will be allowed to represent themselves as employed during the severance period, and we will agree upon a positive reference, in order to ensure that they move along with their careers and have a chance to excel somewhere else (or not excel, as it’s no longer my problem). Their second option is to work out a transition plan, in which they hand over their duties properly, and which includes the allowance for time to conduct their job search on “company time”. I will even give them a private office where they can take phone interviews, and I won’t count on-site interviews against any vacation policy (if one exists). Their job is to find another job, peacefully and seamlessly. Their third option is to excel at something I didn’t think of, and prove me wrong.

The policy shouldn’t be “one thing only”. That can rapidly become a political mess. It should be: excel.



Fundamental Subordinate Dishonesty


This essay has two parts. The first is about the ethics of lying in the context of a job search, whether on a resume or in an interview. The second focuses on a falsification that is not only massively common but also socially accepted: the Fundamental Subordinate Dishonesty.

I realize that this opinion is not without controversy, but I don’t find myself up-in-arms about most resume lies. In the professional world, that seems to be considered one of the worst things people can do, leading to immediate termination, even without any other context. In my opinion, much of this attitude is overblown, a high-horse position taken because, since there is so little in the way of actual ethics in the white-collar corporate world, the appearance of ethical “above-boardness” becomes critically important. 

Let me make myself clear on three things, however. First, I don’t lie on my resume. Why? I don’t need to, and the consequences of being caught are severe. There are annoyances in my career history, but nothing so damaging that I have to hide it. Second, I’m using “resume lie” as a shorthand for dishonesty that occurs in a job search, so lies on an interview would count. On that, it’s tactically a lot better to keep that resume factual and truthful and deliver whatever inflations you need to make verbally, if you can. Third, there are two categories of this type of lie, one of which I consider extremely unethical, and the other of which I don’t care about. Feigning competences that a person doesn’t have is charlatanry and a form of fraud. That can be extremely harmful. A person who lies about having a medical degree (a “quack” doctor) is hurting people, committing a dangerous fraud, and deserves to go to prison. There’s no question that charlatanry is unethical. It’s completely unacceptable. The other category of career inflation is what I call cosmetic inflation. When a person represents work experience honestly but inflates the title, or alters dates of employment by a couple months to cover an embarrassing gap, he’s not trying to get a job he can’t perform. He’s just inflating his own social status– in truth, mostly lying about performance reviews, which falls into the “who gives a fuck?” category as far as I am concerned– in order to bring his image in line with his capability.

As for when such cosmetic inflations– fudging titles, knitting dates together, representing a termination as a voluntary resignation, and upgrading "performance-based" compensation to the top bracket– are advisable, I can't say. I don't do them, and I have no data. The downside is obvious– you can get just as fired for an ethical lie as for an unethical one– and the upside is often unclear. Is someone really going to be treated better when he self-assigns the promotion from AVP to VP? I have no earthly idea. What's obvious to me is that there's nothing ethically wrong with this sort of thing. With such falsehoods, people aren't faking capabilities they lack, but improving their stated political trajectories in the direction of what's socially optimal. If it's unethical to lie about social status and past political success, then the whole world is guilty.

Companies worry a lot about resume lies, and understandably so, because I imagine they're common given the stakes. So I asked myself: do they lose money to these? How much? That said, I'm focusing only on the cosmetic brand of lie: upgraded job titles and date-fudging, not actual fraud. I'm not talking about the objectively unethical and fraudulent lies (charlatanry), because (I would hope) only a very small percentage of the population is depraved enough to attempt them.

Perversely, I can actually see companies winning more than they lose from cosmetic inflations. Why? One of the major causes of corporate inefficiency is the lack of trust. Most people are stuck in junior roles below their level of ability (and, therefore, producing less value for the company than they otherwise could) because they aren’t trusted. They have the capability but not the credibility. The existence of outright fraud is obviously a part of this problem, even though psychopaths are often skilled at office politics and can easily win these cosmetic awards (such as job titles). Cosmetic dishonesty, perversely, might be the cure. It sounds ridiculous that I would be advising outright (if harmless) lying as a remedy to a trust problem (although I think this is an absurdly common social behavior) so let me give a concrete example. Bob is a 34-year-old banker who was an Associate for 5 years, and then laid off when his bank hit a rough patch. In his job search, he represents his work experience, and strengths and weaknesses, honestly but upgrades his political success by claiming to be a VP (in banking, “VP” is a middling rank, not an executive) who is still with the firm in good standing, and simply looking for new challenges. He gets another job, and performs well there. How much damage does he do? He might receive $20,000 more in salary from his next employer on account of his falsification. Big deal: if he’s actually incompetent, they can fire him and cut their losses. Chances are, he’ll deliver over $300,000 of additional value to his new employer on account of being trusted to perform better work. He is lying to his future employer, and making a huge return on investment for them.

What Bob is actually correcting is a cognitive flaw of humans whereby mediocrity in past social status is conflated with ethical depravity. This made sense in evolutionary times, because people (at least, men) of low social status had to subvert the existing order, often in violent ways, in order to reproduce. The alpha ape needed to watch out for the gammas, who might attempt to compensate for their lack of physical superiority by trickery, thereby killing him. It's less relevant in modern times, when there is a nonviolent and socially positive solution to low social status: move somewhere else and reinvent yourself. If anything, I think people who do this are more likely to be ethical. Rather than do whatever it takes (read: cheat) to win where they are, they walk away and find another game.

As an aside, most of these cosmetic lies aren't dishonesty so much as insubordination. When someone upgrades his title to reflect a promotion that he was passed over for, is he lying? One could argue so, but one could equally convincingly argue that he's merely de-legitimizing the managerial authorities that delivered a negative assessment of him. They called him a loser and he's saying, "No, I'm not". Most job titles don't reflect objective accomplishment but political success in any case, and who's to say that the inflated title isn't a more accurate reflection of who he is? He's clearly showing a lack of subordination to his ex-managers, but why shouldn't he? What he is really doing by inflating his title is counterfeiting a social currency that he believes to be illegitimate anyway. Very little harm is done.

So what is the costliest of the cosmetic lies? Are there any that lose money for employers? The answer I’ve come to is that there is one, but it’s (a) not a traditional resume lie, and (b) so common that it is not conventionally considered unethical: the Fundamental Subordinate Dishonesty.

Subordinate dishonesty is a term I use for the often omissive deceptions that people have to use in order to survive a subordinate role in an organizational hierarchy. These aren't ethical faults, because they're often adaptations required for survival. For example, I learned early in my career that it is never acceptable to voice the opinion that your assigned project will not pan out, even if that is obviously true. If I hold that opinion, I keep it to myself. Even if true, "This project is never going to work, and here's why" is a one-way ticket to Firedville. The basic, underlying principle of subordinate dishonesty is "never bear bad news". You have to know your boss extremely well, and usually this requires a pre-existing friendly relationship, before you can trust him enough to associate your face with negative news.

There is one variety of subordinate dishonesty that is especially common, and I’ve given it the name above: the Fundamental Subordinate Dishonesty. This is when a person represents himself as being happy to take a subordinate role. This is very common on job interviews. Most people wait too long to look for jobs, and are semi-desperate by the time they “officially” hit the market. By this point, they exaggerate their willingness to take on junior, low-impact roles or positions far outside of their desired specialty. They’re just happy “to be a part of something great”. This isn’t an intentional dishonesty, so much as a self-deception repeated often, because it’s socially attractive. The problem is that it doesn’t last. After a few months in a new job, the “honeymoon period” ends and people will no longer be happy in a role where their credibility is below their capability. However, when people are either desperate or (more commonly) think they are, they will frequently overestimate and, thereby, overrepresent their willingness to fill such a role “as long as it’s needed of me” in order to “just close” an offer. But once they are settled in and no longer in fear of unemployment, they become agitated in a way that they didn’t predict (at the time, they were just happy to get out of the mess they were in) but that should have been predictable.

If I’m looking for a big-ticket loss in terms of resume lies, I don’t think inflated titles or padded dates do much damage. At their ethical worst, they’re a zero-sum game that is to the disadvantage of truthful nonparticipants, like me. (I have no ethical problem with cosmetic inflation. My decision not to take part is strategic. My career history is above-average so I gain little and risk too much.) Being in that category, I’m OK with this “loss”, because the companies where I might not get hired are those that value things (like past job titles) that I find to be imbecilic noise. It’s better, for the world, for that company to hire an (ethical) liar and trust him so he can get his job done than for it to hire me and trust me less because my (truthfully represented) social signals are not as strong. This “victimless” crime that the liar is committing doesn’t bother me much.

Instead, the trillion-dollar killer is the Fundamental Subordinate Dishonesty. It creates massive unhappiness and turnover.

For all this, I don’t want to make it sound like employees are dirty and companies are clean on this one, because that’s not true. In fact, employers have a history of over-representing the autonomy associated with a position and the challenge of the work just as brazenly as job applicants overrepresent their will to subordinate. Sometimes this is unintentional, and sometimes it’s overt. In general, I’d say that it’s aspirational. A manager’s sense of time is distorted by the need to deal with multiple concurrent and often conflicting story lines, which is why an employee who asks for more interesting work at the 4-month mark is doing so “way too soon”. So managers often present a job role, in terms of authority and creativity, based on where the employee “should be” after a “short time” (read: 2 years on busy-work that’s mostly evaluative because it’s not critical to the business) if that employee “proves himself” (read: doesn’t piss anyone off). The problem is that very few people are willing to be outright bored for two whole years. This begins as benign aspirational hiring, but grows into something else once a company goes into full-on recruiting mode. I’ve seen companies (mostly, crappy startups) that literally cannot function because over 80 percent of their people were promised leadership roles and nobody is willing to follow.

This is also why, as I've gotten older, I advise people to pay close attention to salary when comparing job offers. I used to say that it was worth it to take a low-salary job for "interesting work". I'm not so sure anymore. If you're going to be a founder of a startup, with real (5% or more) equity and standing in the business, then this may be an exception. Otherwise, you're often better off looking at the hard number to size up what the company really thinks of you, unless you have solid data. Don't trust intangibles on a "just-said" basis, because employers lie about those all the fucking time. This is the mirror image of the Fundamental Subordinate Dishonesty. The employer represents a low-level position within the hierarchy as being better than it actually is, and the employee represents herself as being content with a lower level of autonomy than she'll actually accept. Eventually, both sides find themselves dissatisfied with the relationship because the other party "should be" picking up on their real desires. It should be no wonder that this commonly leads to a theatrical crash.

So what is the solution? I think the answer has several parts. First of all, what actually makes people happy at work isn't the salary. Compensation makes people happy outside of work. When at work, it's social status. People put an optimistic spin on this in accord with what's socially acceptable, but the reality is that people don't like jobs where they can't get anything done, and it's impossible to have real achievements in a position of low status. So an organizational hierarchy that leaves most people with low status is going to make a lot of people unhappy. This was not a major issue with traditional industrial work, where unhappy workers were only 20 or 40 percent less productive than happy ones, but for modern technological work, that difference is at least an order of magnitude. Hierarchical companies are losing their ability to perform at the highest levels.

I see no conclusion other than that corporate hierarchy is obsolete and increasingly incapable of solving modern technical problems. The subordination on which it relies is fundamentally dishonest, because no one is content to be held artificially at such a low level of impact. People will follow orders in the context of a symbiotic mentor/protege relationship that advances their careers, for sure. They will also follow another's lead on a specific project when they believe that other person has a better grasp of the problem than they, themselves, do. What they're not willing to do anymore is be truly subordinate: to follow rank because it is rank, and to let their own long-term career goals take a backseat to professed organizational needs. Why? Because when you accept this, you reduce your long-term career prospects, and are effectively paying to keep your own job. That era is finished. Yet social protocols remain and require a certain dishonest signaling about the willingness to subordinate, leading to failed communication, overblown expectations, and festering unhappiness.

If there’s a job search lie that’s a billion- or trillion-dollar killer, I would say that the Fundamental Subordinate Dishonesty is it.


The unbearable B-ness of software


I'm not Jack Welch's biggest fan. For one thing, he invented the "rank-and-yank" HR policies that literally decimate companies. I don't disagree with the idea that companies would improve their health by letting go 5-10 percent of their people per year, but I think the discovery process involved is impossible and often politically toxic. There is, however, one thing he's said that I think has a lot of value: "A players hire A players; B players hire C players". His point is that if you have mediocre management, you'll actually end up with terrible employees. I would say it's not limited to hiring. A players make more A players. They teach each other how to be better. Not only that, but they raise the potential for what an A player can do. B players don't have the foresight or "ownership" mentality to mentor others, and produce non-productive C players.

The insight I had recently is that this applies to software as well. "B" architectural decisions or tooling choices, which seem mediocre but tolerable at the time, create "C" (ugly, defective, or unusable) software. Software contributions often have, not always intentionally, a multiplicative (or divisive) effect across a company, in which case a single programmer can become "manager-equivalent" in impact. This is something that most companies, especially outside of technology, fail to realize about software, and they miss it at their peril.

I’m increasingly convinced that it’s hard to be a great programmer and not be, at least, a half-decent leader. This doesn’t mean that one needs to be a “people manager” or even have any interest in taking that direction. However, a great software engineer is:

  • a decision-maker, because engineers frequently have to choose technologies and make infrastructural choices under conditions of incomplete information.
  • a creative problem-solver, because for the hardest problems there is no “canned” solution. In fact, for some problems (as in machine learning) the best solution may not be known, and simply designing the experiments is a non-trivial problem.
  • a designer, because the ability to create attractive, robust and maintainable software systems is uncommon, and the work is non-trivial.
  • a teacher, because good engineers don't just leave a wad of undocumented code lying around, but take responsibility for making sure that other people can use it.

How is all of this, when done right, not a leadership role?

Of course, software engineers are not in general treated like leaders in any large company that I know of, and a fair proportion of the people who are hired into software positions just aren't capable of being leaders. Still, there's an overwhelming and self-perpetuating culture of B-ness in software, with engineers not trusted to pick their projects and choose their own tools. This culture of mediocrity is the one from which what I called "Java Shop Politics" emerges. I regret the name, however. It's not fair to single out Java, especially when it was Microsoft, with Visual Basic and the first scratches at IDE-culture, that first attempted to create the commodity programmer world. A better name would be "Big Software Politics".

I would replace Welch's A-, B-, and C-player language with a four-class system of dividers, subtracters, adders, and multipliers. I've split the "C" category into the ones who are truly toxic and hurt others' productivity (dividers) and the more harmless people who just don't get much done (subtracters). Dividers, I think, should be fired quickly if they don't improve. The only crime of subtracters is to draw more in salary than they produce, but it's worse (for morale) to fire them, so they should be mentored into adders and (eventually) multipliers whenever that is possible, and gently let go if it seems not to be. Ultimately, no company should retain an employee who doesn't have the drive and capability to become a multiplier, but it takes time for a person to get there, and firing is an extremely blunt instrument. In general, I wouldn't fire anyone but a divider.

"B-player" and "adder" seem to correspond neatly, as do "A-player" and "multiplier". People in the first category can crank out CRUD apps just fine, and write software to spec, but lack the architectural or design skill to build much on their own. Adders are the workhorses who are capable of implementing others' decisions but unready to make their own, while multipliers deliver growth by making others (and themselves) more productive through their (often far-reaching) contributions.

Management is especially challenging and dangerous because it becomes impossible, almost by definition, for a manager to be a mere adder. A manager's job is to alter the way people work, and as with stock traders, "random" moves have negative expectancy: every change carries friction and disruption costs even when the change itself is neutral. The percentage of people who have multiplier-level knowledge, talent, or skill is small– maybe 10 to 15 percent, in a typical company. Managers who don't have that capacity become dividers who add noise and entropy.

Programming, the art of managing machines, is much the same way. There are always a few junior-level, self-contained additive projects in every company, but the core infrastructural work that will be used by a large number of people is multiplicative– if done well. Done poorly, it reduces capability and has a dividing effect. How is it that software– typically construed as an asset– can have such a divisive effect? The problem is management. When people don’t freely choose what tools they use, and what software they rewrite as opposed to what they deal with “as-is”, low-quality software combined with managerial blessing will lead to unproductive and unhappy programmers.

At the beginning of this year, I developed a scale for assessing the capability of a software engineer, and I’d formalize it a bit more with a model that first separates software work into three levels:

Level 1: Additive work, such as scripts to generate reports or CRUD business apps. This will typically be written once and read only by the original code author. Relevant question: can you code?

Level 2: Multiplicative work, such as tool development and critical production infrastructure in which performance, scalability, design and code quality matter, because large numbers of people rely on the work. Most of the “sexy” problems fit in this category. Relevant questions: does your work make others more productive? Can they use it? Do they enjoy using it?

Level 3: Globally multiplicative work, such as the design of new general-purpose languages. A level-3 accomplishment needs to be “best in class”, in some way, on a worldwide basis because its purpose is to push forward the state of the entire industry. Linux and Clojure are some examples of level-3 achievements. Most of this work is R&D that few companies are willing to pay for, these days. Relevant question: are you doing original work that increases capability globally?

As with any model, this one is chock-full of simplifications. In reality, there are hard L1 tasks that might be rated above 1.5 and easy L2 jobs that might be "only" 1.7 in difficulty, but for simplicity I'll assume that tasks can be neatly "bucketed" into one of these three categories. The working assumption is that a programmer's level represents the level at which she will make the right calls 95% of the time. For a level-2 task, the 2.0 programmer will succeed 95% of the time, the 1.5 programmer 50%, and the 1.0 programmer 5%, with an "S-shaped" logistic interpolation giving meaning to the fractional levels (e.g. 1.1, 1.9); a sketch of one such curve follows the list below. In practice, these concepts are too difficult to define for formal measurement (making it useless even to attempt precision beyond one decimal place), and the bulk of professional software engineers are between 1.0 and 2.0. While it's difficult to apply percentiles to software engineering, the population being ill-defined, I'd estimate that:

  • the median full-time professional software engineer is about 1.1. Many senior (20+ years experience) engineers never crack 1.2.
  • graduates of elite computer science programs are about 1.3.
  • about 1 in 10 professional software engineers are at 1.5 or above.
  • about 1 in 100 software engineers are 2.0 or above.
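
To make the interpolation concrete, here is a minimal sketch of one curve consistent with those anchor points. The language (Python), the function name, and the exact steepness constant are my own choices; the text only fixes the 5%/50%/95% values for a programmer a full level below, half a level below, and at the task's level.

```python
import math

def p_success(programmer_level: float, task_level: float) -> float:
    """Logistic ("S-shaped") interpolation anchored at the figures above:
    at the task's level -> 95%, half a level below -> 50%,
    a full level below -> 5%."""
    midpoint = task_level - 0.5   # half a level below the task gives 50%
    k = 2 * math.log(19)          # chosen so that 1 / (1 + exp(-k * 0.5)) = 0.95
    return 1.0 / (1.0 + math.exp(-k * (programmer_level - midpoint)))

# Reproduces the level-2 example from the text:
for level in (1.0, 1.5, 2.0):
    print(f"{level:.1f} programmer, level-2 task: {p_success(level, 2.0):.0%}")
# 1.0 -> 5%, 1.5 -> 50%, 2.0 -> 95%
```

Any S-shaped curve through the same three points would serve equally well; the model claims only the general shape, not this particular formula.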

In the corporate world, level-3 software considerations are generally irrelevant. Such efforts tend to have an R&D flavor, and there’s rarely the budget (the limiting resource being the time of extremely high-power people) or risk-tolerance for a company to attempt them, so levels 1 and 2 are where the action is. You could safely say that level-1 work can usually be done by an adder or “B player”, while level-2 projects require an “A player”, “10X-er”, or multiplier.

Companies and software managers know, from extremely painful (read: expensive) experience, that level-2 work is hard and risky, and that most professional software engineers lack the competence to do it well, and even fewer are capable of supervising such work. The result is that they try to minimize the amount of such work, and the degree to which they’re willing to rely on it. If one “unit” of level-2 work can be replaced with four “units” of level-1 work, that seems like the prudent choice, because it’s astronomically easier to hire 1.0-1.3 programmers than to vie for 1.8-2.1, who can only be detected and assessed by other great programmers. This is the essence of the “commodity programmer” culture: create a world in which as much of the work as is possible is level 1, and allocate the level-2 work only to people with a “track record”, an assessment that often has more to do with politics and social position than capability.

What goes wrong? Well, the first problem with commodity developer culture is that the bulk of engineers living within it never improve. They stop learning, because there’s no need for them to progress. When companies staff people on subordinate, bite-sized work below their level of capability, they get bored and often decline as time goes on. If you don’t have 1.5+ level work, you’re not going to have many 1.5+ engineers, and you won’t keep any of them for very long. If what high-level work you have is jealously guarded and allocated politically, the best engineers won’t stick around for years to prove themselves adequate for it. The ones without other options will.

The second problem is that projects often become level-2 by accident, and also that level-2 needs tend to emerge once the complexity load of all the level-1 work reaches a critical mass. This is akin to Greenspun's Tenth Rule, which essentially states that when low-level languages are applied to complex problems that require a more "high-level" approach, people end up reimplementing, in ad-hoc fashion, the features that already exist in high-level languages like Lisp. This shouldn't actually be taken as a slight against C: for low-level (here, "low-level" pertains to the degree of abstraction that is used, and is non-pejorative) problems where memory management is critical (yes, plenty of those still exist) it is often the best language, but you wouldn't want to write a complex, modern web application entirely in C. In any case, what this "Rule" is really about is the emergence of complexity, driven by need. Lisp is a well-thought-out high-level language (a level-3 accomplishment) and almost guaranteed to be superior to the domain-specific language (DSL) that a typical corporate programmer, constrained by deadlines and managers and low autonomy, would "inject" into a C project to add high-level functionality (on an ad-hoc basis) to it.

I think Greenspun’s Tenth Rule can be generalized. At some point, the complexity load induced by level-1 requirements and work requires level-2 insights and capability. The problem with the commodity developer world is that, because level-2 work is effectively not allowed, the stuff that does happen at such level occurs accidentally in an ad-hoc way. A manager might see a neat script (a level-1 accomplishment) written by a junior developer and say, “This whole company should have that. By next Friday.” However, the developer lacks both the engineering skills and the political clout to recognize bad requirements and reject them, the result being an overblown, late, and unmaintainable system that serves many needs poorly instead of doing a few things well. Such are the good intentions that pave the road to hell.

All of this, I think, explains the sorry state of the software industry today. The business world understands software’s problems at a high level– most software is shitty, late, and over budget– and (correctly) concludes that the majority of software developers lack the skills to attack problems at the multiplicative (level-2) scope, while most executives are incapable of assessing software talent at the individual level. The result is a bureaucratic, creativity-killing culture that is tailored to software engineers in the 1.1-1.3 range, a world designed to protect itself against B players. The long-term problem is that this makes it nearly impossible for most software engineers to become 1.5+ A-players. One almost certainly won’t achieve it during one’s day job. Mediocrity self-perpetuates. What companies really need to do is give software engineers a creative “sandbox” (an R&D-like environment) in which they can attempt 1.5+ projects, and if they fail, no one gets hurt. However, I wouldn’t hold my breath for this, because most managers are going to see this as “not doing real work”.

The result is a culture of “B”-ness in software. That would be acceptable, but the reality is that level-2, “A player” needs emerge anyway as a result of the need to manage (and, preferably, reduce) the accumulating complexity that additive software generates. Unfortunately, this commodity developer culture is utterly unprepared to approach such problems, and fails miserably at them. The result of this is the slew of “C” (failed) software projects that will litter any corporate environment given enough time.


Flow, ownership, and insubordination (plus D.F.A.)


I wrote, in October, about what programmers want from their work, and one of the most immediate concerns is a state of consciousness called flow: a state of engagement, focus, and high productivity. Flow rarely occurs in the workplace, and one inherent problem with it is that its behavioral byproducts are often not office-appropriate. People in flow tend to lose their self-monitoring capability and exhibit behaviors (humming, tapping, even whistling) that are undesirable in such a setting. In these hideous open-plan offices that are increasingly common in technology, that’s a real issue, because one person’s state of flow will drive N-1 office workers from typical, benign office boredom into crippling, distracted anger. It becomes a morale problem, and this person needs to stop doing that. People who want to keep their jobs in such an environment will never go 100-percent into flow, unless they’re the only person there.

The lack of flow is acceptable in most office environments because they aren't really built around optimizing productivity, but around the availability of the workers. Subordinate employees are, in essence, excess capacity to be tapped in a crisis, but otherwise they are there just to be there, preferably arriving before and leaving after their bosses. They're not supposed to actually matter. The work they do often isn't construed as important to the business (even if it is) but as a supportive "nice-to-have". They might work tirelessly to build a platform for data-driven decisions, but the real players are the ones who read the reports and make the calls (usually, selectively comprehending the data to support decisions they've already made). Subordinates are kept around in a years-long evaluative period so that, if needs grow in the future, the ones who have proven themselves (read: succeeded politically) can be picked.

A low-level, subordinate role in a business organization involves low expectations and minimal responsibility. The job is: (a) keep your boss happy, and (b) manage your image and, over time, develop a reputation for being reliable and steady. It’s not difficult but, in the long term, it’s unfulfilling, which is one of the reasons why people have a hard time with the slow progress of the typical corporate track. A lot of people lack the patience to deal with the “dues-paying” years required to gain credibility in the typical lethargic, risk-averse environments that characterize Corporate America. The subordinate life is one of low expectations, but also one of little real accomplishment, which means that a person rarely becomes indispensable and can therefore be let go quite easily. It’s an uneasy existence, because a person in such a state often feels like an underachiever, based on a comparison between what is produced and what is possible. That said, one of the most important corporate survival skills is making sure this gnawing sense of guilt does not manifest itself in a visible or socially inappropriate way. Everyone ambitious feels this way. Let it go, and do not take it personally. Corporations are mere patterns of coming and going, devoid of real substance and unworthy of emotional attachment.

This experience is especially painful for top-tier software engineers and technologists, because our natural inclination is to improve things. What we do isn't just "programming". It's process improvement. Programming is just a means of automating work that used to require human labor, or that humans couldn't do (in a timely fashion, without mistakes) before. We like to make things better: not only to solve hard problems, but to solve them so well as to make them easy. It's what we do. When we see a ghastly inefficiency, we try to fix it, politics and sensitivities be damned. When the inefficiency being attacked is the enforced underperformance of new members of a group, due to that group's unwillingness to trust newcomers, our campaigns will seem ugly to people who don't understand the technological impulse. As we see it, it's an earnest attempt to improve processes, but the appearance it creates is that of an entitled youngster trying to jump the queue. It appears insubordinate, and perhaps it is.

Anything that attempts to improve the productivity of subordinate players is inherently subversive because, while it may improve the aggregate productivity of the organization, it also represents a shift in power that the entrenched leadership often does not want.

In addition to the subversive nature of high productivity, the state of flow is itself insubordinate. One Spolsky snippet that has stuck with me for a long time is from his “Guerrilla Guide to Interviewing” wherein he describes an ideal interviewee:

A really good sign that a candidate is passionate about something is that when they are talking about it, they will forget for a moment that they are in an interview. Sometimes a candidate comes in who is very nervous about being in an interview situation—this is normal, of course, and I always ignore it. But then when you get them talking about Computational Monochromatic Art they will get extremely excited and lose all signs of nervousness. Good. I like passionate people who really care. (To see an example of Computational Monochromatic Art try unplugging your monitor.) You can challenge them on something (try it—wait for them to say something that’s probably true and say “that couldn’t be true”) and they will defend themselves, even if they were sweating five minutes ago, because they care so much they forget that you are going to be making Major Decisions About Their Life soon.

What Spolsky is talking about here is flow. The job candidate forgets that the interviewer is a prospective boss who might even decide not to hire her at all. The subordinate context has disappeared. There's something more important– more intellectually interesting– than the potential for an economic relationship. She just dives into a topic she finds intellectually interesting. This is an attractive trait– in this context. She shows that there are intellectual and creative disciplines that she cares too much about to play the status-optimization game of most working adults. Certainly, having this passion is a way to stand apart from the crowd.

On a related note, one frustration that upper managers and business owners experience is that their underlings don't "own". What is ownership, in this context? As a first scratch, I'd say that an owner perceives the interests of the group as being more important than immediate personal comfort or benefit. It's not that owners selflessly prioritize the group's objectives over personal ones, so much as there's a well-understood alignment of incentives. Literal owners (stockholders) often benefit directly, in a measurable financial way, from the success of the business, so they're willing to make sacrifices (at least in theory) that typical employees wouldn't make on behalf of the organization's health. An owner will drive an organization to improve, despite personal exertion and risk, and take responsibility if it fails.

Of course, financial investment is not the only form of ownership. For example, financial ownership doesn't apply to all organizations, because the concept doesn't exist in the non-profit sector, but people still take responsibility for causes they believe in. I would say that ownership derives from two related factors: (a) differential social status, and (b) emotional commitment. For the first, people are inclined to be invested in an organization if it affords them higher social status (which may or may not manifest itself in financial wealth) than they would otherwise have. Differential social status (DSS) exists when a person has a substantially better role (higher pay, more autonomy, interesting projects) at that company than would be available anywhere else. CEOs and owners and founders have it, because they'd be applying for more subordinate positions if their companies vanished, but most corporate employees' DSS begins at zero (what they get is no better or worse than is available on the market) and becomes negative as time goes on (because external promotions are often easier to get than internal ones). People on whom an organization confers DSS will fight to keep it alive, while people with zero or negative DSS, if rational, are only interested in their personal careers. The second element, emotional commitment, pertains to a genuine belief that what the organization does is good for the world, which will also motivate a person to participate in its upkeep.

When executives complain about a lack of an "ownership mentality" in their subordinates, they recognize that most people don't really care about the health of the companies where they work. It's an acknowledgment that most people are idiots, in the original (ancient Greek) sense of the word, whereby "idiot" meant not a stupid person but, literally, a private individual. The idiot, in the original Athenian context, was a self-centered person with little interest in public, global, or historical affairs. Idiots are not necessarily stupid, but small-minded. They only care about issues that directly affect them.

Is there anything wrong with being an idiot, in this sense of the word? I would say that, in the corporate context, it's a survival skill. People who stand up for unjustly fired interns, or who criticize unethical practices in the open, are not likely to remain employed for much longer. The opposite of idiot is citizen, but corporations are not designed for citizenship. In fact, there is a major difference (in governance) between the ownership and citizenship models. In the first, voting power is allocated in proportion to financial investment and the common outcome is oligarchy; in the second, all people who are deemed to be voting citizens have the same amount of power: one vote. Private corporations are clearly not in the second category (citizenship), but even their commitment to the first (ownership) is limited. At the board level, they are ownership-weighted republics that put a great deal of effort into appearing stately and refined. However, with regard to internal employee affairs and day-to-day management, they're dictatorships in which most people can be unilaterally banished by a single superior officer for any reason. What point is there in pretending to "own" one's work when one can be unilaterally divorced from it at will?

When executives complain about a lack of “ownership” in their subordinates, what they really want to see is more akin to a “citizenship” mentality. Owners assert power and call shots; citizens serve and clean up messes. That said, citizenship makes no sense when very few of these organizations have the desire to implement the political structure that would make it a reasonable behavior. If you want citizens instead of denizen workers, you have to cede power to them. They need to have a say in their working conditions. (Compared to typical corporate serfdom, this would be positive differential social status.) Almost no company is willing to do that. They create a world in which self-interested idiocy (in the non-pejorative sense above) is the fittest strategy.

Companies have tried to rectify this through stock sharing programs, but the amounts they allocate to mere employees are so low as not to have an appreciable effect. A 100-person startup might offer 0.05% of the stock, vesting over 4 years, to new hires. (Actually, it's more likely that there will be an options program, for tax reasons, but I'm avoiding that for now.) A 0.05% slice of a company worth $100 million is worth $50,000. Vesting over 4 years, that's $12,500 per year. That's just not enough of a stake to convince anyone credible to subordinate his career goals to the corporate interest. If anything, it makes people more self-serving, because subordinate employees who think the company is a "rocket ship" are going to do everything they can to get an executive position with real equity (although their odds are terrible) while the more pessimistic, realistic or agnostic employees will just view it as regular compensation. These token slices don't turn people into conscientious owners, and they're often inadequate to cover the differential between the startup's salary and what the employee would earn in a more stable job at a "blue chip" company.
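
To make the arithmetic explicit, here is a back-of-the-envelope sketch in the same spirit. The valuation, grant size, and vesting period are the example figures above; the salary gap is a hypothetical number of my own, included only to illustrate the comparison.

```python
# Example figures from the text; the salary gap below is a hypothetical assumption.
equity_fraction = 0.0005            # 0.05% grant
company_valuation = 100_000_000     # company assumed to be worth $100 million
vesting_years = 4

grant_value = equity_fraction * company_valuation   # $50,000
annual_value = grant_value / vesting_years          # $12,500 per vested year

assumed_salary_gap = 30_000  # hypothetical pay cut vs. a stable "blue chip" job
print(f"Grant value: ${grant_value:,.0f} (${annual_value:,.0f} per vested year)")
print(f"Covers an assumed ${assumed_salary_gap:,.0f} salary gap? {annual_value >= assumed_salary_gap}")
```

With numbers anywhere in this range, the vested equity doesn't come close to covering the pay cut, which is the point of the paragraph above.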

All this said, the argument can be made that corporations are powered by “silent”, conscientious workers who expect little in the way of formal recognition or outsized compensation, but take an attitude of ownership anyway, because they care. Here’s an interesting question: what happens when a person who is not high in the social hierarchy (e.g. not a real owner) takes on this “ownership mentality”? He wakes up one unseasonably warm February morning and says, “Starting today, I’m going to give a shit.” What happens to him after that?

Flow is the immediate-term state of being engaged. It’s a second-by-second trait. I think there’s a long-term analogue of it, which I’m going to call DFA: Done Fucking Around. This occurs when a person suddenly “turns on” to an effort or cause and starts dedicating a lot of focus, energy, and time to it. People refer to it as “stepping up” or (for a leader) “taking charge”. The 9-to-5er is suddenly working till 8:00 and on weekends, even though no one asked her to do so. The previously unengaged programmer is now spending slack time at work on Coursera. When someone is DFA, as with flow, any power relationships and subordinate contexts become almost irrelevant. DFA means doing the right thing, regardless of the power structure and the consequences. It means working hard even if recognition will come late and scarcely. It means actually caring about doing whatever you’re doing extremely well. Kaizen, bitch. It means you’re not just playing subordinate and waiting to be “discovered” by upper management like it’s some kind of movie where The Right Thing happens by chance in a hundred and seven minutes, because the real world isn’t like that. No no no no, sir. You’re Done Fucking Around.

Cinematically, DFA is attractive, even in subordinate workers. He was once an Average Joe, but some event occurred to change his attitude, and now he’s dedicated. He’s going to win. Cue the Rocky music. In the movies, once the DFA light turns on, the path to success opens itself up. There might be a small amount of resistance, but the hero’s only adversary is (possibly) his own discouragement and (intermittent, spanning only one or two scenes) laziness. That’s how the movies show it: the adversaries are too complacent and haughty to put up a fight, until the hero is strong and ready and it’s too late for them. The bad guys are too hubristic to fear him, so they don’t try to stop him while he’s still in his cocoon of industrious, heroic exertion.

What actually happens to a subordinate employee when the DFA light turns on? My experience is that, within 6 months, there’s about a 4-in-5 chance that he won’t be working there anymore. If you go DFA, you had better be getting a couple lines on your resume out of the excess work, because you’ll probably be using it soon. Some get fired, some resign out of frustration, and some just seem to disappear at the height of their energy and prominence. As in the movies, an employee who goes DFA gets noticed quickly. Something’s happening. There’s an HSTG (“Holy Shit, This Guy”) moment. Unlike in the movies, he’s rarely well received. His co-workers are upset at him for making them look bad, and his superiors see him as a credible threat to their authority.

The fantasy is that the DFA subordinate will be “discovered” and find a mentor– someone who says, “You’re better than the role that you’re in, and I’ll help you skip a grade or few”– and they work together at accomplishing what the DFA employee wants to do. The mentor clears political obstacles, and the protege does the legwork. I think it’s very rare that this actually happens. With the employer-employee social contract imploded and corporate loyalty bilaterally abandoned, faith in such eventualities is generally misplaced. It can happen, but it’s rare and one shouldn’t bet one’s entire career on it. What is a mentorship arrangement to the protege is “playing favorites” to everyone else. It’s rare that the benefits of taking a protege outweigh the loss of morale among the less fortunate.

In school, gifted students are allowed to skip grades and take advanced courses with little objection, because (a) over-smart, bored students become disruptive, and public schools can’t fire them, and (b) the other students are not likely to be resentful because, in their view, the poor sucker is just getting more and harder homework (“who’s smart now?”). Yet in the corporate world, grade-skipping is so politically touchy, and so contrary to the “we hire only the best” profession that corporations routinely make, that it very rarely happens. Corporations refuse to track their best people in order to keep consistency with an “all our courses are honors courses” mentality that is inconsistent with most of the work that white-collar, subordinate workers actually do.

Why is it this way? Why are grade-skipping and tracking acceptable in education but not at work? I have a guess. In school, the work is structured and sometimes contrived but it’s not supposed to be boring. Some classes are boring, but that’s a sign of an incompetent teacher rather than an accepted, much less intentional, trait of the work. The difficulty that most people experience in school is supposed to come from the intellectual challenge of the material itself. In the corporate world, the boring nature of the work is a fundamental, landscape feature of the challenge. What makes corporate work difficult isn’t the intellectual or creative difficulty of the work itself, but the combination of boredom and the need to maintain social poise while performing it.

In education, grade skipping means that a gifted student faces more challenging work, thus allaying boredom. However, students of average ability would find this treatment undesirable, because they’d face a more demanding environment. What’s different about the corporate world is that, because boredom is the central nature of the challenge, gifted employees who get special treatment are actually being skipped over the nasty, difficult years of dues-paying, low recognition, and subordinate bullshit. The reaction that others will have is one of indignation: “If I have to keep fucking around like this, why shouldn’t he? Who decided that he gets to be done fucking around and that I don’t?”

Corporations are, in addition to having these inherent structural problems when it comes to talent discovery, information-poor. Any information they get about individual personnel performance is so biased and tainted by politics as to be effectively useless. They might be “drowning in data” when it comes to performance reviews (key-log mining!), but the data is all garbage. What this means is that, regardless of what personnel structure a corporation might want to invent in order to discover and develop talent, its integrity will be corroded faster than the system can reach implementation. In this anarchic environment where no one really knows if anyone else is good at their jobs, much less capable of something better, the power vacuum is filled by a self-promotion system: the DFA light. When you start to care, you turn it on, put your neck out there, and sometimes get tapped for leadership roles but usually get “nicked”.

To turn one’s DFA light on– to truly care about the quality and impact of one’s work– is to issue a challenge. It’s a way of saying, “No, I am not mere excess capacity.” It’s a way of showing that one is no longer willing to sleepwalk from one workday to the next. It can be an immensely powerful statement, but a personally dangerous one to make. One thing I have learned through observation is that, for a person wishing to maximize job security and career continuity, overperformance is more dangerous than its maligned counterpart.

Without much of a transition because I’m running out of time for this post, I should answer a related question: Why am I writing this kind of stuff? I’d have to be a moron not to know that I’m taking a severe personal risk by doing so. (Not a lot of people read my blog, but it’s easily Googleable.) Over the past year, I’ve given people advice (in public and in private) on how to beat performance improvement plans, why various office-political degeneracies exist, and how to build a reasonable HR system (thereby indicting the HR systems that 99 percent of companies have). Part of it is that I’m a competitive person and I like showing up powerful people, even when I’ve never met them. I’m better at their jobs than they are. But that’s not all of it. At least in public, I’m DFA as well. My first campaign has been to generate the vocabulary. Valve’s innovative internal structure is open allocation. I’ve given it a name. We can now discuss it. This is just one move out of millions that the world will require from its most progressive thinkers. I am hardly alone in this, and it’s not clear (yet) whether I’m even an important player.

The truth is that I perceive weakness in the Corporate System. These institutions will be around for a hundred more years, just as large religious organizations outlived Thomas Paine and Ben Franklin (nor did those men want to kill them off; their objective was simply to end political religion). That’s fine. My job is not to kill them. That’s not even desirable. However, I’d like to end Corporate Dominion. Its time is ending, and my generation might be the one to drive in the stake. Or, possibly not. No one can predict the future, and certainly not me. It is, however, worth trying to take the world back. Our job isn’t to eliminate business corporations (which will always exist, and which perform good functions when their power is curtailed) but to wrest from them the power that they abuse: the ability to ruin a hard-working person’s career with a bad reference, the legal “right” to pollute the environment, et cetera.

I’ve had some illnesses and deaths in the family over the past few years. Some were surprising, others were not. But I think it’s hard to get to my age (which is not that old; I’m only 29) and not realize that time is not only finite, but that the remaining quantity of it is truly unknown. I might have over 1000 years left of life, if technological trends play out a certain way. I might have less than 1000 seconds. Who knows?

The Corporate System is predicated on Fucking Around. That’s its lifeblood. It turns democracy into oligarchy by finding efficient ways to corral the cheap votes of people who don’t give a shit. That’s how it works. That’s what it does. In the old-style system, most people fucked around, being digested for 40 years in the vain hope of being “discovered” and made substantial, and that never happened for the vast majority of them. Once fully digested, they were shat out with a gold watch except, in most cases, without the gold watch. The problem is that, for the individual, there isn’t time for fucking around. We don’t know how much we have. It might be nothing. The correct time for fucking around is… the null set.

Ok, I’m done (with this post, that is).


The call for rational economy


If I were to wrap the causes of humanity’s recent political progress (and, quite likely, the scientific and industrial progress that followed) up into two words, I would use rational government. Starting with Machiavelli’s championing of republics (despite his better-known satire, The Prince) in the 16th century, and culminating in the Enlightenment of the 18th, political philosophers began to approach governmental problems with a structural and proto-scientific mindset. The concept that monarchs could rule by “divine right” was discarded, secular governments replaced religious ones, and constitutional government became requisite. It’s easy to take this for granted, but for most of human history, political bodies were ruled by charismatic leaders who would allow no checks against their accumulation of power. “Don’t you trust me, as your rightful king?” (“Well, even if I did, I don’t trust your asshole son who’ll reign after you knock off.”) What makes the 18th-century political philosophers so brilliant is their insight that trusting in people was the wrong way to go, because they tend toward unreliability in the long run as power corrupts and the throne changes hands, and that it was better to build robust structures. Governance, previously existing in the context of a paternalistic reign handed down “from above” and usually justified with incredible supernatural claims, became something people could debate and vote on.

Since then, the West has had mostly libertarian governments, at least by the standards of most of human history. It hasn’t been monotonic progress, but the ideal of rational government is clearly winning. We don’t burn heretics. In fact, most governments recognize the concept of a “heretic” as meaningless. This isn’t to say we got it right from the start, and it’s still not perfect– “sodomy” laws and the opposition to gay marriage are examples of pre-rational hangover– but the ideal is well-understood and people are working toward it. Libertarian government is, by and large, the accepted norm among educated people.

Rational government likely emerged as Europeans became more mobile. Interactions among people from different countries, with radically different experiences of government, fostered an interest in comparison. What are the English doing right and wrong? How about the French? The Italian states? As Europeans developed a more complete knowledge of their history and the variety of political structures, the existing patterns began to look ridiculous. Hereditary divine right went from being seen as a component of a fixed, natural order to being considered a dangerous, reactionary superstition. As an American, I don’t want to overstate my country’s importance, but the United States plays a major role in this trend as well. In the late 18th century, a mix of mostly English Europeans attempted to experiment with rational government on the new continent, designing a country that was, from first principles, devoid of hereditary aristocrats or state religion.

What happens when rational, libertarian government (with low corruption) becomes the norm? The good news is that these governments tend to be fair and stable. There isn’t a lot of corruption or rent-seeking by government officials. It exists, but it’s less severe than it would be in a typical theocratic or aristocratic oligarchy. So you get an industrial, capitalistic economy. (For a contrast, the extortions and bribes required in a corrupt oligarchy retard industrial and entrepreneurial progress.) That’s a good thing, but it brings its own set of problems. One of the major and perennial ones is the ability of businesses to profit, at the world’s expense, by externalizing costs (e.g. onto the environment). There is also the instability, as observed in the 1930s, of hard-line industrial capitalism. Poverty, we learned in the Great Depression, is not some “moral medicine” that makes people better. It’s a cancer that can devour an entire nation. The third problem is that a libertarian government has a hard time curtailing an unchecked corporate elite that emerges in the power vacuum.

Over time, people begin to realize that laissez-faire capitalism is not desirable. This leads to a class of government interventions (social welfare programs, regulation, high taxation rates) typically associated with socialism; in small doses, there’s no question that they’re an improvement. However, a number of supposedly “socialist” governments have proven themselves to be immensely corrupt, brutal, left-wing authoritarian regimes no better than the right-wing dictatorships of old. I don’t think anyone educated would prefer that extreme (statist, command economies) over the current system. Empirically, they don’t work well (see: North Korea). The question of where an economy should sit on this purported spectrum between statist socialism (left) and laissez-faire capitalism (right) remains open.

What is the answer? Well, I think it’s important to look at this with a scientific, data-oriented mindset. I don’t have the kind of data that it would take to find a “closed-form” answer, but let me draw some insights from machine learning and statistics. Modeling approaches tend to be global, local, or some combination of the two. Global methods assert that there is some kind of underlying structure to the problem, and use all the data to build a model. For example, if one were relating latitude to average temperature, a global model would capture the relevant global relationship: polar latitudes tend to be cold, and equatorial latitudes tend to be warm. The vast majority of the earth’s surface would be well-classified by this model. It would, however, misclassify Rome (high latitude, warm) and Mount Kilimanjaro (low latitude, cold). You’d need a richer model (altitude, ocean currents, marine west-coast effects). Linear regression is one of the simplest effective global models, and for a wide variety of problems, it does well.

Local methods of inference, on the other hand, give a high weight to nearby data. The archetypical local method is the “nearest neighbors” approach. This is what real estate appraisers use when they attempt to find a fair value for a house or plot of land: what seems to be the market rate for nearby, comparable property? There’s clearly no simple global model relating positional coordinates to land or location value, so nearby data must be used. The disadvantage that local models have (as opposed to global ones) is the paucity of useful data. For many problems, there just isn’t enough data for local methods to perform well, either because the data is hard to collect or because the space is too convoluted (high dimensionality)– or both.

The lesson is that both local and global methods of inference and modeling are valuable, and neither category is uniformly superior. Solving a complex problem usually requires both.
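To make the contrast concrete, here is a minimal sketch of the latitude-and-temperature example above: a global linear fit versus a local nearest-neighbors average. The data points are invented round numbers, not real climate measurements.

```python
# A global model (one linear fit over all data) versus a local one
# (average of the k nearest observations). Toy data, for illustration.
import numpy as np

# Hypothetical (latitude, mean annual temperature in Celsius) pairs.
lat = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
temp = np.array([27, 26, 24, 19, 13, 7, 0, -8, -17], dtype=float)

# Global: fit one line to everything.
slope, intercept = np.polyfit(lat, temp, deg=1)

def predict_global(x):
    return slope * x + intercept

# Local: average the k nearest observations to the query point.
def predict_local(x, k=3):
    nearest = np.argsort(np.abs(lat - x))[:k]
    return temp[nearest].mean()

query = 42.0  # roughly Rome's latitude
print("global estimate:", round(predict_global(query), 1))
print("local estimate: ", round(predict_local(query), 1))
# The global fit predicts a cool temperature for Rome's latitude and
# misses its actual warmth; a local estimate is only as good as the
# nearby, comparable data it can draw on.
```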

So what do these concepts of global versus local inference have to do with economics? Well, archetypical socialism is a global-minded approach. Certain social justice constraints (“no one should have less than <X>”, “total environmental pollution cannot exceed <Y>”) are set with the intention (at least, the stated intention, as many left-wing governments have been execrably corrupt) of keeping society fair, stable, and sane. The problem is that this can lead toward a command economy, and those do a poor job of solving the fundamental wealth-creation problem: building things people want but don’t yet have the vision to know they want. Command economies can produce commodities “to spec”, but no command economy could have come up with Google. Capitalism, on the other hand, is fiercely local. It has its own intellectually defensible brand of fairness (right-libertarianism) but no interest in enforcing global social-justice constraints. It doesn’t have the tools, and it has no interest in developing them. What it does extremely well is enable the individual to exploit local information (that command-economy bureaucracies would never acquire, and over which they would never agree on an interpretation) for personal benefit. This is, in effect, what markets are: distributed, computational methods that aggregate trillions of bits of local information from millions of self-interested actors into a signal.

What I strongly intend to convey, of course, is that a modern economy must draw from both columns. Socialist command economies degenerate rapidly, in large part because they must curtail individual freedoms in order to maintain the global structure to which they’re committed. Laissez-faire, on the other hand, diverges. For a while, the use of local information conveys a computational benefit and better economic decisions are made. Unfortunately, this also has a tendency to generate inequality among individuals that, in the long term, has a pernicious effect. Inequality among ideas and companies (aggregations of effort) is a good thing, because it means that bad ideas die and good ones grow in importance. When that’s applied to people, it’s not desirable. A class of economically disenfranchised people emerges, and so does an entrenched, wealthy aristocracy. The modern corporate elite is of the latter category. The incompetence and attitude of entitlement that reside at the top of the American corporate world are truly terrifying.

One of the issues with both capitalism and socialism is that each tends to generate a defective version of the other. It seems to be a natural tendency. Supposedly socialist Russia had crime-ridden, violent black markets– the kind associated with illegal psychoactive drugs in North America– over commodities as staid as light bulbs. A command economy will not eradicate the very natural will to trade, and this creates a market. Making that illegal simply denies participation to law-abiding people, leaving what markets do exist unregulated and inefficient. On the other hand, American capitalism has generated a perverse socialism-for-the-rich. CEOs’ kids don’t “work their way up” in a meritocracy. Their wages aren’t set by a real market, but via favor-trading within a socially closed network of self-dealing corporate officials. Their daddies buy their educational admissions and resumes and, if they’re truly too stupid to make it on their own despite immense assistance, board-position sinecures at large corporations.

Right-libertarians (the “Tea Party”) blame corporatism on the government– it’s all this damn regulation that creates the corporate problem, they say– but that’s not a useful assessment. What actually happens is that the existing elite wants badly to stay elite and will use its immense resources in order to do so. They aren’t ideologically capitalistic. They would be just as comfortable as the ruling party in a left-wing, nominally “socialistic” tyranny as long as they were at the social apex. What they are is self-protecting. If corrupting governmental and educational institutions is an option to them (and it always is, because most modern corruption is in the form of invitations to parties, not actual wads of cash) they will do it.

Corporate America has generated its own royalty. What is different about 2012 from five or ten or fifty years ago is that people are now cognizant of it. The most interesting right-wing movement in the United States is the nascent Tea Party. While I disagree with them vehemently (as a left-libertarian, and also as one who favors science over emotional argument), I will give them credit for this: at their intellectual core (and, yes, there is one) they are aggressively anti-corporate. Post-2008, Americans get that the Corporate System is not a meritocracy, not rational, and not even real capitalism. It’s designed to provide the best of two systems (socialism and capitalism) for a well-connected social and increasingly hereditary elite, regardless of merit, and the worst of both systems for everyone else. For themselves, they create an economic arrangement in which they can derive enormous personal benefit from random variables that exist in the economy, but at the same time build and jealously guard a private social-welfare system that ensures they stay rich, well-positioned, and well-connected even if they fail. For the rest, they provide mostly downside, displacement, and discomfort. A perfect metaphor for this is air travel. Well-connected people get discounted or free air travel, special lounges in the airport, and access to comfortable private aviation. The rest of us get Soviet-style service and capitalistic price volatility: the worst from both systems.

What’s changing is that people all over the world are beginning to see that we don’t have a rational economy. We have a priesthood caste of executives who rule by their own version of “divine right”, claiming that the (invisible, to most people) network of social support that put them there represents the “wisdom of the market”. We have a world where the transference of money into power is not only politically accepted, but increasingly seen as socially normal. It’s not called “corruption” anymore when journalists and government officials attend depraved parties in Davos, La Jolla or Aspen; it’s “self-interest”.

So what are we going to do? How do we overthrow the tyranny of position, especially in a world where such entrenchment can masquerade as “reputation”? We now have a world in which private social assistance can be presented as a “talent acquisition” (or “acqui-hire”) when our forefathers at least had the insight to call it “welfare for rich people”. These people are very well-connected and extremely adept at corrupting press and educational institutions in order to make their positions seem legitimate. They’ve created their own variety of rule by divine right, with “God’s will” ascertained in accord with how much money a person has (regardless of how he got it). For one concrete example, people are usually evaluated in a professional context according to job titles. Well, what are these but knighthoods and baronies assigned “from above”? And “up” points toward an entrenched, never-elected social elite who are not so much capitalism’s “market winners” as those best positioned to exploit an increasingly industrial economy. Is there really a difference between “Senior VP at BigCo” and “Thane of Cawdor”? I don’t see one. So why is the former resume gold, while the latter is a laughable anachronism?

I’m running out of time, so I’ll stop bashing the corporates and cut to the chase. The 18th century was when the idea of rational government came to the fore, and it changed everything. People argue that the French Revolution “failed” because it led to Napoleon, but the truth is that Napoleon was quite restrained in comparison to almost all feudal lords, much less absolute monarchs. Progress toward rational government was not monotonic, but once the ideas reached implementation, they couldn’t be rolled back outright. The ideals lived on. They continue, even now, in the darkest and most irrationally governed corners of the earth, such as the Middle East. These concepts of rational government may not be implemented there yet, but they are well-known and considered superior among a large number of educated people. I believe that the 21st century is when we’ll start to see real progress toward the rational economy. Why? Because it will be the only thing that can compete in the technological world. Only societies with rational economies and true “meritocracy” will be able to grow their prosperity at a technological (possibly 10+ percent per year) rate.

The Industrial Revolution required rational government, because the theocracies and monarchies of old would never have tolerated the social and economic rise of these upstarts. Change to a technological world will meet similar opposition from our entitled social, nominally “corporate”, elite. I don’t believe in a “Singularity”, but there are phase changes in growth, and the fast-evolving new entrants frequently “win”. Immensely powerful reptiles (dinosaurs) died out, while the small, fast-evolving creatures with mutant sweat glands (mammals) were able to adapt. Tool-using animals were able to control their environment in a way that their predecessors could not, and eventually evolved into the first humans. Awareness of time and future-orientation led to the agrarian revolution, characterized by 0.05 to 1% annual economic growth, and rational government made the industrial (1 to 10% annual growth) world, emerging in the late 18th century, possible. Now, the world is pregnant with a new possibility: a technological world characterized by rapid economic growth, general prosperity instead of poverty and, if we do it right, an end to this sickening tyranny of geography (physical and social) that has rendered most of the world’s population poor. However, we’ll need a different kind of thought to make this possible. We’ll need a world where the right people– technologically-minded people– are making the decisions, and we need an economy that is not only rational, but protects its own rationality. This requires both the protection against divergence (poverty and self-perpetuating, entitled wealth) provided by socialism and the individual, local liberty of capitalism, but it requires something more: a technologically-minded commitment to solving hard problems using approaches (such as, in software, open allocation) that would previously have been considered radical.


What is “at-will employment”?


Since I write a lot about the sociology and HR game theory of software engineering, I get a lot of hits to my blog for queries related to employment and, in particular, its endgame, such as “should you sign a PIP” (answer: no) and “can they fire you for <X>”. I’m going to answer the second category of question. Can a company fire employees for any reason? Isn’t that what at-will employment means?

Well, no. Companies like to claim that at-will employment means that an employee can be terminated for any reason and has no recourse. They like to argue that people never win termination lawsuits except when discrimination is obviously provable. They say this because they don’t want to deal with the lawsuits, and they believe they can get away with low or nonexistent severance payments if the employee actually believes this. However, that’s not true. If it were, companies would not write severance packages at all.

Before I go any further: I am not an attorney, and this material is provided for informational purposes only. This is no substitute for legal advice, and if you are at risk of, or in the process of, a messy termination, you should be consulting a lawyer. Preferably, find an attorney in your state, because these laws differ depending on where you are. What I’m providing here is for the purpose of explaining the HR game theory involved, with the goal of shining some light on how companies work.

What is “at-will employment”?

Legally, it means that the employment contract is of indefinite duration and can be ended by either party on a same-day basis. An employee is not in breach if he quits at any time, nor is the firm required to provide notice or a severance package. It exists to protect 3 classes of termination:

  1. Layoffs for business reasons. Companies that choose to end employment for a business reason, such as the closing of a factory or a need to reduce payroll, have the right to do so. In some cases, they are required to provide notice or severance (see: the WARN Act), but not in all.
  2. Objective, published performance or conduct standards. Companies have the right to set performance standards as they wish, as long as they are uniformly enforced, and to fire people who fail to meet them. (Uneven enforcement is another matter. If it can be proven, the employee might have recourse.) It is not for the law to decide that a standard is “unreasonably high”. If a business decides that it needs to set productivity standards at a level that only 5 percent of the population can meet, that’s their prerogative.
  3. No-fault “breakups” in small companies. In small firms where the failure of a single working relationship can make employment completely untenable (because there’s no possibility for “transfer” in an 8-person company) the employer maintains the right to say, “We’re just not that into you”. This does not apply to large companies, where an employee who fails to get along with one manager could be transferred to another.

I don’t think any of these provisions are unreasonable. Companies should be allowed to lay people off for valid business reasons, set uniformly-enforced performance standards as high or low as the business demands, and terminate people whose failed relationships make employment untenable.

What doesn’t it cover?

A surprisingly large set of terminations is not covered by the above. I’m going to pick on one: resume lies. Clearly, this is fireable under any circumstances, right? Well, no.

Job fraud, defined as procuring a job that would not normally be available using misleading information that claims competencies the individual does not have, is a breach of professional ethics in any line of work, and in life-critical professions or law, it’s illegal. (If you say you’re a doctor and don’t have a medical degree, you’re likely to end up in prison.) People can be terminated for that, no question. So what is job fraud? If someone claims to have a PhD and didn’t even finish college, that would be considered job fraud, if the job requires a higher education. If someone is hired as an Enterprise Salesman based on his claimed 5 years at Goldman Sachs (and the contacts that would imply) but did not work there at all, that’s job fraud. What about someone who covers up an embarrassing 4-month gap in employment? That’s a much grayer topic. Let’s say that the applicant actually had 37 months of experience in that field and, with the inflation, claimed 41. Well, that’s like rounding up a 3.651 GPA to 3.7: not exactly fraudulent. To establish just cause for termination, the company must prove that it would not have hired that same person with only 37 months of experience: that a pre-existing cutoff for that position was between 37 and 41 months of employment. (It could set a policy of not hiring people with 4-month gaps in their history, but juries will not look kindly on a company with such a policy.)

Broadly speaking, resume lies fall into two categories: job fraud, and status inflation. If job fraud occurred– the person would have been ineligible for the role were truthful information provided– the company will have no problem firing that person. Severance or notice are not required. The company is not required to prove that the standard is reasonable, but only that one exists and is uniformly applied. On the other hand, most resume lies are inflations of social status and political success: upgraded titles, tweaked dates, improved performance-based compensation, and peers representing themselves as managers to fix reference problems. It’s much harder for a company to prove that the person would not have been hired were truthful information provided. There’s a spectrum:

  • Overt job fraud, at one extreme, is deception claiming competencies a person does not have, such as a person who never attended medical school claiming to be a doctor. This person can be fired immediately, and often prosecuted.
  • Cosmetic falsification is generally not grounds for termination. Let’s say that a person claims to have ancestors who came over on the Mayflower. In “white-shoe” law or management consulting, this could easily sway an interviewer. It’s a signal that the person is more likely than average to have the pedigree that clients prefer. However, since it’s not an explicit requirement for the job, and the company would have a hard time proving that it swayed their decision, few companies would be able to justify terminating an employee on that alone.

This is also why it’s much more dangerous (and far less useful) to fake a college degree than to upgrade a job title, self-assigning a promotion when one was actually passed over. The first is almost always considered fireable job fraud, and it lacks plausible deniability (you cannot claim to have forgotten how you spent 4 years of your life). The second has two advantages. First, unless that title were an explicit job requirement (“we only hire SVPs, not VPs”) it doesn’t constitute grounds for termination on its own. Second, there’s plausible deniability: “I was promoted during my last week there, and they must have forgotten to put it in the system. Since I was no longer working there, I wasn’t aware of this discrepancy.”

Why does HR exist?

The purpose of the above discussion was to show that companies don’t have as much freedom to “fire at will” as they might want. They can’t just can people who annoy them. However, most company executives would like to be able to fire as they wish, and preferably cheaply. That’s where HR comes in.

For blue-collar, commodity work, companies have the unambiguous right to set performance standards and fire people who don’t meet them. However, the standard must be uniformly enforced. If a company decides that 150 widgets per hour is the productivity standard and fires Mark for achieving 149, it’s within its rights to do so. However, if Mark can establish that Tom, the boss’s son, was regularly pulling 145 and was retained, then Mark can argue that his termination was wrongful.

In most states, uneven enforcement of policies is not enough to justify a lawsuit. Rather, wrongful termination requires a specific protected context (discrimination, retaliation, harassment, union-busting) before a case will even be heard, and “he just didn’t like me and was unfair” does not qualify. It is, however, not that hard to create such a context. Here’s one way to do it: in response to managerial adversity, put in for a transfer. Now, a negative performance review (if that review reduces transfer opportunities, or would be visible to the manager of one’s target team at all) constitutes tortious interference with an attempt to establish a mutually beneficial working relationship elsewhere in the company. Since part of the worker’s job is to remain aware of internal mobility opportunities, this is direct interference with work performance, and that is, of course, a form of harassment. It’s retaliation against legitimate use of internal processes.

This approach doesn’t work if the company can establish a uniformly enforced, objective performance standard. No employee at 149 widgets per hour was ever retained or afforded transfer. However, the nature of white-collar “knowledge work” is that it’s almost impossible to define standards for it, much less enforce fairness policies that can apply uniformly across a job description. Consider software. A first scratch at a standard might be “20 lines of code per day”. For engineers on new development, this is easy to meet. For maintainers whose time is spent mostly reading code, it’s very hard. Also, it provides dangerous incentives, because it’s relatively easy (and in the long term, costly) for an engineer to add 20 lines of useless code.

This, of course, is what’s behind the “calibration score” nonsense and global stack-ranking in which companies regularly engage. The original intention behind these systems was to enable quick layoffs without discussion. Actually cutting projects that made no sense (which is what the company needed to do, possibly in addition to reducing personnel) required a discovery process involving a lot of discussion and input, and word would get out that something was going on. The purpose of the global stack-ranking is to provide a ready-made layoff for any cutoff percentage. It’s a simple database query. What percentage are we cutting? 7.5? Here’s your list. This is actually a terrible way to do layoffs, however, because it means people are cut without eliminating projects. So (a) fewer people are left to do the same amount of work, and (b) the excessive complexity– what likely caused the company to underperform in the first place– never goes away. That said, it remains a popular way of doing layoffs because it’s quick and relatively easy. It also generates a slew of bad data.
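To show how mechanical the “here’s your list” step is, here is a minimal sketch of that query. The names, scores, and cutoff are hypothetical; a real company would run something equivalent against its HR database.

```python
# Rank everyone by "calibration score" and return the bottom X percent.
# All data below is invented for illustration.
import math

employees = [
    ("Alice", 3.8), ("Bob", 2.9), ("Carol", 3.1),
    ("Dave", 2.6), ("Eve", 3.4), ("Frank", 2.8),
]

def layoff_list(staff, cut_pct):
    """Return the names of the lowest-scoring cut_pct percent of staff."""
    n_cut = math.ceil(len(staff) * cut_pct / 100.0)
    ranked = sorted(staff, key=lambda pair: pair[1])  # ascending by score
    return [name for name, _ in ranked[:n_cut]]

print(layoff_list(employees, 7.5))  # ['Dave'] with this toy data
```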

One of the dangers of bad data is the temptation to use it as if it were valid. “Calibration scores” are subjective, politically motivated, and often bear virtually no correlation to an individual’s ability to add value to the firm. However, in the context of at-will employment, they have an additional use. The books can be cooked in order to justify “performance-based” firing of any employee for any reason. Instead of “150 widgets per hour”, the standard becomes “3 points out of 5”. Of course, the numbers are bullshit made up to justify personnel decisions that were already made, but they create the appearance of a uniform policy: no employee below 3.0 was retained.

This also explains the phenomenon of transfer blocks. Employees who meet too much documented political adversity find themselves immobile in the company. Either that person’s personnel file is damaged (often by falsehoods the employee will never be allowed to see, because she would have a serious harassment claim if she knew what was in it) so severely that no manager will want her, or she is prohibited from moving at all as per HR policy, even if there’s another manager who’d gladly hire her. At first glance, this makes no sense. Why would HR care enough to interfere with internal mobility? The answer is that a transfer can break the uniformity that’s needed to justify termination. Let’s say that this employee was particularly politically unsuccessful and received a damning 2.6 (where 3.0 is passing) on her performance review. If she’s allowed to transfer to another team, then another employee who received a 2.7 and was not allowed to transfer has recourse. Of course, the likelihood that an ex-employee will compile this sort of information in enough volume to build a case is very low, because the data are hard to get without initiating a lawsuit, but this should explain why companies are insistent on keeping those “calibration scores” secret.

The HR transfer block is a nasty thing to encounter. It doesn’t occur unless a manager is so hell-bent on firing an employee that he must be kept in place. It also means that, even if the new manager desperately wants to hire a person, she often can’t. There are only two ways to beat an HR transfer block. One is to escalate, but that rarely works, because managers at a high enough level to do this are generally too busy to hear petitions from peasants they don’t know. The second is to offer a bribe to someone in HR who can change the review. This practice is a lot more common than people think, but that’s not a good position to be in, and I’d say it’s not the recommended course of action in most cases. (Just get another job.) HR bribes don’t always work, and they typically cost 3 to 10 percent of a year’s salary. When you’re at the point of having to pay for your own job, you’re better off just finding another one. Keep in mind, also, that if you’re in this position you’ll often be paying two to three times the bribe you offer, because what you’re doing is immediately fireable. (The first rule of extortion: when offered an illegal bribe, triple it.)

There’s one more thing to talk about. What is it that companies really fear? When they write these “performance improvement plans” (PIPs) and create elaborate machinery to justify terminations that are actually political, what are they trying to defend themselves against? First, they don’t want to be sued– that much is clearly true. However, they also know that the vast majority of employees won’t sue them. It’s extremely time-consuming, it can be dangerous to one’s professional reputation, and employers win most of the time. The economic cost of losing a lawsuit, multiplied by the low probability that one occurs, is not something a large firm fears. Secondly, they’re afraid of public disparagement, which is what severance packages are really about. People who can move on to another job without financial worry or loss of career status are unlikely to spend months to years of their lives suing an ex-employer. They move on. However, there’s a third fear that is often not discussed, but it’s important to understand it when negotiating severance (in which, by the way, you must involve a lawyer because you do not want an extortion rap). Companies aren’t afraid of the financial costs of a lawsuit, and they can’t do much about disparagement, but there’s something else that terrifies them. Discovery. Even if the employer wins a termination lawsuit– and, fair warning, they win about 80 percent of the time– they’re going to have to prove that the termination occurred in the context of a uniformly enforced, fair performance standard. Everyone knows that white-collar work is subjective and often impossible to measure, so there are a lot of angles that can be exploited. All sorts of personnel data, HR intrigue, and possibly even historical compensation tables, can be brought into the open. That’s a terrifying thought from an employer’s perspective: much scarier than losing a termination lawsuit. So even winning a termination lawsuit is a pyrrhic victory for the employer. However, that’s unfortunately also true for the employee: winning is pyrrhic, and losing can be disastrous. There’s a delicate “mutually assured destruction” (M.A.D.) at play. Companies will often pay severance just to make the dance not happen.

This also leads to the one bit of advice that I’m going to give. This is not legal advice, but I’ve seen enough good people (and a few bad ones) get fired that I think it’s valid. Unless you are in financial need, you probably don’t want to push for more cash severance than is offered. If they offer 3 months, and you’re likely to have a job in two, then take it. Don’t try to push your go-away fee up. However, never walk away for free, and always make sure your reputation is protected. You can take a zero for cash, but if you do, mandate a positive reference (written, agreed-upon, contractually obligated) and the right to represent yourself as employed until you find your next job. One major advantage of pushing for a non-financial package is that you’re less likely to inadvertently do something illegal. Companies know that disparagement is a more credible threat, in the age of the Internet, than a termination lawsuit, but you can’t legally make that threat. If you say, “Give me $50,000 in severance or I’ll write a blog post about this”, you’re breaking the law and, while you probably won’t go to jail, you’ve lost all leverage with the employer. With only a reference being asked for, you can frame it as, “Hey, we need to come up with a straight story so we don’t hurt each other in public. I intend to represent my time at your company well, but here’s what I need from you.” Still, work with an attorney to present the deal because this is very weird territory. (I know too much about extortion law for one lifetime, having been illegally extorted by a (now ex-) employer in the spring of 2012.)

That, in a nutshell, is an incomplete summary– but a scratch at the topic– of at-will employment. I hope that none of my readers ever need this information, and certainly that I will never have to use it, but statistically, many will. Good luck to those who must fight.


Never sign a PIP. Here’s why.


I alluded to this topic in Friday’s post, and now I’ll address it directly. This search query comes to my blog fairly often: “should I sign a PIP?” The answer is no.

Why? Chances are, performance doesn’t mean what you think it does. Most people think of “performance improvement” as something well-intended, because they take performance to mean “how good I am at my job”. Well, who doesn’t want to get better at his or her job? Even the laziest people would be able to get away with more laziness if they were more competent. Who wouldn’t want to level up? Indeed, that’s how these plans are presented: as structures to help someone improve professional capability. However, that’s not what “performance” means in the context of an employment contract. When a contract exists, non-performance is falling short of an agreed-upon provision of the contract. It doesn’t mean that the contract was fulfilled but in a mediocre way. It means that the contract was breached.

So when someone signs a PIP, he might think he’s agreeing that, “Yeah, I could do a few things better.” That’s not what he’s actually saying, at least not to the courts. He’s agreeing to be identified as a non-performing– again, in the legal sense of the word– employee, in the same category as one who doesn’t show up or who breaks fundamental ethical guidelines. Signing a PIP isn’t an admission that one could have been better at one’s job, but that one wasn’t doing one’s job. Since white-collar work is subjective and job descriptions are often ill-defined, making the binary question of professional and contractual performance difficult to assess in the first place, this sort of admission is gold for an employer looking to fire someone without paying severance. The employer will have a hell of a time proving contractual non-performance (which is not strictly required in order to fire someone, but makes the employer’s case stronger) without such a signature, given that most white-collar work has ill-defined requirements and performance measures.

Managers often claim that signing such paperwork only constitutes admission to having read it, not agreeing to the assessment. Even if true, it’s still a bad idea to sign it. This is now an adversarial relationship, which means that what makes the manager’s work (in the firing process) easier makes your life worse. Verbally, you should say “I agree to perform the duties requested of me, and to make the changes indicated, to the best of my ability, but there are factual inaccuracies and I cannot sign this without speaking to an attorney.” If you are pressed to sign, or threatened with termination, then you may sign it, but include the words “under duress”. (The shorthand “u.d.” will not suffice.) What this means is that, since you were threatened with summary termination, you were not free to decline the PIP, and therefore your signature is meaningless.

Whether you sign the PIP or not, you will probably be fired in time, unless you get another job before the process gets to that point. Not signing it doesn’t make it impossible for them to fire you. It only makes it somewhat harder. So why is it harmful to sign it? You want two things. First, you want time. The longer you have to look for a new job while still employed at the old company, the better. If your manager sees you as a risk of messy termination, he’s more likely to let you leave on your own terms because it generates minimal work for him. PIPs are humiliating and appear to be an assertion of power, but they’re an exhausting slog for a manager. Second, you want severance if you don’t find a job in time and do get fired. Severance should never be your goal– non-executives don’t get large severances, so it’s generally better for your career and life to get another job– but you shouldn’t give that away for free.

There isn’t any upside to signing the PIP because, once one is presented, the decision to fire has already been made. A manager who genuinely wants to improve a person’s performance will communicate off the record. Once the manager is “documenting”, the relationship’s over. Also, people very rarely pass PIPs. Some people get the hint and leave, others fail and are fired, and in the remainder of cases, the PIP is ruled “inconclusive”, which means “not enough evidence to terminate at this time”. That’s not exactly an endorsement. For a PIP to be passed would require HR to side with the employee over management, and that will never happen. If the employee is under the same manager in 6 months, there will be another PIP. If the employee tries to move to another team, that will be almost impossible, because a “passed” PIP doesn’t mean exoneration. The reputation for instability created by a PIP lingers on for years. What I am essentially saying here is that, once a PIP appears, you should not sign it for the sake of maintaining a professional relationship. There is no relationship at that point.

Signing the PIP means you don’t know how to play the termination endgame. It means that you have no idea what’s happening to you, and you can be taken advantage of.

This said, there’s a way of not signing it. If you appear to be declining to sign out of bitterness or anger, that doesn’t work in your favor. Then you come off as childish. Childish people are easy to get rid of: just put them in increasingly ridiculous circumstances until they lose their cool and fire themselves by doing something stupid, like throwing a stapler at someone. The image you want to project is of a confident, capable adult– one who will not sign the PIP, because he knows it’s not to his advantage to do so, and who knows his rights under the law. This makes you intimidating. You don’t want to frighten adversaries like this– a frightened enemy is dangerously unpredictable– but you do want to intimidate them, so they get out of your way.

There’s a lot more to say about PIPs, which are dishonestly named, since their real purpose is to create a paper trail to justify firing someone; I’ll cover that in a future post. For now, I think I’ve covered the most important PIP question. Don’t sign the fucking thing. Be professional (that’s more intimidating than being bitter) but decline to sign and, as fast as you can, get another job. If you see a PIP, moving on is your job.


Careerism breeds mediocrity


A common gripe of ambitious people is the oppressive culture of mediocrity that almost everyone experiences at work: boring tasks, low standards, risk aversion, no appetite for excellence, and little chance to advance. The question is often asked: where does all this mediocrity come from? Obviously, there are organizational forces– risk-aversion, subordination, seniority– that give it an advantage, but what might be an individual-level root cause that brings it into existence in the first place? What makes people preternaturally tolerant of mediocrity, to such a degree that large organizations converge to it? Is it just that “most people are mediocre”? Certainly, anyone can become complacent and mediocre, given sufficient reward and comfort, but I don’t think it’s a natural tendency of humans. In fact, I believe it leaves us dissatisfied and, over the long run, disgusted with working life.

Something I’ve learned over the years about the difference between mediocrity and excellence is that the former is focused on “being” and what one is, while the latter is about doing and what one creates or provides. Mediocrity wants to be attractive or important or socially well-connected. Excellence wants to create something attractive or perform an important job. Mediocrity wants to “be smart” and for everyone to know it. Excellence wants to do smart things. Mediocrity wants to be well-liked. Excellence wants to create things worth liking. Mediocrity wants to be one of the great writers. Excellence wants to write great works. People who want to hold positions, acquire esteem, and position their asses in specific comfortable chairs tend to be mediocre, risk-averse, and generally useless. The ones who excel are those who go out with the direct goal of achieving something.

The mediocrity I’ve described above is the essence of careerism: acquiring credibility, grabbing titles, and taking credit. What’s dangerous about this brand of mediocrity is that, in many ways, it looks like excellence. It is ambitious, just toward different ends. Like junk internet, it feels like real work is getting done. In fact, this variety of mediocrity is not only socially acceptable but drilled into children from a young age. It’s not “save lives and heal the sick” that many hear growing up, but “become a doctor”.

This leads naturally to an entitlement mentality, for what is a title but a privilege of being? Viscount isn’t something you do. It’s something you are, either by birth or by favor. Upper-tier corporate titles are similar, except with “by favor” being common because it must at least look like a meritocracy when, in truth, the proteges and winners have been picked at birth.

Corporations tend to be risk-averse and pathological, to such a degree that opportunities to excel are rare, and therefore become desirable. Thus, they’re allocated as a political favor. To whom? To people who are well-liked and have the finest titles. To do something great in a corporate context– to even have the permission to use your own time in such a pursuit– one first has to be something: well-titled, senior, “credible”. You can’t just roll up your sleeves and do something useful and important, lest you be chastised for taking time away from your assigned work. It’s ridiculous! Is it any wonder that our society has such a pervasive mentality of entitlement? When being something must occur before doing anything, there is no other way for people to react.

As I get older, I’m increasingly negative on the whole concept of careerism, because it makes being reputable (demonstrated through job titles) a prerequisite for doing anything useful, and its priorities thereby generate a culture of entitled mediocrity. What looks like ambition is actually a thin veneer over degenerate, worthless social climbing. Once people are steeped in this culture for long enough, they’re too far gone and real ambition has been drained from them forever.

This, I think, is Corporate America’s downfall. In this emasculated society, almost no one wants to do any real work– or to let anyone else do real work– because that’s not what gets rewarded, and to do anything that’s actually useful, one has to be something (in the eyes of others) first. This means that the doers who remain tend to be the ones who are willing to invest years in the soul-sucking social climbing and campaigning required to get there, and the macroscopic result of this is adverse selection in organizational leadership. Over time, this leaves organizations unable to adapt or thrive, but it takes decades for that process to run its course.

What’s the way out? Open allocation. In a closed-allocation dinosaur company, vicious political fights ensue about who gets to be “on” desirable projects. People lose sight of what they can actually do for the company, distracted by the perpetual cold war surrounding who gets to be on what team. You don’t have this nonsense in an open-allocation company. You just have people getting together to get something important done. The way out is to remove the matrix of entitlement, decisively and radically. That, and probably that alone, will evade the otherwise inevitable culture of mediocrity that characterizes most companies.



Fourth quadrant work


I’ve written a lot about open allocation, so I think it’s obvious where I stand on the issue. One of the questions that is always brought up in that discussion is: so who answers the phones? The implicit assumption, with which I don’t agree, is that there are certain categories of work that simply will not be performed unless people are coerced into doing it. To counter this, I’m going to answer the question directly. Who does the unpleasant work in an open-allocation company? What characterizes the work that doesn’t get done under open allocation?

First, define “unpleasant”. 

Most people in most jobs dislike going to work, but it’s not clear to me how much of that is an issue of fit as opposed to objectively unpleasant work. The problem comes from two sources. First, companies often determine their project load based on “requirements” whose importance is assessed according to the social status of the person proposing them rather than any reasonable notion of business, aesthetic, or technological value, and that generates a lot of low-yield busywork that people prefer to avoid because it isn’t very important. Second, companies and hiring managers tend to be ill-equipped to match people to their specialties, especially in technology. Hence, you have machine learning experts working on payroll systems. It’s not clear to me, however, that there’s this massive battery of objectively undesirable work on which companies rely. There’s probably someone who’d gladly take on a payroll-system project as an excuse to learn Python.

Additionally, most of what makes work unpleasant isn’t the work itself but the subordinate context: nonsensical requirements, lack of choice in one’s tools, and unfair evaluation systems. This is probably the most important insight that a manager should have about work: most people genuinely want to work. They don’t need to be coerced, and doing that will only reduce their intrinsic incentives in the long run. In that light, open allocation’s mission is to remove the command system that turns work that would otherwise be fulfilling into drudgery. Thus, even if we accept that there’s some quantity of unpleasant work that any company will generate, it’s likely that the amount of it will decrease under open allocation, especially as people are freed to find work that fits their interests and specialty. What’s left is work that no one wants to do: a smaller set of the workload. In most companies, there isn’t much of that work to go around, and it can almost always be automated.

The Four Quadrants

We define work as interesting if there are people who would enjoy doing it or find it fulfilling– some people like answering phones– and unpleasant if it’s drudgery that no one wants to do. We call work essential if it’s critical to a main function of the business– money is lost in large amounts if it’s not completed, or not done well– and discretionary if it’s less important. Exploratory work and support work tend to fall into the “discretionary” set. These two variables split work into four quadrants:

  • First Quadrant: Interesting and essential. This is work that is intellectually challenging, reputable in the job market, and important to the company’s success. Example: the machine learning “secret sauce” that powers Netflix’s recommendations or Google’s web search.
  • Second Quadrant: Unpleasant but essential. These tasks are often called “hero projects”. Few people enjoy doing them, but they’re critical to the company’s success. Example: maintaining or refactoring a badly-written legacy module on which the firm depends.
  • Third Quadrant: Interesting but discretionary. This type of work might become essential to the company in the future, but for now, it’s not in the company’s critical path. Third Quadrant work is important for the long-term creative health of the company and morale, but the company has not been (and should not be) bet on it.  Example: robotics research in a consumer web company.
  • Fourth Quadrant: Unpleasant and discretionary. This work isn’t especially desirable, nor is it important to the company. This is toxic sludge to be avoided if possible, because in addition to being unpleasant to perform, it doesn’t look good in a person’s promotion packet. This is the slop work that managers delegate out of a false perception of a pet project’s importance. Example: at least 80 percent of what software engineers are assigned at their day jobs.

The mediocrity that besets large companies over time is a direct consequence of the Fourth Quadrant work that closed allocation generates. When employees’ projects are assigned, without appeal, by managers, the most reliable mechanism for project-value discovery– whether capable workers are willing to entwine their careers with it– is shut down. The result, under closed allocation, is that management does not get this information regarding what projects the employees consider important, and therefore won’t even know what the Fourth Quadrant work is. Can they recover this “market information” by asking their reports? I would say no. If the employees have learned (possibly the hard way) how to survive a subordinate role, they won’t voice the opinion that their assigned project is a dead end, even if they know it to be true.

Closed allocation simply lacks the garbage-collection mechanism that companies need in order to clear away useless projects. Perversely, companies are much more comfortable with cutting people than projects. On the latter, they tend to be “write-only”, removing projects only years after they’ve failed. Most of the time, when companies perform layoffs, they do so without reducing the project load, expecting the survivors to put up with an increased workload. This isn’t sustainable, and the result often is that, instead of reducing scope, the company starts to underperform in an unplanned way: you get necrosis instead of apoptosis.

So what happens in each quadrant under open allocation? First Quadrant work gets done, and done well. That’s never an issue in any company, because there’s no shortage of good people who want to do it. Third Quadrant work also gets enough attention, likewise, because people enjoy doing it. As for Second Quadrant work, that also gets done, but management often finds that it has to pay for it, in bonuses, title upgrades, or pay raises. Structuring such rewards is a delicate art, since promotions should represent respect but not confer power that might undermine open allocation. However, I believe it can be done. I think the best solution is to have promotions and a “ladder”, but for its main purpose to be informing decisions about pay, and not an excuse to create power relationships that make no sense.

So, First and Third Quadrant work are not a problem under open allocation. That stuff is desirable and allocates itself. Second Quadrant work gets done, and done well, but it’s expensive. Is this so bad, though? The purpose of these rewards is to compensate people for freely choosing work that would otherwise run against their interests and careers. That seems quite fair to me. Isn’t that how we justify CEO compensation? They do risky work, assume lots of responsibilities other people don’t want, and are rewarded for it? At least, that’s the story. Still, a “weakness” of open allocation is that it requires management to pay for work that it could get “for free” in a more coercive system. The counterpoint is that coerced workers are generally not going to perform as well as people with more pleasant motivations. If the work is truly Second Quadrant, it’s worth every damn penny to have it done well.

Thus, I think it’s a fair claim that open allocation wins in the First, Second, and Third Quadrant. What about the Fourth? Well, under open allocation, that stuff doesn’t get done. The company won’t pay for it, and no one is going to volunteer to do it, so it doesn’t happen. The question is: is that a problem?

I won’t argue that Fourth Quadrant work doesn’t have some value, because from the perspective of the business, it does. Fixing bugs in a dying legacy module might make its demise a bit slower. However, I would say that the value of most Fourth Quadrant work is low, and much of it is negative in value on account of the complexity that it imposes, in the same way that half the stuff in a typical apartment is of negative value. Where does it come from, and why does it exist? The source of Fourth Quadrant work is usually a project that begins as a Third Quadrant “pet project”. It’s not critical to the business’s success, but someone influential wants to do it and decides that it’s important. Later on, he manages to get “head count” for it: people who will be assigned to complete the less glamorous work that this pet project generates as it scales; or, in other words, people whose time is being traded, effectively, as a political token. If the project never becomes essential but its owner is active enough in defending it to keep it from ever being killed, it will continue to generate Fourth Quadrant work. That’s where most of this stuff comes from. So what is it used for? Often, companies allocate Third Quadrant work to interns and Fourth Quadrant work to new hires, not wanting to “risk” essential work on new people. The purpose is evaluative: to see if this person is a “team player” by watching his behavior on relatively unimportant, but unattractive, work. It’s the “dues paying” period and it’s horrible, because a bad review can render a year or two of a person’s working life completely wasted.

Under open allocation, the Fourth Quadrant work goes away. No one does any. I think that’s a good thing, because it doesn’t serve much of a purpose. People should be diving into relevant and interesting work as soon as they’re qualified for it. If someone’s not ready to be working on First and Second Quadrant (i.e. essential) work, then have that person in the Third Quadrant until she learns the ropes.

Closed-allocation companies need the Fourth Quadrant work because they hire people but don’t trust them. The ideology of open allocation is: we hired you, so we trust you to do your best to deliver useful work. That doesn’t mean that employees are given unlimited expense accounts on the first day, but it means that they’re trusted with their own time. For a contrast, the ideology of closed allocation is: just because we’re paying you doesn’t mean we trust, like, or respect you; you’re not a real member of the team until we say you are. This brings us to the real “original sin” at the heart of closed allocation: the duplicitous tendency of growing-too-fast software companies to hire before they trust.


Why I wiped my LinkedIn profile


I wiped my LinkedIn profile recently. It now says:

I don’t reveal history without a reason, so my past jobs summary is blank.

I’m a New York-based software engineer who specializes in functional programming, machine learning, and language design.

This might not be the best move for my career. I’m mulling over whether I should delete the profile outright, rather than leaving a short note that appears cagey. I have a valid point– it really isn’t the rest of the world’s business what companies I have worked for– but I’m taking an unusual position that leaves me looking like a “tinfoiler”. I’m honestly not one, but I do believe in personal privacy. Privacy’s value is insurance against low-probability, high-impact harms. I don’t consider it likely that I’ll ever damage myself by publicly airing past employment history. It’s actually very unlikely. But why take the chance? I am old enough to know that not all people in the world are good, and that fact requires caution in the sharing of information, no matter how innocuous it might seem.

Consistency risk

My personal belief is that more people will damage their careers through respectable avenues such as LinkedIn than on Facebook, the more classic “digital dirt” culprit. For most jobs, no one is going to care what a now-35 software engineer said when he was 19 about getting drunk. Breaking news: all adults were teenagers, and teenagers are sometimes stupid! On the other hand, people could be burned by inconsistencies between two accounts of their career histories. Let’s say that someone’s CV says “March 2003 – February 2009” while his LinkedIn profile says “March 2003 – November 2008”. Uh-oh. HR catches this discrepancy, flags it, and brings the candidate in for a follow-up interview, in which the candidate discloses that he was on severance (and technically employed, but with no responsibilities) for 3 months. There was no lie. It was a benign difference of accounting. Still, the candidate has now disclosed receipt of a severance payment. There’s a story there. Whoops. In a superficial world, that could mean losing the job offer.

This isn’t a made-up story. The dates were different, but I know someone who ended up having to disclose a termination because of an inconsistency of this kind. (LinkedIn, in the case of which I’m aware, wasn’t the culprit.) So consistency risk is real.

Because the white-collar corporate world has so little in the way of actual ethics, the appearance of being ethical is extremely important. Even minor inconsistencies invite a kind of scrutiny that no one wishes to tolerate. The career oversharing that a lot of young people engage in is something I find quite dangerous. Not everything that can damage a person’s reputation is a drunk picture. Most threats and mistakes are more subtle than that, and consistency risk is a big deal.

Replicating a broken system

My ideological issue, however, with LinkedIn isn’t the risk that’s involved. I’ll readily concede that those risks are very mild for the vast majority of people. The benefits of using such a service quite possibly outweigh them. The bigger problem I have with it is that it exists to replicate broken ways of doing things.

In 2013, the employment market is extremely inefficient in almost all domains, whether we’re talking about full-time jobs, consulting gigs, or startup funding. It’s a system so broken that no one trusts it, and when people distrust front-door channels or find them clogged and unusable, they retreat to back-door elitism and nepotism. Too much trust is given to word-of-mouth references (that are slow to travel, unreliable, and often an artifact of a legal settlement) and low-quality signals such as educational degrees, prestige of prior employers, and durations of employment. Local influences have a pernicious effect, the result of which is unaffordable real estate in virtually any location where a career can be built. Highly-qualified people struggle to find jobs– especially their first engagements– while companies complain of a dearth of appropriate talent. They’re both right, in a way. This is a matching problem related to the “curse of dimensionality”. We have a broken system that no one seems to know how to fix.

LinkedIn, at least in this incarnation, is an online implementation of the old-style, inefficient way of doing things. If you want an impressive profile, you have to troll for recommendations and endorsements, trade them, and (if you’ve had a bad separation) use the legal system to demand them in a settlement. You list the companies where you worked, job titles, and dates of employment, even if you honestly fucking hate some of those companies. We’ve used the Internet to give wings to an antiquated set of mechanics for evaluating other people, when we should be trying to do something better.

None of this is intended as a slight against LinkedIn itself. It’s a good product, and I’m sure they’re a great company. I just have an ideological dislike– and I realize that I hold a minority opinion– for the archaic and inefficient way we match people to jobs. It doesn’t even work anymore, seeing as most resumes are read for a few seconds then discarded.

Resumes are broken in an especially irritating way, because they often require people to retain a lasting association with an organization that may have behaved in a tasteless way. I have, most would say, a “good” resume. It’s better than what 98 percent of people my age have: reputable companies, increasing scope of responsibility. Yet, it’s a document through which I associate my name with a variety of organizations. Some of these I like, and some I despise. There is one for which I would prefer for the world never to know that I was associated with it. Of course, if I’m asked, “Tell me about your experience at <X>” in a job interview, for certain execrable values of X, social protocol forbids me from telling the truth.

I’ll play by the rules, when I’m job searching. I’ll send a resume, because it’s part of the process. Currently, however, I’m not searching. This leaves me with little interest in building an online “brand” in a regime vested in the old, archaic protocols. Trolling for endorsements, in my free time, when I’m employed? Are you kidding me?

The legitimacy problem

Why do I so hate these “old, archaic protocols”? It’s not that I have a problem, personally. I have a good resume, strong accomplishments for someone of my age, and I can easily get solid recommendations. I have no need to have a personal gripe here. What bothers me is something else, something philosophical that doesn’t anger a person until she thinks of it in the right way. It’s this: any current matching system between employers and employees has to answer questions regarding legitimacy, and the existing one gets some core bits seriously wrong.

What are the most important features of a person’s resume? For this exercise, let’s assume that we’re talking about a typical white-collar office worker, at least 5 years out of school. Then I would say that “work experience” trumps education, even if that person has a Harvard Ph.D. What constitutes “work experience”? There’s some degree of “buzzword compliance”, but that factor I’m willing to treat as noise. Sometimes, that aspect will go in a candidate’s favor, and sometimes it won’t, but I don’t see it conferring a systemic advantage. I’m also going to say that workplace accomplishments mean very little. Why? Because an unverifiable line on a resume (“built awesome top-secret system you’ve never heard of”) is going to be assumed, by most evaluators, to be inflated and possibly dishonest. So the only bits of a resume that will be taken seriously are the objectively verifiable ones. This leaves:

  • Company prestige. That’s the big one, but it’s also ridiculously meaningless, because prestigious companies hire idiots all the time. 
  • Job titles. This is the trusted metric of professional accomplishment. If you weren’t promoted for it, it didn’t happen.
  • Length of tenure. This one’s nonlinear, because short tenures are embarrassing, but long stints without promotions are equally bad.
  • Gaps in employment. Related to the above, large gaps in job history make a candidate unattractive.
  • Salary history, if a person is stupid enough to reveal it.
  • Recommendations, preferably from management.

There are other things that matter, such as overlap between stated skills and what a particular company needs, but when it comes to “grading” people, look no farther than the above. Those factors determine where a person’s social status starts in the negotiation. Social status isn’t, of course, the only thing that companies care about in hiring… but it’s always advantageous to have it in one’s favor.

What’s disgusting and wrong about this regime is that all of these accolades come from a morally illegitimate source: corporate management. That’s where job titles, for example, come from. They come from a caste of high priests called “managers” who are anointed by a higher caste called “executives” who derive their legitimacy from a pseudo-democracy of shareholders who (while their financial needs and rights deserve respect) honestly haven’t a clue how to run a company. Now, I wouldn’t advise people to let most corporate executives around their kids, because I’ve known enough in my life to know that most of them aren’t good people. So why are we assigning legitimacy to evaluations coming from such an unreliable and often corrupt source? It makes no sense. It’s a slave mentality.

First scratch at a solution

I don’t think resumes scale. They provide low-signal data, and that fails us in a world where there are just so many of the damn things around that a sub-1% acceptance rate is inevitable. I’m not faulting companies for discarding most resumes that they get. What else would they be expected to do? Most resumes come from unqualified candidates who bulk-mail them. Now that it’s free to send a resume anywhere in the world, a lot of people (and recruiters) spam, and that clogs the channels for everyone. The truth, I think, is that we need to do away with resumes– at least of the current form– altogether.

That’s essentially what has happened in New York and Silicon Valley. You don’t look for jobs by sending cold resumes. You can try it, but it’s usually ineffective, even if you’re one of those “rock star” engineers who is always in demand. Instead, you go to meetups and conferences and meet people in-person. That approach works well, and it’s really the only reliable way to get leads. This is less of an option for someone in Anchorage or Tbilisi, however. What we should be trying to do with technology is to build these “post-resume” search avenues on the Internet– not the same old shit that doesn’t work.

So, all of this said, what are resumes good for? I’ve come to the conclusion that there is one very strong purpose for resumes, and one that justifies not discarding the concept altogether. A resume is a list of things one is willing to be asked about in the context of a job interview. If you put Scala on your resume, you’re making it clear that you’re confident enough in your knowledge of that language to take questions about it, and possibly lose a job offer if you actually don’t know anything about it. I think the “Ask me about <X>” feature of resumes is probably the single saving grace of this otherwise uninformative piece of paper.

If I were to make a naive first scratch at solving this problem, here’s how I’d “futurize” the resume. Companies, titles, and dates all become irrelevant. Leave that clutter off. Likewise, I’d ask that companies drop the requirement nonsense where they put 5 years of experience in a 3-year-old technology as a “must have” bullet point. Since requirement sprawl is “free”, it occurs, and few people actually meet any sufficiently long requirement set to the letter, which selects against the people who actually read the requirements. Instead, here’s the lightweight solution: allocate 20 points. (The reason for the number 20 is to impose a coarse granularity; fractional points are not allowed.) For example, an engineering candidate might put herself forward like so:

  • Machine learning: 6
  • Functional programming: 5
  • Clojure: 3
  • Project management: 3
  • R: 2
  • Python: 1

These points might seem “meaningless”, because there’s no natural unit for them, but they’re not. What they show, clearly, is that a candidate has a strong interest (and is willing to be grilled for knowledge) in machine learning and functional programming, moderate experience in project management and with Clojure, and a little bit of experience in Python and R. There’s a lot of information there, as long as the allocation of points is done in good faith– and if it isn’t, that person won’t pass many interviews. Job requirements would be published in the same way: assign importance to each item according to how much it really matters, and keep the total at 20 points.

Since the points have different meanings on each side– for the employee, they represent fractions of experience; for the company, they represent relative importance– it goes without saying that a person who self-assigns 5 points in a technology isn’t ineligible for a job posting that places an importance of 6 for that technology. Rather, it indicates that there’s a rough match in how much weight each party assigns to that competency. This data could be mined to match employees to job listings for initial interviews and, quite likely, this approach (while imperfect) would perform better than the existing resume-driven regime. What used to involve overwhelmed gatekeepers is now a “simple matter” of unsupervised learning.
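To make the mechanics concrete, here is a minimal sketch of how such a match might be scored. The overlap-based scoring rule, the helper name match_score, and the sample skill lists are my own assumptions for illustration; nothing above prescribes a particular similarity measure, and a real system would presumably learn one from hiring outcomes.

# A minimal sketch, assuming both sides publish 20-point allocations as
# dicts mapping skill names to integer points. The overlap-based score is
# an illustrative choice, not a prescribed algorithm.
def match_score(candidate: dict, posting: dict) -> float:
    """Return a score in [0, 1]: how much of the 20 points overlap."""
    skills = set(candidate) | set(posting)
    overlap = sum(min(candidate.get(s, 0), posting.get(s, 0)) for s in skills)
    return overlap / 20.0

candidate = {"machine learning": 6, "functional programming": 5, "clojure": 3,
             "project management": 3, "r": 2, "python": 1}
posting = {"machine learning": 8, "python": 6, "project management": 4, "sql": 2}
print(match_score(candidate, posting))  # overlap of 6 + 3 + 1 = 10 points -> 0.5

Ranking candidates against postings with a score like this is cheap, and it degrades gracefully: a bad-faith allocation buys interviews that the candidate will then fail, exactly as noted above.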

There is, of course, an obvious problem with this, which is that some people have more industry experience and “deserve” more points. An out-of-college candidate might only deserve 10 points, while a seasoned veteran should get 40 or 50. I’ll admit that I haven’t come up with a good solution for that. It’s a hard problem, because (a) one wants to avoid ageism, while (b) the objective here is sparseness in presentation, and I can’t think of a quick solution that doesn’t clutter the process up with distracting details. What I will concede is that, while some people clearly deserve more points than others do, there’s no fair way to perform that evaluation at an individual level. The job market is a distributed system with numerous adversarial agents, and any attempt to impose a global social status over it will fail, both practically and morally speaking.

Indeed, if there’s something that I find specifically despicable about the current resume-and-referral-driven job search culture, it’s in the attempt to create a global social status when there’s absolutely no good reason for one to exist.


No, idiot. Discomfort Is Bad.


Most corporate organizations have failed to adapt to the convexity of creative and technological work, a result of which is that the difference between excellence and mediocrity is much more meaningful than that between mediocrity and zero. An excellent worker might produce 10 times as much value as a mediocre one, instead of 1.2 times as much, as was the case in the previous industrial era. Companies, trapped in concave-era thinking, still obsess over “underperformers” (through annual witch hunts designed to root out the “slackers”) while ignoring the much greater danger, which is the risk of having no excellence. That’s much more deadly. For example, try to build a team of 50th- to 75th-percentile software engineers to solve a hard problem, and the team will fail. You don’t have any slackers or useless people– all would be perfectly productive people, given decent leadership– but you also don’t have anyone with the capability to lead, or to make architectural decisions. You’re screwed.

The systematic search-and-destroy attitude that many companies take toward “underperformers” exists for a number of reasons, but one is to create pervasive discomfort. Performance evaluation is a subjective, noisy, information-impoverished process, which means that good employees can get screwed just by being unlucky. The idea behind these systems is to make sure that no one feels safe. One in 10 people gets put through the kangaroo court of a “performance improvement plan” (which exists to justify termination without severance) and fired if he doesn’t get the hint. Four in 10 get below-average reviews that damage the relationship with the boss and make internal mobility next to impossible. Four more are tagged with the label of mediocrity, and, finally, one of those 10 gets a good review and a “performance-based” bonus… which is probably less than he feels he deserved, because he had to play mad politics to get it. Everyone’s unhappy, and no one is comfortable. That is, in fact, the point of such systems: to keep people in a state of discomfort.

The root idea here is that Comfort Is Bad. The idea is that if people feel comfortable at work, they’ll become complacent, but that if they’re intimidated just enough, they’ll become hard workers. In the short term, there’s some evidence that this sort of motivation works. People will stay at work for an additional two hours in order to avoid missing a deadline and having an annoying conversation the next day. In the long term, it fails. For example, open-plan offices, designed to use social discomfort to enhance productivity, actually reduce it by 66 percent. Hammer on someone’s adrenal system, and you get a response for a short while. After a certain point, you get a state of exhaustion and “flatness of affect”. The person doesn’t care anymore.

What’s the reason for this? I think that the phenomenon of learned helplessness is at play. One short-term reliable way to get an animal such as a human to do something is to inflict discomfort, and to have the discomfort go away if the desired work is performed. This is known as negative reinforcement: the removal of unpleasant circumstances in exchange for desired behavior. An example of this known to all programmers is the dreaded impromptu status check: the pointless unscheduled meeting in which a manager drops in, unannounced, and asks for an update on work progress, usually in the absence of an immediate need. Often, this isn’t malicious or intentionally annoying, but comes from a misunderstanding of how engineers work. Managers are used to email clients that can be checked 79 times per day with no degradation of performance, and tend to forget that humans are not this way. That said, the behavior is an extreme productivity-killer, as it costs about 90 minutes per status check. I’ve seen managers do this 2 to 4 times per day. The more shortfall in the schedule, the more grilling there is. The idea is to make the engineer work hard so there is progress to report and the manager goes away quickly. Get something done in the next 24 hours, or else. This might have that effect– for a few weeks. At some point, though, people realize that the discomfort won’t go away in the long term. In fact, it gets worse, because performing well leads to higher expectations, while a decline in productivity (or even a perceived decline) brings on more micromanagement. Then learned helplessness sets in, and the attitude of not giving a shit takes hold. This is why, in the long run, micromanagers can’t motivate shit to stink.

Software engineers are increasingly inured to environments of discomfort and distraction. One of the worst trends in the software industry is the tendency toward cramped, open-plan offices where an engineer might have less than 50 square feet of personal space. This is sometimes attributed to cost savings, but I don’t buy it. Even in Midtown Manhattan, office space only costs about $100 per square foot per year. That’s not cheap, but it’s not expensive enough (for software engineers) to justify the productivity-killing effect of the open-plan office: 50 square feet runs $5,000 per engineer per year, and tripling that space would add only about $10,000– a rounding error next to what an engineer costs, and far less than the value lost to constant interruption.

Discomfort is an especial issue for software engineers, because our job is to solve problems. That’s what we do: we solve other peoples’ problems, and we solve our own. Our job, in large part, is to become better at our job. If a task is menial, we don’t suffer through it, nor do we complain about it or attempt to delegate it to someone else. We automate it away. We’re constantly trying to improve our productivity. Cramped workspaces, managerial status checks, and corrupt project-allocation machinery (as opposed to open allocation) all exist to lower the worker’s social status and create discomfort or, as douchebags prefer to call it, “hunger”. This is an intended effect, and because it’s in place on purpose, it’s also defended by powerful people. When engineers learn this, they realize that they’re confronted with a situation they cannot improve. It becomes a morale issue.

Transient discomfort motivates people to do things. If it’s cold, one puts on a coat. When discomfort recurs without fail, it stops having this effect. At some point, a person’s motivation collapses. What use is it to act to reduce discomfort if the people in charge of the environment will simply recalibrate it to make it uncomfortable again? None. So what motivates people in the long term? See: What Programmers Want. People need a genuine sense of accomplishment that comes from doing something well. That’s the genuine, long-lasting motivation that keeps people working. Typically, the creative and technological accomplishments that revitalize a person and make long-term stamina possible will only occur in an environment of moderate comfort, in which ideas flow freely. I’m not saying that the office should become an opium den, and there are forms of comfort that are best left at home, but people need to feel secure and at ease with the environment– not like they’re in a warzone.

So why does the Discomfort Is Good regime live on? Much of it is just an antiquated managerial ideology that’s poorly suited to convex work. However, I think that another contributing factor is “manager time”. One might think, based on my writing, that I dislike managers. As individuals, many of them are fine. It’s what they have to do that I tend to dislike, but it’s not an enviable job. Managing has higher status but, in reality, is no more fun than being managed. Managers are swamped. With 15 reports, schedules full of meetings, and their own bosses to “manage up”, they are typically overburdened. Consequently, a manager can’t afford to dedicate more than about 1/20 of his working time to any one report. The result of this extreme concurrency (out of accord with how humans think) is that each worker is split into a storyline that only gets 5% of the manager’s time. So when a new hire, at 6 months, is asking for more interesting work or a quieter location, the manager’s perspective is that she “just got here”. Six months times 1/20 is 1.3 weeks. That’s manager time. This explains the insufferably slow progress most people experience in their corporate careers. Typical management expects 3 to 5 years of dues-paying (in manager time, the length of a college semester) before a person is “proven” enough to start asking for things. Most people, of course, aren’t willing to wait 5 years to get a decent working space or autonomy over the projects they take on.

A typical company sees its job as creating a Prevailing Discomfort, so that a manager can play “Good Cop” and grant favors: projects with more career upside, work-from-home arrangements, and more productive working spaces. Immediate managers never fire people; the company does, “after careful review” of performance (in a “peer review” system wherein, for junior people, only managerial assessments are given credence). “Company policy” takes the Bad Cop role. Ten percent of employees must be fired each year because “it’s company policy”. No employee can transfer in the first 18 months because of “company policy”. (“No, your manager didn’t directly fuck you over. We have a policy of fucking over the least fortunate 10% and your manager simply chose not to protect you.”) Removal of the discomfort is to be doled out (by managers) as a reward for high-quality work. However, for a manager to fight to get these favors for reports is exhausting, and managers understandably don’t want to do this for people “right away”. The result is that these favors are given out very slowly, and often taken back during “belt-tightening” episodes, which means that the promised liberation from these annoying discomforts never really comes.

One of the more amusing things about the Discomfort Is Good regime is that it actually encourages the sorts of behaviors it’s supposed to curtail. Mean-spirited performance review systems don’t improve low performers; they create them by turning the unlucky into an immobile untouchable class with an axe to grind, and open-plan offices allow the morale toxicity of disengaged employees to spread at a rapid rate. Actually, my experience has been that workplace slacking is more common in open-plan offices. Why? After six months in open-plan office environments, people learn the tricks that allow them to appear productive while focusing on things other than work. Because such environments are exhausting, these are necessary survival adaptations, especially for people who want to be productive before or after work. In a decent office environment, a person who needed a 20-minute “power nap” could take one. In the open-plan regime, the alternative is a two-hour “zone out” that’s not half as effective.

The Discomfort Is Good regime is as entrenched in many technology startups as in large corporations, because it emerges out of a prevailing, but wrong, attitude among the managerial caste (from which most VC-istan startup founders, on account of the need for certain connections, have come). One of the first things that douchebags learn in Douchebag School is to make their subordinates “hungry”. It’s disgustingly humorous to watch them work to inflict discomfort on others– it’s transparent what they are trying to do, if one knows the signs– and be repaid by the delivery of substandard work product. Corporate America, at least in its current incarnation, is clearly in decline. While it sometimes raises a chuckle to see decay, I thought I would relish this more as I watched it happen. I expected pyrotechnics and theatrical collapses, and that’s clearly not the way this system is going to go. This one won’t go out with an explosive bang, but with the high-pitched hum of irritation and discomfort.


We should pay people not to subordinate


In the very long term, technological society will need to implement a basic income, as soon as full employment becomes untenable. Basic income (BI) is an income paid to all people, with no conditions. Alaska already has a small one, derived from its oil wealth. Full employment will eventually become impossible due to the need for ongoing, intensive, and traditionally unpaid training.

Today, I’m not going to talk about basic income, because we’re probably a couple of decades away from society absolutely needing one, and even farther from seeing one implemented, given the monumental political hurdles such an effort would encounter. Instead, I’m going to talk about right now– January 7, 2013– and something society ought to do in order to maintain our capacity to innovate and prevent a pointless, extreme destruction of human capital.

Peter Thiel has created a program (“20 Under 20”) that pays high-potential young people to skip college, but the entry-level grunt work most people spend the first few years of their careers on is, in my opinion, much more damaging, especially given its indefinite duration. (I don’t think undergraduate college is that damaging at all, but that’s another debate.) There is some busywork in college, and there are a few (but very rare) incompetent professors, but more creativity is lost to the typical workplace’s years-long dues-paying period, which habituates people to subordination, than to any educational program. I do not intend to say that there aren’t problems with schools, but the institutions for which the schools prepare people are worse. At least grading in school is fair. A professor as corrupt and partial in grading as the typical corporate manager would be fired– and professors don’t get fired often.

In terms of expected value (that is, the average performance one would observe given an indefinite number of attempts) the market rewards creativity, which is insubordinate. However, when it comes to personal income, expectancy is completely meaningless, at least for us poors who need a month-to-month income to pay rent. Most people would rather have a guaranteed $100,000 per year than a 1-in-1000 shot (every year) at $500 million, with a 99.9% chance of no income, even though the latter deal has more expectancy in it. Risk-adjusted, people of average means are rewarded for taking stable jobs, which often require subordination.

Technically speaking, people are paid for work, not subordination, but the process that exists to evaluate the work is so corrupt and rife with abuse that it devolves into a game that requires subordination. For a thought experiment, consider what would happen to a typical office worker who, without subversion or deception to hide her priorities, did the following:

  • worked on projects she considers most important, regardless of her manager’s priorities,
  • prioritized her long-term career growth over short-term assignments, and
  • expressed high-yield, creative ideas regardless of their political ramifications.

These activities are good for society, because she becomes better at her job; obviously good for her; and even good for her company. However, this course of action is likely to get her fired. Certainly, there’s enough risk of that to invalidate the major benefit of being an employee, which is stability.

So, in truth, society pays people to be subordinate, and that’s a real problem. In theory, capitalist society pays for valuable work, but the people trusted to evaluate the work inevitably become a source of corruption as they demand personal loyalty (which is rarely repaid in kind) rather than productivity itself. However, the long-term effect of subordination is to cause creative atrophy. To quote Paul Graham, in “You Weren’t Meant to Have a Boss”:

If you’re not allowed to implement new ideas, you stop having them. And vice versa: when you can do whatever you want, you have more ideas about what to do. So working for yourself makes your brain more powerful in the same way a low-restriction exhaust system makes an engine more powerful.

I would take this even farther. I believe that, after a certain age and constellation of conditions, creativity can be lost effectively forever. People who keep their creativity up don’t lose it– and lifelong creative people seem to peak in their 50s or later, which should kill the notion that it’s a property of the young only– but people who fall into the typical corporate slog develop a mindset and conditioning that render them irreversibly dismal. It only seems to take a few years for this to happen. Protecting one’s creativity practically demands insubordination, making it almost impossible to win the corporate ladder and remain creative. This should explain quite clearly the lack of decent leadership our society exhibits.

We should offset this by finding a way to reward people for not subordinating. To make it clear, I’m not saying we should pay people not to work. In fact, that’s a terrible idea. Instead, we should find a repeatable, robust, and eventually universal way to reward people who work in non-subordinate, creative ways, thereby rewarding the skills that our society actually needs, instead of the mindless subordination that complacent corporations have come to expect. By doing this, we can forestall the silent but catastrophic loss that is the wholesale destruction of human creative capital.


IDE Culture vs. Unix philosophy


Even more of a hot topic than programming languages is the integrated development environment, or IDE. Personally, I’m not a huge fan of IDEs. As tools, standing alone, I have no problem with them. I’m a software libertarian: do whatever you want, as long as you don’t interfere with my work. However, here are some of the negatives that I’ve observed when IDEs become commonplace or required in a development environment:

  • the “four-wheel drive problem”. This refers to the fact that an unskilled off-road driver, with four-wheel drive, will still get stuck. The more capable vehicle will simply have him fail in a more inaccessible place. IDEs pay off when you have to maintain an otherwise unmanageable ball of other people’s terrible code. They make unusable code merely miserable. I don’t think there’s any controversy about this. The problem is that, by providing this power, they enable an activity of dubious value: continual development despite abysmal code quality, when improving or killing the bad code should be a code-red priority. IDEs can delay code-quality problems and defer macroscopic business effects, which is good for manageosaurs who like tight deadlines, but only makes the problem worse at the end stage.
  • IDE-dependence. Coding practices that require developers to depend on a specific environment are unforgivable. This is true whether the environment is emacs, vi, or Eclipse. The problem with IDEs is that they’re more likely to push people toward doing things in a way that makes use of a different environment impossible. One pernicious example of this is in Java culture’s mutilation of the command-line way of doing things with singleton directories called “src” and “com”, but there are many that are deeper than that. Worse yet, IDEs enable the employment of programmers who don’t even know what build systems or even version control are. Those are things “some smart guy” worries about so the commodity programmer can crank out classes at his boss’s request.
  • spaghettification. I am a major supporter of the read-only IDE, preferably served over the web. I think that code navigation is necessary for anyone who needs to read code, whether it’s crappy corporate code or the best-in-class stuff we actually enjoy reading. When you see a name, you should be able to click on it and see where that name is defined. However, I’m pretty sure that, on balance, automated refactorings are a bad thing. Over time, the abstractions which can easily be “injected” into code using an IDE turn it into “everything is everywhere” spaghetti code. Without an IDE, the only way to do such work is to write a script to do it (a rough sketch of such a script follows this list). There are two effects this has on the development process. One is that it takes time to make the change: maybe 30 minutes. That’s fine, because the conversation that should happen before a change that will affect everyone’s work should take longer than that. The second is that only adept programmers (who understand concepts like scripts and the command line) will be able to do it. That’s a good thing.
  • time spent keeping up the environment. Once a company decides on “One Environment” for development, usually an IDE with various in-house customizations, that IDE begins to accumulate plugins of varying quality. That environment usually has to be kept up, and that generates a lot of crappy work that nobody wants to do.
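Here, as promised, is a rough sketch of the kind of refactoring script I mean: it renames one identifier across a source tree. The directory name, the identifiers, and the word-boundary regex are assumptions I made up for illustration, not anything from a particular project or IDE.

# A rough sketch of a scripted rename refactoring. Everything specific here
# (the "src" directory, the identifier names, the *.py glob) is hypothetical.
import pathlib
import re

OLD, NEW = "fetchUserRecord", "fetch_user_record"
pattern = re.compile(rf"\b{re.escape(OLD)}\b")  # whole-word matches only
for path in pathlib.Path("src").rglob("*.py"):
    text = path.read_text()
    if pattern.search(text):
        path.write_text(pattern.sub(NEW, text))
        print(f"rewrote {path}")

The friction is the point: writing and reviewing a dozen lines like these forces the half-hour of thought (and conversation) that a one-click IDE refactoring lets people skip.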

This is just a start on what’s wrong with IDE culture, but the core point is that it creates some bad code. So, I think I should make it clear that I don’t dislike IDEs. They’re tools that are sometimes useful. If you use an IDE but write good code, I have no problem with you. I can’t stand IDE culture, though, because I hate hate hate hate hate hate hate hate the bad code that it generates.

In my experience, software environments that rely heavily on IDEs tend to be those that produce terrible spaghetti code, “everything is everywhere” object-oriented messes, and other monstrosities that simply could not be written by a sole idiot. He had help. Automated refactorings that injected pointless abstractions? Despondency infarction frameworks? Despise patterns? Those are likely culprits.

In other news, I’m taking some time to learn C at a deeper level, because as I get more into machine learning, I’m realizing the importance of being able to reason about performance, which requires a full-stack knowledge of computing. Basic fluency in C, at a minimum, is requisite. I’m working through Zed Shaw’s Learn C the Hard Way, and he’s got some brilliant insights not only about C (though I’m not yet in a position to judge those) but about programming itself. In his preamble chapter, he makes a valid point in warning against using an IDE for the learning process:

An IDE, or “Integrated Development Environment” will turn you stupid. They are the worst tools if you want to be a good programmer because they hide what’s going on from you, and your job is to know what’s going on. They are useful if you’re trying to get something done and the platform is designed around a particular IDE, but for learning to code C (and many other languages) they are pointless. [...]
Sure, you can code pretty quickly, but you can only code in that one language on that one platform. This is why companies love selling them to you. They know you’re lazy, and since it only works on their platform they’ve got you locked in because you are lazy. The way you break the cycle is you suck it up and finally learn to code without an IDE. A plain editor, or a programmer’s editor like Vim or Emacs, makes you work with the code. It’s a little harder, but the end result is you can work with any code, on any computer, in any language, and you know what’s going on. (Emphasis mine.)

I disagree with him that IDEs will “turn you stupid”. Reliance on one prevents a programmer from ever turning smart, but I don’t see how such a tool would cause a degradation of a software engineer’s ability. Corporate coding (lots of maintenance work, low productivity, half the day lost to meetings, difficulty getting permission to do anything interesting, bad source code) does erode a person’s skills over time, but that can’t be blamed on the IDE itself. However, I think he makes a strong point. Most of the ardent IDE users are the one-language, one-environment commodity programmers who never improve, because they never learn what’s actually going on. Such people are terrible for software, and they should all either improve, or be fired.

The problem with IDEs is that each corporate development culture customizes the environment, to the point that the cushy, easy coding environment can’t be replicated at home. For someone like me, who doesn’t even like that type of environment, that’s no problem because I don’t need that shit in order to program. But someone steeped in cargo cult programming because he started in the wrong place is going to falsely assume that programming requires an IDE, having seen little else, and such novice programmers generally lack the skills necessary to set one up to look like the familiar corporate environment. Instead, he needs to start where every great programmer must learn some basic skills: at the command-line. Otherwise, you get a “programmer” who can’t program outside of a specific corporate context– in other words, a “5:01 developer” not by choice, but by a false understanding of what programming really is.

The worst thing about these superficially enriched corporate environments is their lack of documentation. With Unix and the command-line tools, there are man pages and how-to guides all over the Internet. This creates a culture of solving one’s own problems. Given enough time, you can answer your own questions. That’s where most of the growth happens: you don’t know how something works, you Google an error message, and you get a result. Most of the information coming back is indecipherable to a novice programmer, but with enough searching, the problem is solved, and a few things are learned, including answers to some questions that the novice didn’t yet have the insight (“unknown unknowns”) to ask. That knowledge isn’t built in a day, but it’s deep. That process doesn’t exist in an over-complex corporate environment, where the only way to move forward is to go and bug someone, and the time cost of any real learning process is at a level that most managers would consider unacceptable.

On this, I’ll crib from Zed Shaw yet again, in Chapter 3 of Learn C the Hard Way:

In the Extra Credit section of each exercise I may have you go find information on your own and figure things out. This is an important part of being a self-sufficient programmer. If you constantly run to ask someone a question before trying to figure it out first then you never learn to solve problems independently. This leads to you never building confidence in your skills and always needing someone else around to do your work. The way you break this habit is to force yourself to try to answer your own questions first, and to confirm that your answer is right. You do this by trying to break things, experimenting with your possible answer, and doing your own research. (Emphasis mine.)

What Zed is describing here is the learning process that never occurs in the corporate environment, and the lack of it is one of the main reasons why corporate software engineers never improve. In the corporate world, you never find out why the build system is set up in the way that it is. You just go bug the person responsible for it. “My shit depends on your shit, so fix your shit so I can run my shit and my boss doesn’t give me shit over my shit not working for shit.” Corporate development often has to be this way, because learning a typical company’s incoherent in-house systems doesn’t provide a general education. When you’re studying the guts of Linux, you’re learning how a best-in-class product was built. There’s real learning in mucking about in small details. For a typically mediocre corporate environment that was built by engineers trying to appease their managers, one day at a time, the quality of the pieces is often so shoddy that not much is learned in truly comprehending them. It’s just a waste of time to deeply learn such systems. Instead, it’s best to get in, answer your question, and get out. Bugging someone is the most efficient and best way to solve the problem.

It should be clear that what I’m railing against is the commodity developer phenomenon. I wrote about “Java Shop Politics” last April, which covers a similar topic. I’m proud of that essay, but I was wrong to single out Java as opposed to, e.g. C#, VB, or even C++. Actually, I think any company that calls itself an “<X> Shop” for any language X is missing the point. The real evil isn’t Java the language, as limited as it may be, but Big Software and the culture thereof. The true enemy is the commodity developer culture, empowered by the modern bastardization of “object-oriented programming” that looks nothing like Alan Kay’s original vision.

In well-run software companies, programs are built to solve problems, and once the problem is solved, the program is Done. The program might be adapted in the future, and may require maintenance, but that’s not an assumption. There aren’t discussions about how much “headcount” to dedicate to ongoing maintenance after completion, because that would make no sense. If people need to modify or fix the program, they’ll do it. Programs solve well-defined problems, and then their authors move on to other things– no God Programs that accumulate requirements, but simple programs designed to do one thing and do it well. The programmer-to-program relationship must be one-to-many. Programmers write programs that do well-defined, comprehensible things well. They solve problems. Then they move on. This is a great way to build software “as needed”, and the only problem with this style of development is that the importance of small programs is hard to micromanage, so managerial dinosaurs who want to track efforts and “headcount” don’t like it much, because they can never figure out who to scream at when things don’t go their way. It’s hard to commoditize programmers when their individual contributions can only be tracked by their direct clients, and when people can silently be doing work of high importance (such as making small improvements to the efficiency of core algorithms that reduce server costs). The alternative is to invert the programmer-to-program relationship: make it many-to-one. Then you have multiple programmers (now a commodity) working on Giant Programs that Do Everything. This is a terrible way to build software, but it’s also the one historically favored by IDE culture, because the sheer work of setting up a corporate development environment is enough that it can’t be done too often, and this leads managers to desire Giant Projects and a uniformity (such as a one-language policy; see again why “<X> Shops” suck) that managers like but that often makes no sense.

The right way of doing things– one programmer works on many small, self-contained programs– is the core of the so-called “Unix philosophy”. Big Programs, by contrast, invariably have undocumented communication protocols and consistency requirements whose violation leads not only to bugs, but to pernicious misunderstandings that muddle the original conceptual integrity of the system, resulting in spaghetti code and “mudballs”. The antidote is for single programs themselves to be small, and for large problems to be solved by systems of such programs– systems that are given the respect (such as attention to fault tolerance) that, as systems, they deserve.
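For a concrete picture of what “small and self-contained” means, here is a sketch of a program in that style: it reads text on stdin, writes one result per line on stdout, and composes with other tools through pipes. The task itself (word-frequency counting) is just an illustration I chose, not an example drawn from the systems discussed above.

#!/usr/bin/env python3
# A sketch of a do-one-thing-well filter: count word frequencies on stdin,
# print "count<TAB>word" lines on stdout. Composes via pipes, for example:
#   cat *.txt | python3 wordfreq.py | head -20
import sys
from collections import Counter

def main() -> None:
    counts = Counter(word for line in sys.stdin for word in line.split())
    for word, count in counts.most_common():
        print(f"{count}\t{word}")

if __name__ == "__main__":
    main()

Nothing in it knows about anything else; when the problem changes, you write another small program rather than growing this one into a Big Program.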

Are there successful exceptions to the Unix philosophy? Yes, there are, but they’re rare. One notable example is the database, because these systems often have very strong requirements (transactions, performance, concurrency, durability, fault-tolerance) that cannot be as easily solved with small programs and organic growth alone. Some degree of top-down orchestration is required if you’re going to have a viable database, because databases have a lot of requirements that aren’t typical business cruft, but are actually critically important. Postgres, probably the best SQL database out there, is not a simple beast. Indeed, databases violate one of the core tenets of the Unix philosophy– store data in plain text– and they do so for good reasons (storage efficiency). Databases also mandate that people be able to use them without having to keep up with the evolution of such a system’s opaque and highly-optimized internal details, which makes the separation of implementation from interface (something that object-oriented programming got right) a necessary virtue. Database connections, like file handles, should be objects (where “object” means “something that can be used with incomplete knowledge of its internals”). So databases, in some ways, violate the Unix philosophy, and yet are still used by staunch adherents. (We admit that we’re wrong sometimes.) I will also remark that it has taken decades for some extremely intelligent (and very well-compensated) people to get databases right. Big Projects win when no small project or loose federation thereof will do the job.

My personal belief is that almost every software manager thinks he's overseeing one of the exceptions: a Big System that will grow to such importance that people will just swallow the complexity and use the thing, the way they do with Postgres or the Linux kernel. In almost all cases, they are wrong. Corporate software is an elephant graveyard of such over-ambitious systems. Exceptions to the Unix philosophy are extremely rare. Your ambitious corporate system is almost certainly not one of them. Furthermore, if most of your developers– or even a solid quarter of them– are commodity developers who can't code outside of an IDE, you haven't a chance.


A humorous note about creationism and snakes.


This isn’t one of my deeper posts. It’s just something I find amusing regarding a cultural symbol, especially in the context of Biblical creationism. One of the core stories of the Bible is the temptation of Eve by a serpent who brought her to disobey God. In other words, sin came into the world because of a snake. The Garden of Eden wasn’t perfect, because one animal was bad and woman was weak. This myth’s origins go back to Sumer, but that’s irrelevant to this observation. The question is: why a snake? Why was this animal, out of all of dangerous creatures out there, chosen as the symbol of sin?

Snakes are carnivores, but so are most of the charismatic megafauna, such as tigers, eagles, and wolves. Yet few of those seem to inspire the reflexive fear that snakes do. Many of these animals are more dangerous to us than snakes, yet we view lions and hawks with awe, not disgust or dread.

The most likely answer is not what creationists would prefer: it's evolution that leads us to view snakes in such a way. Most land mammals– even large ones, to whom most species of snake are harmless– seem to have some degree of fear of snakes, and humans are no exception. Most religions have strong views of this animal– some positive and reverent, but many negative. Why? Going back as much as a hundred million years, when our mammalian ancestors were mostly rodent-like in size, snakes were among their primary predators. A fear of swift, legless reptiles was an evolutionary advantage. Seeing one often meant you were about to die.

We don’t have this fear of lions or tigers because such creatures aren’t that old. Large cats have only been with us for a few million years, during which time we were also large and predatory, so there’s a mutual respect between us. Snakes and mammals, on the other hand, go way back.

Related to this is the legend of the dragon. No one can prove this, obviously, but the concept of a dragon seems to have emerged out of our “collective unconscious” as mammals. We have to go back 65 million years to find creatures that were anything like dragons, but a large number of cultures have independently invented such a mythical creature: a cocktail of small mammalian terrors (reptiles, raptors, fire, venom) coming from a time when we were small and probably defenseless prey creatures.

The key to understanding long-standing myths and symbols such as Biblical creation turns out, ironically enough, to be evolution. Serpents ended up in our creation myths because, after all this time, we haven't gotten over what they did to us 100 million years ago.


Learning C, reducing fear.


I have a confession to make. At one point in my career, I was a mediocre programmer. I might say that I still am, but only in the sense that I'm a harsh grader. I developed a scale for software engineering on which I can only, in intellectual honesty, assign myself 1.8 points out of a possible 3.0. One of the signs of my mediocrity is that I haven't a clue about many low-level programming details that, thirty years ago, people dealt with on a regular basis. I know what L1 and L2 cache are, but I haven't yet built the skill set to make use of this knowledge.

I love high-level languages like Scala, Clojure, and Haskell. The abstractions they provide make programming more productive and fun than it is in a language like Java or C++, and the languages have a beauty that I appreciate as a designer and mathematician. Yet there is still quite a place for C in this world. Last July, I wrote an essay, “Six Languages to Master”, in which I advised young programmers to learn the following languages:

  • Python, because one can get started quickly and Python is a good all-purpose language.
  • C, because there are large sections of computer science that are inaccessible if you don’t understand low-level details like memory management.
  • ML, to learn taste in a simple language often described as a “functional C” that also teaches how to use type systems to make powerful guarantees about programs.
  • Clojure, because learning about language (which is important if one wants to design good interfaces) is best done with a Lisp and because, for better or for worse, the Java libraries are a part of our world.
  • Scala, because it’s badass if used by people with a deep understanding of type systems, functional programming, and the few (very rare) occasions where object-oriented programming is appropriate. (It can be, however, horrid if wielded by “Java-in-Scala” programmers.)
  • English (or the natural language of one’s environment) because if you can’t teach other people how to use the assets you create, you’re not doing a very good job.

Of these, C was my weakest at the time. It still is. Now, I’m taking some time to learn it. Why? There are two reasons for this.

  • Transferability. Scala’s great, but I have no idea if it will be around in 10 years. If the Java-in-Scala crowd adopts the language without upgrading its skills and the language becomes associated with Maven, XMHell, IDE culture, and commodity programmers, in the way that Java has, the result will be piles of terrible Scala code that will brand the language as “write-only” and damage its reputation for reasons that are not Scala’s fault. These sociological variables I cannot predict. I do, however, know that C will be in use in 10 years. I don’t mind learning new languages– it’s fun and I can do it quickly– but the upshot of C is that, if I know it, I will be able to make immediate technical contributions in almost any programming environment. I’m already fluent in about ten languages; might as well add C. 
  • Confidence. High-level languages are great, but if you develop the attitude that low-level languages are “unsafe”, ugly, and generally terrifying, then you're hobbling yourself for no reason. C has its warts, and there are many applications where it's not appropriate. It requires attention to details (array bounds, memory management) that high-level languages handle automatically. The issue is that, in engineering, anything can break down, and you may be required to solve problems in the depths of detail. Your beautiful Clojure program might have a performance problem in production because of an issue with the JVM. You might need to dig deep and figure it out. That doesn't mean you shouldn't use Clojure. However, if you're scared of C, you can't study the JVM internals or performance considerations, because a lot of the core concepts (e.g. memory allocation) become a “black box”. Nor will you be able to understand your operating system. (A small sketch of the kind of bookkeeping involved follows this list.)
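
For a sense of what those details look like in practice, here is a minimal, illustrative C sketch of the bookkeeping that a garbage-collected language does for you: allocation, bounds, growth, and release are all the programmer's problem. The buffer sizes and strings are arbitrary.

    /* buffer_demo.c -- the bookkeeping C makes explicit: allocation size,
     * bounds, and ownership. A language like Clojure or Python does all of
     * this behind the scenes.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t capacity = 8;
        char *buf = malloc(capacity);           /* explicit allocation          */
        if (buf == NULL) {
            return 1;
        }
        strcpy(buf, "hi");                      /* staying in bounds is on us   */

        capacity *= 2;
        char *bigger = realloc(buf, capacity);  /* growing is a manual step     */
        if (bigger == NULL) {
            free(buf);
            return 1;
        }
        buf = bigger;
        strcat(buf, ", world");                 /* 10 bytes used, 16 available  */

        printf("%s (capacity %zu)\n", buf, capacity);
        free(buf);                              /* forgetting this is a leak    */
        return 0;
    }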

For me, personally, the confidence issue is the more important one. In the functional programming community, we often develop an attitude that the imperative way of doing things is ugly, unsafe, wrong, and best left to “experts only” (which is ironic, because most of us are well into the top 5% of programmers, and more equipped to handle complexity than most; it's this adeptness that makes us aware of our own limitations and prefer functional safeguards when possible). Or perhaps I shouldn't call this a prevailing attitude so much as an artifact of communication. Fifty-year-old, brilliant functional programmers talk about how great it is to be liberated from evils like malloc and free. They're right, for applications where high-level programming is appropriate. The context being missed is that they have already learned about memory management quite thoroughly, and now it's an annoyance to them to keep having to do it. That's why they love languages like OCaml and Python. It's not that low-level languages are dirty or unsafe or even “un-fun”, but that high-level languages are just much better suited to certain classes of problems.

Becoming the mentor

I’m going to make an aside that has nothing to do with C. What is the best predictor of whether someone will remain at a company for more than 3 years? Mentorship. Everyone wants “a Mentor” who will take care of his career by providing interesting work, freedom from politics, necessary introductions, and well-designed learning exercises instead of just-get-it-done grunt work. That’s what we see in the movies: the plucky 25-year-old is picked up by the “star” trader, journalist, or executive and, over 97 long minutes, his or her career is made. Often this relationship goes horribly wrong in film, as in Wall Street, wherein the mentor and protege end up in a nasty conflict. I won’t go so far as to call this entirely fictional, but it’s very rare. You can find mentors (plural) who will help you along as much as they can, and should always be looking for people interested in sharing knowledge and help, but you shouldn’t look for “The Mentor”. He doesn’t exist. People want to help those who are already self-mentoring. This is even more true in a world where few people stay at a job for more than 4 years.

I’ll turn 30 this year, and in Silicon Valley that would entitle me to a lawn and the right to tell people to get off of it, but I live in Manhattan so I’ll have to keep using the Internet as my virtual lawn. (Well, people just keep fucking being wrong. There are too many for one man to handle!) One of the most important lessons to learn is the importance of self-mentoring. Once you get out of school where people are paid to teach you stuff, people won’t help people who aren’t helping themselves. To a large degree, this means becoming the “Mentor” figure that one seeks. I think that’s what adulthood is. It’s when you realize that the age in which there were superior people at your beck and call to sort out your messes and tell you what to do is over. Children can be nasty to each other but there are always adults to make things right– to discipline those who break others’ toys, and replace what is broken. The terrifying thing about adulthood is the realization that there are no adults. This is a deep-seated need that the physical world won’t fill. There’s at least 10,000 recorded years of history that shows people gaining immense power by making “adults-over-adults” up, and using the purported existence of such creatures to arrogate political power, because most people are frankly terrified of the fact that, at least in the observable physical world and in this life, there is no such creature.

What could this have to do with C? Well, now I dive back into confessional mode. My longest job tenure (30 months!) was at a startup that seems to have disappeared after I left. I was working in Clojure, doing some beautiful technical work. This was in Clojure’s infancy, but the great thing about Lisps is that it’s easy to adapt the language to your needs. I wrote a multi-threaded debugger using dynamic binding (dangerous in production, but fine for debugging) that involved getting into the guts of Clojure, a test harness, an RPC client-server infrastructure, and a custom NoSQL graph-based database. The startup itself wasn’t well-managed, but the technical work itself was a lot of fun. Still, I remember a lot of conversations to the effect of, “When we get a real <X>”, where X might be “database guy” or “security expert” or “support team”. The attitude I allowed myself to fall into, when we were four people strong, was that a lot of the hard work would have to be done by someone more senior, someone better. We inaccurately believed that the scaling challenges would mandate this, when in fact, we didn’t scale at all because the startup didn’t launch.

Business idiots love real X’s. This is why startups frequently develop the social-climbing mentality (in the name of “scaling”) that makes internal promotion rare. The problem is that this “realness” is total fiction. People don’t graduate from Expert School and become experts. They build a knowledge base over time, often by going far outside of their comfort zones and trying things at which they might fail, and the only things that change are that the challenges get harder, or the failure rate goes down. As with the Mentor that many people wait for in vain, one doesn’t wait to “find a Real X” but becomes one. That’s the difference between a corporate developer and a real hacker. The former plays Minesweeper (or whatever Windows users do these days) and waits for an Expert to come from on high to fix his IDE when it breaks. The latter shows an actual interest in how computers really work, which requires diving into the netherworld of the command line interface.

That’s why I’m learning C. I’d prefer to spend much of my programming existence in high-level languages and not micromanaging details– although, this far, C has proven surprisingly fun– but I realize that these low-level concerns are extremely important and that if I want to understand things truly, I need a basic fluency in them. If you fear details, you don’t understand “the big picture”. The big picture is made up of details, after all. This is a way to keep the senescence of business FUD at bay– to not become That Executive who mandates hideous “best practices” Java, because Python and Scala are “too risky”.

Fear of databases? Of operating systems? Of “weird” languages like C and Assembly? Y’all fears get Zero Fucks from me.



“Job hopping” is often fast learning and shouldn’t be stigmatized


One of the traits of the business world that I'd love to see die, even if it were to take people with it, is the stigma against “job hopping”. I don't see how people can fail to see that this is oppression, plain and simple. The stigma exists to deny employees the one bit of leverage they have, which is to leave a job and get a better one.

The argument made in favor of the stigma is that (a) companies put a lot of effort into training people, and (b) very few employees earn back their salary in the first year. Let me address both of these. For the first, about the huge amount of effort put into training new hires: that's not true. It may have been the case in 1987, but not anymore. There might be an orientation lasting a week or two, or an assigned mentor who sometimes does a great job and is sometimes absentee, but the idea that companies still invest significant resources into new hires as a general policy is outdated. They don't. Sink or swim is “the new normal”. The companies that are exceptions to this will sometimes lose a good person, but they don't have the systemic retention problems that leave them wringing their hands about “job hoppers”. For the second, this is true in some cases and not in others, but I would generally blame this (at least in technology) on the employer. If someone who is basically competent and diligent spends 6 months at a company and doesn't contribute something– perhaps a time-saving script, a new system, or a design insight– that is worth that person's total employment costs (salary, plus taxes, plus overhead), then there is something wrong with the corporate environment. Perhaps he's been loaded up with fourth-quadrant work of minimal importance. People should be able to start making useful contributions right away, and if they can't, then the company needs to improve the feedback cycle. That will make everyone happier.

The claimed corporate perspective on “job hoppers” is that they're a money leak, because they cost more than they are worth in the early years. However, that's not true. I'd call it an out-and-out lie. Plenty of companies can pay young programmers market salaries and turn a profit. In fact, companies doing low-end work (which may still be profitable) often fire older programmers to replace them with young ones. What this means is that the fiction of new hires being worth less than a market salary holds no water. Actually, employing decent programmers (young or old, novice or expert) at market compensation is an enormously profitable position. (I'd call it an “arbitrage”, but having done real arbitrage I prefer to avoid this colloquialism.)

The first reason why companies don't like “job hoppers” is not that new hires are incapable of doing useful work, but that companies intentionally prevent new people from doing useful work. The “dues paying” period is evaluative. The people who fare poorly or make their dislike of the low-end work obvious are the failures who are either fired or, more likely, given the indication that they won't graduate to better things, which compels them to leave– but on the company's terms. The dues-paying period leaks at the top. In actuality, it always did. It just leaks in a different way now. In the past, the smartest people would become impatient and bored with the low-yield, evaluative nonsense, just as they do now, but they were less able to change companies. They'd lose motivation, and start to underperform, leaving the employer feeling comfortable with the loss. (“Not a team player; we didn't want him anyway.”) In the “job hopping” era, they leave before the motivational crash, while there is still something to be missed.

The second problem that companies have with “job hoppers” is that they keep the market fluid and, additionally, transmit information. Job hoppers are the ones who tell their friends at the old company that a new startup is paying 30% better salaries and runs open allocation. They not only grab external promotions for themselves when they “hop”, but they learn and disseminate industry information that transfers power to engineers.

I’ve recently learned first-hand about the fear that companies have of talent leaks. For a few months last winter, I worked at a startup with crappy management, but excellent engineers, and I left when I was asked to commit perjury against some of my colleagues. (No, this wasn’t Google. First, Google is not a startup. Second, Google is a great company with outdated and ineffective HR but well-intended upper management. This, on the other hand, was a company with evil management.) There’s a lot of rumor surrounding what happened and, honestly, the story was so bizarre that even I am not sure what really went on. I won the good faith of the engineers by exposing unethical management practices, and became somewhat of a folk hero. I introduced a number of their best engineers to recruiters and helped them get out of that awful place. Then I moved on, or so I thought. Toward the end of 2012, I discovered that their Head of Marketing was working to destroy my reputation (I don’t know if he succeeded, but I’ve seen the attempts) inside that company by generating a bunch of spammy Internet activity, and attempting to make it look like it was me doing it. He wanted to make damn sure I couldn’t continue the talent bleed, even though my only interaction was to introduce a few people (who already wanted to leave) to recruiters. These are the extents to which a crappy company will go to plug a talent hole (when those efforts would be better spent fixing the company).

Finally, “job hopping” is a slight to a manager's ego. Bosses like to dump on their own terms. After a few rounds of the “It's not you, it's me” talk from reports who then go on to better jobs than the boss's own, managers develop a general distaste for these “job hoppers”.

These are the real reasons why there is so much dislike for people who leave jobs. The “job hopper” isn’t stealing from the company. If a company can employ a highly talented technical person for 6 months and not profit from the person’s work, the company is stealing from itself.

All this said, I wouldn’t be a fan of someone who joined companies with the intention of “hopping”, but I think very few people intend to run their careers that way. I have the resume of a “job hopper”, but when I take a job, I have no idea whether I’ll be there for 8 months or 8 years. I’d prefer the 8 years, to be honest. I’m sick of having to change employers every year, but I’m not one to suffer stagnation either.

My observation has been that most “job hoppers” are people who learn rapidly, and become competent at their jobs quickly. In fact, they enter jobs with a pre-existing and often uncommon skill set. Most of the job hoppers I know would be profitable to hire as consultants at twice their salary. Because they're smart, they learn fast and quickly outgrow the roles that their companies expect them to fill. They're ready to move beyond the years-long dues-paying period at two months, but often can't. Because they leave once they hit this political wall, they keep growing instead of stagnating, and they become extremely competent.

The idea that the “job hopper” is an archetype of Millennial “entitlement” is one I find ridiculous. Actually, we should blame this epidemic of job hopping on efficient education. How so? Fifty years ago, education was much more uniform and, for the smartest people, a lot slower than it is today. Honors courses were rare, and for gifted students to be given extra challenges was uncommon. This was true within as well as between schools. Ivy League mathematics majors would encounter calculus around the third year of college, and the subjects that are now undergraduate staples (real analysis, abstract algebra) were solidly graduate-level. There were a few high-profile exceptions who could start college at age 14 but, for most people, being smart didn’t result in a faster track. You progressed at the same pace as everyone else. This broke down for two reasons. First, smart people get bored on the pokey track, and in a world that’s increasingly full of distractions, that boredom becomes crippling. Second, the frontiers of disciplines like mathematics are now so far out and specialized that society can’t afford to have the smartest people dicking around at quarter-speed until graduate school.

So we now have a world with honors and AP courses, and with the best students taking real college courses by the time they’re in high school. College is even more open. A freshman who is intellectually qualified to take graduate-level courses can do it. That’s not seen as a sign of “entitlement”. It’s encouraged.

This is the opposite of the corporate system, which has failed to keep up with modernity. A high-potential hire who outgrows starter projects and low-yield, dues-paying grunt work after 3 months does not get to skip the typical 1-2 years of it just because she’s not learning anything from it. People who make that move, either explicitly by expressing their boredom, or simply by losing motivation and grinding to a halt, often end up getting fired. You don’t get to skip grades in the corporate world.

Unfortunately, companies can’t easily promote people fast, because there are political problems. Rapid promotion, even of a person whose skill and quick learning merit it, becomes a morale problem. Companies additionally have a stronger need to emphasize “the team” (as in, “team player”, as much as I hate that phrase) than schools. In school, cheating is well-defined and uncommon, so individualism works. At work, where the ethical rules are often undefined, group cohesion is often prioritized over individual morale, as individualism is viewed as dangerous. This makes rapid promotion of high-potential people such a political liability that most companies don’t even want to get involved. Job hoppers are the people who rely on external promotion because they often grow faster than it is politically feasible for a typical corporation to advance them.

For all that is said against job hoppers, I know few who intentionally wish to “hop”. Most people will stay with a job for 5 years, if they continue to grow at a reasonable pace. The reason they move around so much is that they rarely do. So is it worth it to hire job hoppers, given the flight risk associated with top talent? I would say, without hesitation, “yes”. Their average tenures “of record” are short, but they tend to be the high-power contributors who get a lot done in a short period of time. Also, given that such a person might become a long-term employee if treated well, delivering both top talent and longevity, I'd say there's a serious call option here.

“Job hopping” shouldn't be stigmatized, because it's the corporate system that's broken. Most corporate denizens spend most of their time on low-yield make-work that isn't important, but largely exists because of managerial problems or is evaluative in purpose. The smart people who figure out quickly that they're wasting their time tend to want to move on to something better. Closed-allocation companies make internal movement extremely difficult and as politically rickety as a promotion, so often these people want to leave. Without the “job hopping” stigma, they'd be able to quit and leave when this happens, but that reputation risk encourages them, instead, to quit and stay. For companies, this is much worse.

 


Leadership: everyone is a main character


I tried to be a creative writer when I was younger. I was not very good at it. On the technical aspects of writing, I was solid. My problem was that I didn't understand people well enough to write a convincing character. Social adeptness isn't a prerequisite for being a strong writer, but a basic understanding of how most people think is necessary, and I didn't have that. One thing I remember, and I think my mother was the first to say it to me, is that every character thinks of him- or herself as a main character. That's what makes Rosencrantz and Guildenstern Are Dead (the play that begins by ribbing every Bayesian in the audience) great. In Hamlet, these guys are a sideshow, and can be disregarded as such (they're symbolic of the prince's sunnier past, now betrayers and otherwise irrelevant). From their own perspective, they aren't. Tom Stoppard's play reminds us that what's remembered as one story, told in truth by its main protagonist, is actually one of a thousand possible, related but very different, tales.

I’ve worked in an ungodly number of companies for someone my age, and I’ve seen leadership done well and not-well, but only in the past 24 months have I really started to understand it. Why can some people lead, and others not? Why was I once so bad at leading other people? I think the core insight is the above. To lead people, you have to understand them and what they want, which usually requires recognition of the fact that they don’t see themselves as followers innately. They’ll follow, if they can lead in other ways.

I don’t intend for this to devolve into an “everyone is special” claim. That’s bullshit. I could say, “everyone is special because every point in a high-dimensional space sees itself as an outlier”, but that would be beside the point. I’m not talking about capability or importance to any specific organization, but perspective only. I think most people are realistic about their importance to the various organizations they inhabit, which is often very low if those organizations are big. However, few people see themselves as supporting actors in life itself.

The problem that a lot of “visionaries” have is that they lose sight of this entirely. They expect others to subsume their lives into an ambitious idea, as they have done themselves, while failing to understand that others aren't going to bring the same dedication unless they can take some ownership of the idea. The visionaries tend to cast themselves as the main players– the saviors, architects, deal-makers and warriors– while most of the others, within that organization, are supporting cast. Since those others will never see themselves as subordinates in truth, there will always be a disconnect that keeps them from willingly putting their entire being into the vision or organization. They'll subordinate a little bit of work, if they get paid for it. They won't subordinate themselves. Not for real.

This bias– toward viewing others as supporting actors– is dangerous. The problem isn't only that it can lead one to mistreat other people. Worse than that, it can lead to systematically underestimating them, being underprepared for them to pursue their own agendas, and feeling a sense of betrayal when that happens. The worst thing about “visionaries”, in my view, isn't how they treat their supporters. Some are good in that way, and some are not, but only a couple that I've known were reliably indecent. The problem with them, instead, is that they tend to be easy prey for sycophantic subordinates who often have bad intentions. They see the “golden child” as a docile, eager protege or a lesser version of themselves, when often that person is an exploitative sociopath who knows how to manipulate narcissists. Visionaries tend to see their organizations as benevolent dictatorships with “no politics” (it's never politics to them because they're in charge) while their organizations are often riddled with young-wolf conflicts that they've inadvertently encouraged.

I don’t know what the right answer is here. The word “empathy” comes to mind, but that’s not what I mean. Reading emotions is one skill, but sometimes I wonder if it’s even possible to really “get” (as in understand) another person. I don’t believe that, as humans, we really have the tools for that. Alternatively, people often talk about putting themselves “in someone else’s shoes” (I could do a Carlin-esque rant about the colloquialism, but I’ll skip it) but that is extremely difficult to do, because most people, when they attempt to do so, are heavily influenced by their own biases about that person.

Likewise, I can’t claim in all honesty that any of this is my strong suit. Technical, intellectual, and cultural leadership are a 10/10 fit with my skill set and talents. On the other hand, leadership of people is still a major area for development. The somewhat cumbersome store of knowledge (and, I make no pretenses here, I do not claim the skills necessary to apply it) that I have on the topic is not from preternatural excellence (trust me, I don’t have such) but from watching others fail so badly at it. I owe everything I know on the topic to those who, unlike me, were promoted before they outgrew their incompetence. So there are many questions here where I don’t know the answers, and some where I don’t even know the right questions, but I think I know where to begin, and that goes back to what every novelist must keep in mind: everyone is, from his or her own perspective, a main character.


Psychopathy and superficial reliability

$
0
0

Lord Acton says: judge talent at its best and character at its worst. This is a wise principle, yet it fails us miserably when misapplied, as it often is in modern society. Why is that? The world is large, so our knowledge of each other is extremely sparse. We often lack the information necessary to judge either talent or character well. The consequence of information sparsity in the judgment of talent is the existence of celebrity. It's better to have everyone know that you're a 6 than to be a 10 in secret. This itself is not so dangerous, but the contest for visibility, even in supposed meritocracies like the software industry, gets destructive quickly. Even in small companies, more effort is often expended to gain control of the division of labor (thus, one's own visibility and reputation) than is spent actually completing the work. The fact that awful people are excellent at office politics is so well-known that it requires no documentation. It becomes visible within the first 6 months of one's working life. This makes the assessment of character as important as the judgment of skill and talent. Is the guy with the flashy resume a legitimate 99.99th-percentile talent, or a degenerate politicker and credit-taker who managed to acquire credibility? Reference checking is supposed to solve that, and it doesn't work. I'll get to that a little bit later.

Information sparsity in the assessment of talent is a known danger, but I tend to see it as a short-term and minor threat. There's probably an eventual consistency to it. Over time, people should converge to levels of challenge, responsibility, and influence commensurate with their ability. More dangerous, and infinitely more intractable, is the information sparsity that pertains to character. People tend to overestimate, by far, their ability to judge other people's ethical mettle. In fact, the vast majority of them are easy to hack, and their excessive confidence in their own assessment is, in truth, easily used against them by the bad actors.

This problem is pretty much impossible to solve. Most people know from experience that the worst people– the psychopaths– are superficially charming, which means that personal impressions are of low value. What about getting access to the person’s history? In employment, that’s what reference checks are for, but shady characters often have great references. Why? Because they lie, extort, and manipulate people until their histories become not only socially acceptable but outright attractive. They hack people with as much skill and malice as the worst black-hat “crackers”. The people who are harmed by intensive reference checks are honest people with difficult histories, not the degenerate and dishonest who are the real threat.

My experience is that people lack the tools to judge others for character, at least at scale. Any fair punitive structure is predictable, and the most skilled of the bad actors will adapt. Any unpredictable punitive structure will be unfair, and rely on decisions made by influential humans, who are more likely than average to be psychopaths, and will certainly have psychopathic courtiers (whom the powerful person has not yet detected). The best one can do is to judge people by their actions, and to punish bad deeds swiftly and objectively. This is not a trivial art, of course.

Laws and imprisonment serve this punitive purpose, but most of the people in our jails are impulsive people of low social class, with only moderate overlap between the imprisoned population and the psychopaths. In employment, there’s a naive hope that, while psychopaths can climb high within corporations, they will eventually be unable to escape their histories and be flushed out of respectable careers. It never happens that way. Moral degenerates don’t get blacklisted. They acquire power and do the blacklisting.

One acquired strategy for dealing with such people is “Distrust everyone”. That’s how most seasoned managers and executives, having been robbed a couple times by dishonest subordinates, tend to view the people below them– with implicit, prevailing distrust. That strategy fails especially badly. Why? First, there are degrees of trust and distrust. Becoming a managerial favorite (managers are not always psychopaths, but managerial favorites almost always are) simply requires superiority in relative trust, not any level of absolute trust. Second, it’s functionally impossible to get a complex job done (much less lead a team) with prevailing total distrust of everyone, so people who “distrust everyone” are desperate for people they can give partial trust. Psychopaths play people with that attitude quite easily. It’s not even work for them. A boss who thinks his subordinates are all morons is surprisingly easy to hack.

The conclusion of all this is that, in defending scalable institutions such as corporations against psychopaths, we’re basically helpless. We don’t have the tools to detect them based on affability or social proof, and any strategy that we devise to deal with them, they will subvert to their own ends. We can’t “beat” them when they look exactly like us and will be undetected until it’s too late. Our best shot is not to attract them, and to avoid engaging in behaviors that make our institutions and patterns most easily hackable.

Despite our complete lack of ability to assess individuals for character at scale, we develop metrics for doing so that often not only fail us, but become tools of the psychopath. A going assumption that people make is that the small is indicative of the large. If Fairbanks is chilly in the summer, it must be frigid in the winter. (This applies to most climates, but not to San Francisco.) People who make occasional misspellings in email must be stupid. People who have mediocre accomplishments (by adult standards) at young ages are destined for adult brilliance. People who regularly take 75-minute lunches are “time-stealing” thieves.

Talent is judged in the workplace based on minor accomplishments, largely because there are so few opportunities for major accomplishment, and those are only available to the well-established. The guy who reliably hits a “6” is judged to be capable of the “9” (see: Peter Principle) while the one who gets bored and starts dropping “5”s is flushed out. Character is judged, similarly, based on useless and minor signals. The person who regularly arrives at 9:00, never says the wrong thing, and projects the image of a “team player” (whatever the fuck that means) gets ahead. What takes the place of character– which, I contend, cannot be assessed at scale and amid the extreme information sparsity of modern society– is superficial reliability. The people who pass what a company thinks are character and “culture fit” assessments are, rather than those of pristine character, the superficially reliable.

Who wins at this game? I wouldn't say that only psychopaths win, but the best players are going to be the psychopaths. The earnestly honest will break rules (formal and informal) to get work done. They care more about doing the right thing than about being perceived the right way. Psychopaths are not by-the-word rule-followers with regard to formal policies, but they always follow the informal social rules (even to the breach of formal and less-powerful informal rules). They figure those rules out the quickest, have few distractions (since they rarely do actual work; that's not what the office game is about!) and, fairly early on, find themselves in the position to make the rules.

Superficial reliability works in favor of the worst people. Why? It evolves into a competition. Once everyone is in the office from 9:00 to 6:00, the new standard becomes 8:00 to 7:00. Then it's 7:00 to 8:00, with expected email checking to 11:00. People start to fail. The noncompliant are the first to drop away and are judged by the organism (the team, management) to have been the least dedicated, so it's not seen as a loss. The next wave of failures are the enervated compliant, who meet the increasingly difficult standards but underperform in other ways. They spend their 13 hours at the office, but start making mistakes. They turn into temporary incompetents, and are flushed out as well. They're not seen as a loss either. “We have a tough culture here.” As those burn off, people who were formerly at the center of the bell curve (in reliability, status, and performance) are now on the fringe, which means that there's an atypically large set of people on the bubble, generating a culture of anxiety. They become catty and cutthroat now that the middle is no longer a safe place to be. People fight, and some come out of it looking so terrible that their reputations are ruined. They leave. Psychopaths rarely enter these contests directly, but evolve into puppet masters and arms dealers, ensuring that they win regardless of each battle's outcome. Soon, the psychopath has entrenched himself as a key player in the organization. He's not doing most of the work, but he's feared by the actual stars, enough that they'll let him take credit for their work and (in management's eyes) become one.

Most reliability contests work this way. There's some performance metric where the bottom few percent justly deserve to be fired. As a limited measure, such a “sweep” is not a bad idea. (“Let's stop paying the people who never show up.”) Management, however, is neither measured nor limited. It's faddish, impulsive, absolute, and excessive. Whatever policy is used to separate out the true underperformers (about 2%) must also be used to “stack rank” the other 98 percent. It's no longer enough to enforce an honest 8-hour day; we must require an 11-hour day. This overkill damages the work environment and culture, and psychopaths thrive in damaged, opaque, and miserable environments.

Another example is reference checking in employment. The real purpose of the reference check is to discourage the morally average from lying about their histories, and it works. The moral “middle man” at the center of the ethical bell curve would probably lie on his resume given the right incentives, but would stop short of asking 3 friends to support the lie by posing as peers at jobs the person did not hold. Most people won't make that kind of demand of people who aren't close to them, but few people want to be seen as unethical by close colleagues. That is the point where the average person says, “Wait a minute, this might be wrong.” The classic, three-reference check also filters out the honest but undesirable candidates who just can't find three people to recommend their work. It's a reliability test, in that anyone who can't find 3 people in his last 5 years to say good things about him is probably in that bottom 2% who are undesirable hires for that reason alone. Yet, at senior ranks in large companies, reference checking becomes a reliability contest, with 10 to 20 references– including “back channel” references not furnished by the candidate– being required. At that point, you're selecting in favor of psychopaths. Why? Most honest people, playing fair, can't come up with 20 references, nor have they engaged in the intimidation and extortion necessary to pass an intensive “back-channel” reference check in a world where even a modestly positive reference means no-hire. It's those honest people who fail those cavity searches. A psychopath with no qualms about dishonesty and extortion can furnish 50 references. Beyond the “classic 3”, reference checking actually selects for psychopathy.

Why do psychopaths never fail out, even of reliability contests designed to cull those of low character? The answer is that they have a limited emotional spectrum, and don't feel most varieties of emotional pain, which makes them exceptionally good at such contests. They don't become upset with themselves when they produce shoddy work– instead, they plan a blame strategy– so they don't mind 15-hour days. (Office politics is a game for them, and one they love to play, so long hours don't bother them.) They are emotionally immune to criticism as well. While they care immensely about their status and image, they have no reason to fear being dressed down by people they respect– because they don't actually respect anyone. While psychopaths seem to despise losing, given the awful things they will do to avoid a minimal loss, even defeat doesn't faze them for long. (This perception of psychopaths is erroneous; when we see them doing awful things to avoid minor losses, we assume they must have a desperate hatred of losing, because it would take extreme circumstances for us to do such bad things. In truth, the difference is that they have no internal resistance against bad action.) Losses do not depress or hamper them. They pop right back up. Psychopaths are unbeatable. You can't find them out until it's too late, and whatever you try to kill them with is just as likely to hit someone innocent. Indeed, they thrive on our efforts to defeat them. When they are finally caught and prone, our punishments are often useless. There is truly “no there there” to a psychopath, and they have nothing to lose.

As an aside, I am not saying that we are powerless to curtail, punish, or rehabilitate the larger category of “bad actors”. Laws, social norms, and traditional incentives work well for normal people. Petty theft, for example, is rare because it is punished. Plenty of non-psychopaths would– out of weakness, desperation, curiosity, or even boredom– steal if they could get away with it. Jail time deters them. Prison is an environment to which normal people adapt poorly, and therefore an undesirable place to be. Psychopaths are different in many ways, one of which is that they are extremely adaptive. They love environments that others cannot stand, including prisons and “tough” workplace cultures. Punishing a psychopath is very hard, given his imperviousness to emotional pain. You could inflict physical pain or even kill him, but there would be no point. He would suffer, but he would not change.

Why does psychopathy exist? It’s useful to answer this question in order to best understand what psychopathy is. My best guess at it is that it has emerged out of the tension between two reproductive pressures– r- and K-selection– that existed in our evolutionary environment. An r-selective strategy is one that maximizes gross reproductive yield, or “spray and pray”. K-selective strategies are focused more on quality– fewer, more successful, offspring. The r-selective “alpha” male has a harem of 20 women and 200 children, most neglected and unhealthy. The K-selective “beta” has one wife and “only” 8 or 9 offspring, and invests heavily in their health. Neither is innately superior to the other; r-selective strategies repopulate quickly after a crisis, while K-selective quality-focused strategies perform well in stability. Human civilization has been the gradual process of the K-strategist “betas” taking over, first with monogamy and expected paternal investment, which was later extended to political and economic equality (because high-quality offspring will fare better in a stable and just world than a damaged one). Almost certainly, all humans possess a mix of “alpha” and “beta” genes and carry impulses from both evolutionary patterns, with the more civilized beta strategy winning over time, but not without a fight. Indeed, what we view as morally “good” in many societies is intimately connected with beta patterns– sexual restraint, nonviolence, positive-sum gradualism– while our concept of “sin” is tied to our alpha heritage. Psychopathy seems to be an adaptation in which the beta, or K-selective, tendencies of the mind are not expressed, allowing the alpha to run unchecked. In evolutionary terms, this made the individual more fit, although often at the expense of society.

Psychopaths (for obvious evolutionary reasons) like sex, status, and resources, but that alone doesn’t identify them, since almost everyone does. What differentiates the psychopath is the extreme present-time orientation, as well as the willingness to make ethical compromises to get them. The future-oriented, positive-sum mentality is absent in the psychopath. Unhampered by conscience, psychopaths quickly acquire resources and power, these being key (at least, throughout most of our evolutionary history) to reproductive proliferation. In business, their sexual appetites are not of major interest. What’s most relevant to our problem is their attraction to power and status. That is what they want. It’s only about money for them so far as it confers social status.

If we cannot defeat psychopaths, then what should we do? This turns out not to be a new problem– not in the least. Why, for example, do American elected officials draw such mediocre salaries? Why do we need all the checks and balances that make even the presidency so much damn work? Making power less attractive is one of the first principles of rational government, as the concept was developed during the Age of Reason. The reactionary clergies and hereditary aristocracies had to go– that much was clear– but how could one prevent a worse and more brutal lord from filling the vacuum? The idea was to compensate for power’s natural attractiveness by limiting it and attaching responsibilities. In the U.S., this even came to the matter of location, with the nation’s capital being chosen deliberately in an undesirable climate. In elected politics, I would say that this has mostly worked. We’ve had some downright awful political leaders, but a surprisingly low number (by corporate comparison) of psychopaths in top political positions. I wouldn’t go so far as to say that elected office doesn’t attract them, but other positions of power attract them much more. With the first-rate psychopaths making millions in the corporate world, the psychopaths who are attracted to elected political positions are the C-students in psychopath school.

Taking a macroscopic perspective, psychopathy is a very hard problem to solve. A closed system such as a nation-state has some probably invariant population of psychopaths that, inevitably, will be attracted to some variety of social status and dominance over other people. Flush them out of politics, and they end up in business. Yet if business were made unattractive due to an overpowered state (e.g. left-wing authoritarianism) they would end up back in government. They have to go somewhere, and it is impossible to identify them until they’ve done their damage (and, often, not even then). Yet the microeconomic problem for an individual firm is much easier– don’t attract psychopaths.

In technology, one strategy is Valve-style open allocation, under which employees are permitted to work for the firm directly rather than requiring managerial approval. Want to change projects? Move your desk and start. The typical extortion that middle managers use to build their careers– work for me or you don’t work here at all– doesn’t exist, because no one has that authority. Managerial authority attracts psychopaths like little else– more than money or prestige– and if one can do without it, one should consider doing so.

Much of the appeal of startups in technology is the perception (sometimes, an inaccurate one) that small technology companies haven't yet been corroded and politicized by managerial extortions. In the ideal case, a startup operates under a constrained open allocation. It's not yet “work on whatever you want”, because the startup requires intense focus on solving a specific problem, but employees are trusted to manage their own contribution. When do those companies go to closed allocation? Often, “hot” companies lose their cultural integrity in the process of hiring executives. The flashy career office-politician with impressive titles and “a track record” demands authority from the get-go, and it's given to him. Five direct reports is not enough; he demands ten. He gets fifteen. Over time, employees lose status and autonomy as it's chipped away to feed these people.

Most of the cultural losses that companies endure as they grow are suffered in the quest to hire executives from the outside, but what kind of person are you going to attract if you’re immediately willing to sell off your employees’ autonomy to “close a deal”? The people you’re most likely to get are those who enjoy power over people. Not all of these are psychopaths (some are mere narcissists or control freaks) but many are. Your culture will disappear rapidly.

If you’re running a typical VC-funded, build-to-flip operation, then hiring power-hungry external executives might be the way to go. A great way to buy an important decision-maker (an investor, an executive at an acquirer) is to give his underperforming friend an executive position at your company. You might take on a psychopath or few, but you’re not going to be in the company for very long, so it’s not your concern. On the other hand, if you want to build a stable company whose culture and values will still be worth a damn in 20 years, then you can’t do that. To the extent that your organization needs positions of power to function, you need to make them undesirable to the psychopath. This is one of the major reasons why you need intrinsic limits (checks and balances) on power.

Unfortunately for corporate executives, making a company less psychopath-friendly means equalizing the distribution of power and reward within companies. It means moving away from the CEO-as-king model and the eight-figure pay packages. Over the past forty years, we’ve been paying more and getting less when it comes to corporate management. Flushing out the psychopaths requires that we pay less, both financially and in terms of authority over other people, for managerial positions. The whole concept of what it means to be an “executive” will require reinvention as radical as the replacement of hereditary monarchs by elected legislators.

The stupid, superficial reliability contests that corporations use to assess character and protect themselves against psychopaths don’t work. In fact, they do the opposite, becoming the psychopath’s favorite tools. Companies that want to avoid being invaded and controlled by such people will have to reinvent themselves in a form radically unlike the traditional, hierarchical corporation.


Tech Wars

$
0
0

Any creative field undergoes periods of divergence and convergence, and technology is no exception. Right now, we’re in the late swing of a divergent phase, with a plethora of new languages, frameworks, paradigms and methodologies so vast that it’s impossible, at this point, even to know a respectable fraction of them. There are “back-end programmers” who know nothing of Javascript, and “front-end guys” who’ve never used a profiler. That will end, as people grow tired of having to learn a new technology stack with every job change, and also as the dimensionality of the bilateral matching problem between engineers and employers becomes intolerable. As painful as convergence can be, such a time will come out of necessity.

For a single company, convergence is often enforced internally, with “approved tools” lists and style guides. Often the communication that drives these decisions takes a familiar form: flame wars.

These partisan fights are common in technology. There’s one set of programmers who believe Java is the only language that has a right to exist and that, if you’re not using an IDE, you’re using “stone tools”. There’s another contingent that thinks everything should be written in Ruby, because that’s what Rails is in and why do we need another goddamn language? There’s a third that’s fixated on C++ on the basis that, if you can’t manage memory, how can you make sure an application performs? For now, I don’t care who’s right in this– really, that depends on the problem, because there clearly isn’t “one language to rule them all”– and language or style comparisons are not what I care to discuss today. I think we can all agree these flame wars are stupid and counterproductive. Do tabs/spaces debates really require long email chains? No, they don’t.

This, if anything, is the thing I dislike second-most about programmers. The worst thing about programmers is that most of them (although this is largely a product of a corporate environment that rewards mediocrity) lack taste and curiosity, and therefore never become competent. The second-worst is that, among those who have taste, most of the heated debates come down to superficial nonsense. Yes, code quality matters. It's critically important; it can make or wreck a company. Cosmetic differences, on the other hand, are just not that important or interesting. Tooling choices are important, but rarely as critical as opinionated engineers make them out to be, in the sense that they're rarely existential matters for the company. Closed allocation is cultural death, and unethical management will ruin your startup, but the difference between Python and Scala is unlikely to be lethal. Can you write performant systems using mostly Python? Of course. Can high-quality, attractive code be written using an inelegant language like C++? Yes, it can. I have strong preferences regarding languages, but I've learned over the years not to take them too seriously. More dangerous than picking a suboptimal language is not getting any work done because of partisan bickering.

Language and tool wars are an outgrowth of a careerist idiocy that one would expect software engineers to be above. Engineers engage in this hyperbolic mudslinging because their career needs, in a typical corporate power struggle, often necessitate some control over the technical environment– and that's often a zero-sum squabble. The software industry isn't a real meritocracy, but it insists on looking like one, so the object of the game is to convince the team to use the tools with which one is most familiar, in order to become (because of tool fit, not intrinsic superiority) the most productive. This is the problem with technology in the late-autumn phase of a divergent spell: an engineer's job performance is going to be more of a function of his match with the work environment and tools than of his overall competence, so there's an enormous incentive to change the game if one can. There's more individual gain in changing existing technical choices than in grinding away at regular work. Most software engineers have learned this, and I can't count on one hand the number of times I've seen a company go through a scorched-earth code rewrite because some douchebag managerial favorite only knew one language, and (of course) that language became the only possible option for the firm. This is terrible for the company that has to reinvent all of its technical assets, but beneficial to the favorite who naturally becomes the master of the new technical universe.

In a world of tight deadlines, short employment stints, and extremely detailed job requirements, this is only getting worse. It’s no longer acceptable to spend months in a “ramp up” period on a new job. It won’t get you fired, but you’re unlikely to get the best projects if you don’t blow someone away in your first 3 months. So you need to find or carve out a niche where you can use tools you already know, in order to get some macroscopic achievement (a “launch”) as soon as possible. This forces companies into one of two undesirable alternatives. The first (sprawling divergence) is to let everyone use the tools they like, and the predictable result is “siloization” as engineers stick to the tools they already know how to use, rather than take the time to understand what other people are doing. Taking these systems into production and supporting them becomes a nightmare, because they often have 7 different NoSQL databases to support what’s essentially a CRUD app. The other (enforced convergence) is to require a certain uniformity in development. The company becomes an “X Shop” for some specific language (or database, or methodology) X. That’s when the flame wars begin, because everyone is going to have an enormous stake in the choice of X. Now, it’s not enough for X to be the best tool for your project; you have to shove X down other people’s throats in the company, or you won’t be able to use it at all. This also falls down as necessity requires exceptions to the uniformity requirements, and the decision of who can get an exception becomes immensely political.

In truth, we as programmers are, in many companies, behaving like executives, spending more time arguing about how to do work than doing the actual work.

This, I think, is the fall-down calamity of the software industry. It’s why 50 percent of people who are professional programmers won’t be, seven years from now. Software engineering is all about improvement– building things that didn’t exist before, automating tedious processes– and it’s only natural that we want to improve ourselves. In order to get better, one needs a reliable stream of increasingly high-quality projects. What makes great software engineers extremely rare (and they are) isn’t a lack of talent alone; the limiting factor is the paucity of quality work. At a certain point, one reaches a level where it’s difficult to get quality work, even for a qualified (or overqualified) engineer. You end up spending more time convincing managers to give you good projects (playing politics, acquiring control of the division of labor) than you actually get to spend on the work. People get enervated and drop out, or they become managers and lose touch with the day-to-day process of writing code. Tooling wars are one component of this nasty slog that programmers have to enter in order to have a shot at the best work.

What’s the solution? I mentioned the alternative to uniformity, which is the “everyone uses whatever they want” divergent approach that often generates an intolerable support burden. It can work, but it will fail in the typical hierarchical corporation. When you have this proliferation of creativity and ideas, you need two things. First, there has to be a pruning process, because some of what is produced won’t be very good. Managers never get this right. Their decisions on what modules to keep (and, thereby, force engineers to maintain and use) and which ones to burn will be based on political factors rather than code or product quality. Second, there has to be extremely strong cross-hierarchical communication, because people must begin to make technical choices based on the long-term benefit of the group rather than short-term “productivity” in an attempt to acquire status. People who are building technical assets will need to actually give a damn about making their clients succeed, as opposed to the selfish, parochial goal of pleasing managers. You won’t get this kind of communication and pay-it-forward attitude– as prevailing behaviors rather than occasional serendipities endowed by the comfortable– in a closed-allocation company.

The open-allocation sociology is one in which there is cross-hierarchical collaboration because the conceptual hierarchy necessitated by the problem doesn’t congeal into a rigid hierarchy of people. Because there’s so much internal mobility, other teams are potential future teammates, and people will generally treat them well. What this means is that, while there will be divergence in the name of exploration and creative license, people will also take care of the convergence tasks, such as integrating their work with the rest of the company and teaching other teams how to use the assets they generate.

By contrast, the closed-allocation sociology is one in which people strive, in the short term, for the image of superior productivity, in order to advance rapidly into a role where they control the division of labor instead of being controlled by it. This encourages people to diverge carelessly, for the twin purposes of (a) appearing highly productive and useful, and (b) creating personal security by generating technical assets that, although management can be convinced of their importance through political means, become so opaque to use as to leave other teams beholden to their creator. Of course, this productivity is illusory, the reality being that costs are externalized to the rest of the company. The heavy-handed, managerial antidote is to mandate convergence, but that induces battles over which technologies and approaches live and which die. The results are hyperbolic, counterproductive, fear-driven arguments that devolve into the technical flame wars we all know and loathe.

As long as we have hierarchical corporations and closed-allocation regimes where one must either control the division of labor or be controlled by it, we will have endless flame wars over even the smallest of technical choices. The stakes are too high for them not to exist.


Gervais Principle questioned: MacLeod’s hierarchy, the Technocrat, and VC startups


The MacLeod Model of organizational sociology posits that workplaces tend toward a state in which there are three levels:

  • Losers, who recognize that low-level employment is a losing deal, and therefore commit the minimum effort not to get fired.
  • Clueless, who work as hard as they can but fail to understand the organization’s true nature and needs, and are destined for middle management.
  • Sociopaths, who capture the surplus value generated by the Losers and Clueless, and are destined for upper management.

Venkatesh Rao has a brilliant analysis of these three tiers as they play out in the sitcom The Office.

I’m going to propose the existence of a fourth subset: the Technocrat. Before doing that, it’s important to assess the forces that generate these tiers, and why I think the MacLeod classification isn’t adequate.

Effort and strategy

Losers are not “losers” in the sense of being unpopular or contemptible people. Here, “Loser” means “one who loses”. In fact, they’re often well-liked and good people. However, they’re aware that employment is a better deal for the other side than it is for them. What they want most is to be comfortable. They don’t want job insecurity or too much change. They’re not “losing”, so much as they’re selling risk, in their poorly-paid jobs. They prefer contests where the points don’t matter (e.g. the Party Planning Committee in The Office) and the comfort of a stable group that will protect them. At work, they generally aim for the Socially Accepted Middling Effort (SAME). They’re not lazy, but the largest influence on how hard they work is social approval. They don’t want to be perceived as slackers, nor as overachievers, so they manage their performance to the middle.

Clueless are the hardest workers. They work as hard as they can, out of a sense of loyalty and ethical obligation. There’s an adverse selection at play that favors the untalented, because people who are naturally talented and hard workers tend to threaten Sociopathic colleagues and will be sabotaged. The survivors tend to be the less capable. Michael Scott, again borrowing from The Office, is a clear example of this. While he’s incompetent, he clearly puts everything he has into his work, having no social life outside of it. If he were more intelligent, he’d either have been tapped for a better role, or he’d be perceived as a threat and be sabotaged by an ambitious Sociopath. However, the less effective and capable Clueless are clearly destined for terminal middle-management roles that Sociopaths don’t want, so this adversity doesn’t often befall them. 

Sociopaths are strategic in their work ethic, but also flexible. As Venkat describes, they’ll fall to the outright bottom, effort-wise, if that frees up resources to impress someone who matters. They have no qualms about reducing their effort to zero in an environment where social attractiveness is more important. However, they’ll also contribute immense bursts of energy when a promotion is available. One core Sociopath skill is assessing whether it’s advantageous to work hard and, when not, to slack off and focus on social polish.

Common traits of the tiers

Of these three subsets, each pair of them has a unifying trait that is considered an asset by the corporate world. Losers and Sociopaths are strategic, Clueless and Sociopaths are dedicated, and Losers and Clueless are team-players. Each of these deserves assessment.

Team-playing (Losers and Clueless)

Companies prefer employees who will prioritize social approval from the team over personal ambitions or needs, and call such a person a team player. The alternative is the out-for-herself careerist who expects interesting work and upward mobility. The selling trait of team players is that they don’t have to be explicitly motivated “from above”. Wanting to do good by the people around them will motivate them organically “for free”, and they’ll tolerate being underpaid, ignored, and given few opportunities to advance if the group’s approval is enough to indulge their esteem needs.

Losers are team-players because they want social approval. They want the comfort of being in rather than vying for up. They’re happy with the trophy of ascent into a meaningless in-crowd (in The Office, the Finer Things Club). Their goal is to be cool. Clueless, however, want to be “in charge”. What turns them into team players is the fact that they can be motivated with meaningless leadership accolades, or even undesirable tasks dressed up as positions of power (in The Office, Dwight). Clueless, in general, can’t recognize the difference between the genuine power sought by Sociopaths, social approval sought by Losers, and the ceremonial non-power that Sociopaths and other Clueless generate to keep them motivated.

Of the three traits I will analyze (team-playing, dedication, and being strategic) the most important one for a subordinate position is team-playing. Even if a person is highly competent, if she’s not well-liked by her team, she won’t be effective in a low-level position, because no one will help her. Subordinate positions never involve enough autonomy for a person to have a real achievement on her own. As Venkat Rao correctly assesses it, Sociopaths in subordinate roles have a short lifespan, limited by their willingness to run a team-player charade. It’s up-or-out for them. Ryan, in The Office, does this brilliantly. He uses his relationship with air-headed Loser Kelly to hide his true nature and underplay his threatening intelligence– seeming like an incompetent Loser– but once he finds common ground (his MBA degree) with CFO David Wallace, he takes the chance to jump and, on success, dumps her immediately.

Dedication (Clueless and Sociopaths)

For Losers, most of life is outside of work. Losers will put in their 40 hours, but if you ask one to work a weekend or do undesirable tasks that are way outside of her job description, you’ll get resistance. For the Loser, it’s “just a job”, and not something that has the right to dominate a person’s entire life. Losers aren’t lazy or apathetic, but they’re rationally disengaged. A Loser recognizes that she can get an equivalent job at another company, so it’s only group cohesion that keeps her in place.

Clueless and Sociopaths, on the other hand, can work 90 hours per week without complaint. They’ll do their best to meet unreasonable requests from above, but their motivations are very different. Clueless are driven by a misguided sense of corporate loyalty, both above and below them. The Clueless middle-manager (or over-performing team member) is willing to please her boss, on whom she depends for her connection to the company (to which she feels a debt of gratitude). She’s also willing to pick up the slack of the Losers on her team, out of a sense of personal and possibly parental loyalty to them.

Sociopaths are dedicated for a different reason. Like anyone else, a Sociopath would prefer not to work long hours or do unpleasant tasks, but their pain tolerance is extremely high, because the corporate game is fun to them. For a Sociopath, a 90-hour work week is like spending that much time playing a video game. It’s not how most adults would choose to spend their lives, but it’s not taxing, boring or painful either. If the rewards are there, the Sociopath will drop everything and put in the hours.

For a middle manager, dedication is the most important trait. Middle managers frequently find themselves in two jobs that are often at odds. On one hand, they have to clean up the messes left by rationally disengaged Loser subordinates. On the other hand, they must please the hard-driving Sociopaths above them. This generates not only a lot of work, but an incoherent, heterogeneous mess of it. Frequently, they end up working on “whatever’s left” and putting in a lot more energy than the tiers below and above them. The Loser subordinates don’t work as hard as they do. The Sociopath executives have enough power to control the division of labor and definition of “performance”, so they don’t have to work hard. Middle-managers get stuck in the middle, and it takes dedication to survive it.

Strategy (Losers and Sociopaths)

Losers and Sociopaths might seem to be polar opposites, but they also share an important trait: they’re strategic. They’re realistic in their assessment of what efforts will be fruitful and, unlike the Clueless with their unconditional work ethic, they allocate their time and energies only toward work that seems to be worth a damn. In this, they tend also to be able to relate amicably, because their strategies rarely conflict.  They want different things. Losers want to minimize discomfort and to be “cool” as defined by the in-crowd where they are. Sociopaths want to maximize personal reward and rise into the in-crowd that actually has power.

Strategic workers tend not to waste time. Losers work efficiently but tend to do as little as they need to retain social acceptability. Contrary to stereotype, they don’t do “the minimum not to get fired”. They modulate their efforts to the middle, not wanting a slacker reputation, but not wanting a “busy bee” reputation that will overload them with undesirable work. Sociopaths (and Technocrats, to be discussed later) work on the stuff that benefits their careers, and ignore whatever doesn’t.

For upper-management roles, being strategic is the most important trait. A non-strategic manager or executive will put a lot of effort into pursuits of low or even negative value. That can be acceptable for a middle manager. The Sociopaths above will set priorities, and direct him, while the social-approval mechanisms of the strategically-aware Losers will keep the team from going completely off the rails. Teams can protect themselves, in the short term, from off-the-rails Clueless management. A non-strategic executive is dangerous, if not untenable. Losers, in fact, tend to be better picks for executive roles than Clueless. This explains why, after Peter outs himself as a disengaged Loser to “The Bobs” in Office Space– while office conformity demands that subordinates fake Clueless earnestness around authority– they describe him as “a straight shooter with upper management written all over him”. He’s strategic. He understands that his current job is not worth doing.

Other possibilities…

All three of these traits are attributes that corporations view as assets: being a team-player, dedication, and being strategic. I’ve painted a picture in which each of the three MacLeod tiers has exactly two of the three. Losers are strategic team players, but not dedicated. Clueless are dedicated team-players, but not strategic. Sociopaths are dedicated and strategic, but not team players. This brings us to the question: why should an employee have exactly two of these seemingly orthogonal capabilities? What happens to people with zero, one, or three?

Deficient categories (0/3 and 1/3 traits)

If someone’s neither dedicated nor strategic, it doesn’t matter much whether that person’s a team player or not. A team player sacrifices personal ambitions for the health of the team, but people who are neither dedicated nor strategic have nothing credible to sacrifice. These fall into the bottom of the Loser category: Lumpenlosers.

Dedicated, non-strategic, non-team-players are Loose Cannons (see: Andy in The Office). They’re driven by ego, but have no insight into whether what they’re doing serves any purpose (personal or organizational) at all. They tend to eagerly throw themselves into battles with no idea where they will lead. Either they are fired, or they become Lumpenlosers as well, being tolerated for the amusement they provide.

Finally, to be strategic but neither a team-player nor dedicated is almost a contradiction in terms. A strategic person will recognize that one or the other is necessary in order to retain employment in the long term. So, the only time this constellation occurs is when a person has little interest in staying with a company. They usually quit or get fired.

Unicorns? (3/3 traits)

Are there people out there who are strategic, dedicated, and team players? I would also argue that this arrangement is contradictory, at least in a subordinate role. A strategic person will either aim for minimal discomfort and pain, or for maximal benefit. The first category will avoid conflict and disharmony (team player) but not work long hours, especially not to an extent that hurts the group (by making others look bad in comparison). On the other hand, the yield-maximizer will work very hard (dedication) but not for the benefit of a social group coalesced around someone else’s purpose. People who are strategic and dedicated will demand important roles and interesting work, which means they’re not likely to be perceived as team players.

This is not to say, however, that it’s impossible or even rare for a person to be strategic, dedicated, and a good person. What I do mean to imply is that strategic and dedicated people don’t accept subordination. Although they will work for altruistic purposes or the good of “the team”, they do it on their terms. In fact, I would argue that among the most important players in the modern economy, a fair number of them fit this exact description. Like MacLeod Sociopaths, they’re dedicated and strategic. However, the term “Sociopath” doesn’t apply, because they aren’t bad people, and they’re not willing to rise by being unethical. They want to win, but fairly. This requires revisiting the Sociopath category.

Are MacLeod Sociopaths really sociopaths?

I would argue that the Sociopath category should be split into two subsets: Technocrats, and the true Psychopaths.

It’s important to address the term sociopath. It’s no longer used in psychiatry, having been replaced by psychopath. A psychopath is a person without a conscience, and that’s about as close as modern psychiatry gets to calling someone “a bad person”. Most psychopaths are, as typically defined, bad people. Their lack of empathy and concern for others leads them toward reckless and harmful behavior. Whether this is mental illness or sane moral depravity is almost a religious question. Either way, psychopaths aren’t desirable in an organization, yet many are very successful climbers of institutional ladders. The lower classes of psychopaths end up as common criminals, but the smartest and most socially adept psychopaths develop strategies for externalizing costs and taking credit for others’ accomplishments. They are disproportionately common in the upper ranks of corporations.

Sociopath is a pop-psychology synonym for “psychopath”, most likely persisting because the popular ear conflates the latter with “psycho” or “psychotic”, which psychopaths are clearly not. It is not even clear that psychopaths are mentally ill. I would argue that, in a prehistoric evolutionary context, their lack of emotional attachment to others made them individually successful (high rates of reproduction, at least for men) at the expense of group harmony and stability. In modern society, they are the cancer cells– individually fit at the expense of the large, complex body of civilization. This is different from typical mental illness (especially psychosis) which is indisputably pathological, making the individual less fit.

Psychopaths are egotistical and usually unethical. Above all, they desire social status. Money and power are only interesting to them insofar as they confer an elevated position over others. This characterization, one hopes, does not apply to everyone who is strategic and dedicated within an organization. I find it clear that it does not. There are good people who are strategic and dedicated. So I discard MacLeod’s Sociopath category and split it into two: the Technocrat and the true Psychopath.

In this usage, Technocrat pertains more to the abstract principle of technology– the use of knowledge in pursuit of mutual benefit– than to any concrete device. Technocrats are dedicated and strategic and, like Psychopaths, they want to win. What differentiates them is that they don’t want to make others lose. Technocrats have a positive-sum mentality and a desire to make improvements that benefit everyone. True psychopaths despise what they perceive as human weakness and want to dominate people. Technocrats are competing against “the world” in the effort to make it better than it has ever been.

Because Technocrats like to improve things, they extend this mentality to their work and themselves. They want to be more skilled, and more autonomous, so they can achieve more. They’re dedicated, because it requires work to find “win-win” opportunities. They’re strategic, because what they do tends to require creativity. In general, they’re not team players. Losers are team players because they fear social disapproval, and Clueless are team players when they perceive their role as leadership (even if it objectively isn’t). Psychopaths view the team as something to exploit (for personal gain) and to dominate (for enjoyment). Technocrats want to improve the team, often in spite of itself. This makes them good leaders, but they turn out to be shitty subordinates. That, in fact, is the Technocrat’s downfall. Of the four classes, they are the absolute worst in subordinate positions. Psychopaths hate subordination as well, but they’re better at failing silently for long enough to move up.

The MacLeod model correlates psychological traits with organizational position, and the Technocrat is difficult to pin down. Other categories are much easier. Losers cannot typically rise into management because they aren’t dedicated. Clueless should not become executives because they aren’t strategic. (It happens, but they usually get fired at some point.) Psychopaths can rise without limit, but their lack of a team-player ethos means they’re fired if they can’t find a route upward. There isn’t a clear position in the traditional managerial hierarchy for the Technocrat. Organizations would probably prefer to have Technocrats over Psychopaths in upper management, if they were rational entities. They’re not, and for a variety of reasons, when Technocrats and Psychopaths battle, the Psychopaths usually win.

Technocrats don’t have a well-defined home in the white-collar corporation. They despise the pointless busywork of subordinate positions, so they don’t do well in the Loser circuit. They tend to be logical truth-seekers, so the Clueless world of middle-management is too full of self-deception for them. Finally, amid the zero-sum in-fighting of Psychopaths in the executive suite, they often lose. Why do they often lose? That I will address below.

Some Technocrats have found a way to become highly-paid individual contributors, such as in academia, research, and software engineering. Unfortunately, this isn’t a stable arrangement for most. Academia has taken a Clueless direction of late: graduate students and pre-tenured professors are grunts, and tenured professors are (semi-autonomous) middle managers chasing rainbows. I consider most of them non-strategic because they lack interest in the practicality of their work, a behavioral pattern that has reduced their social status over time. Software engineering, on the other hand, is mostly a (better-paid) Loser’s world, because most of the work that professional developers get is line-of-business dreck that isn’t intellectually challenging, and not worth the dedication that a genuinely important or interesting project would deserve.

The typical career of a Technocrat looks similar to one of a Psychopath: job hopping, up-or-out gambits, and the obvious prioritization of a personal strategy over “team player” social acceptability. The difference is the purpose toward which she aims. The Technocrat wants to become really great at something and do something good for the world. The Psychopath wants to climb old-style, increasingly anachronistic, industrial hierarchies and dominate (or, at least, exploit) others. Both are after power, but they define it differently. Technocrats seek the power to help people; Psychopaths want power over people.

One might suspect that Technocrats should succeed as entrepreneurs, and this is probably the Technocrat’s best bet. Rather than attempting to find a decent role in an existing, dysfunctional organization that it will be impossible for her to fix, she should create a new one. Then, she has no conflict between her strategic and dedicated nature and being a “team player”, because she has built and defined the team.

What I would say, however, is that the venture-capital-funded startup ecosystem (VC-istan) is not a Technocrat’s paradise, despite its pretensions to that effect. The actual ethical behavior of the leadership at most of these VC darling startups is old-style managerial Psychopathy. Why is it this way? To answer that, one needs to understand what VC-istan actually is: one of the first postmodern corporations. VCs are supposed to be in competition with one another, but they collude. They decide, as a group, who’s hot and who’s not, and what valuations will look like in each season, and they get together and co-fund startups. There is no market economy. Rather, funding decisions are made by a command economy driven by social consensus among an in-crowd (partners at influential VC firms) and those who wish to join them.

One might think that VC-funded startup founders are Technocrats. Not likely. These so-called CEOs are actually mid-level Product Managers within the postmodern corporation of VC-istan. Some are ascendant Psychopaths seeking a transition to a more senior (“entrepreneur-in-residence”, VC associate, corporate executive) role and others are Clueless “true believers” who fail to recognize that, due to their investors’ unilateral control not only over their current projects but over their long-term reputations, they’re nothing but eager middle managers.

So where are the Technocrats, then? Over time, they tend to fail out of hierarchical organizations, being ill-equipped to fight against the Psychopaths who tend to defeat them. As VC-istan increasingly devolves into a postmodern corporation (rather than a real market, with– imagine this!– actual competition among investors) they will increasingly find themselves without a home there, as well. I don’t know where they’ll go. My guess is that the rising generation of Technocrats will find a way to democratize business formation, with crowdfunding (e.g. Kickstarter) being an obvious first step in that direction, and presumably a lot more to come.

Why Psychopaths beat Technocrats

Why do the zero-sum, destructive Psychopaths defeat the positive-sum, creative Technocrats? To answer this, it’s important to understand credibility, which is an umbrella term for intangible assets that institutions create in order to rank people by perceived merit.

How do organizations decide how to compensate people, and whom to promote? When a person starts a new job, her salary is based on economic conditions and her negotiation skill much more than any objective definition of merit. Companies, begrudgingly, accept this. They recognize the tradeoff between operational efficiency and fairness in compensation, and will gladly adapt salaries to conditions. However, companies prefer to believe that they get the division of labor and recognition right. It’s important for them to maintain this fiction, because it allows them to believe that, although they know their compensation schedule to be unfair, they know exactly how unfair it is and can reduce the injustice (not out of ethical principle, but because it’s a morale risk) over time. If Bob and Mark are both Level IV software engineers making $100,000 and $120,000 per year, respectively, Bob can receive better annual raises until they’re at parity.

This is what professional ladders and job titles are for. The firm wants to believe (or, at least, wants the Clueless to believe, that often being enough) that, while small injustices are tolerated in specific individual compensation, salary ranges shall be set according to job descriptions, which will be assigned according to pure merit. Someone who is “platonically” at Level IV might get a higher salary based on prior compensation, negotiation skill, and market conditions, but he should never get a Level-V title… until he earns it. Titles, as the organization perceives them, must correspond to personal merit and work performance, not external conditions.

This is credibility, which represents the company’s degree of trust in an individual. Compensation is what a person takes out of a company. Credibility is what the firm thinks the person puts into it. Job titles are a powerful form of credibility, being transferable to some degree across companies. So-called “performance” reviews exist to calibrate private credibility based on a person’s work history, and may eventually result in public credibility changes through promotions (or demotions and terminations). Societies understand that resources (power, money, information) sometimes accrue to unworthy or even dangerous individuals, and that there is nothing to prevent “the bad rich” from acquiring mere commodities once this has happened, but the sacred intangible of trust is supposed to be above that. Credibility is the corporate analogue of trust, invented because interpersonal trust becomes sparse at scale. Large organizations recognize that genuine trust, built through direct interaction, is not enough to tie people together, so they need to invent a legible pseudo-trust in order for anything to get done. Credibility is a currency that corporations mint in order to make statements about how far a person can be trusted. You’re not supposed to be able to get credibility through trade. It should be a reward for demonstrated merit only.

Credibility isn’t just a vague, academic notion. It has actual effects. For example, a common way for a company to do a layoff is to select levels at which it is overstaffed, and terminate the highest-compensated at that level. The idea is that employees at each level are equal in productive value. It’s corporate consistency. Thus, the most marginal employees, who should be let go first, are the highest-paid at the targeted level. Let’s say that the Level IV salary range is $80,000 to $120,000 and the Level V range is $100,000 to $160,000. It’s dangerous to be a Level IV at $120,000– but very safe to be a Level V at the exact same salary.

Is this regime exploitable? Of course. Clueless might believe that there’s such a thing as a “platonic” Level IV, and that people will be accurately assessed and promoted to their level of ability. That’s not true. There is no “platonic” Level IV or Level V. There will be guidelines based on work experience and interview performance, and then myriad possible exceptions. A wise negotiator slotted for Level IV, if he knew the company’s HR infrastructure, would target his salary history and expectations toward $125,000, making him ineligible for a Level-IV position. Would the hiring manager say, “We can’t meet that”, and turn him away? Or would she bump him to Level V, not only justifying his salary, but improving his job security, leadership opportunities, and level of respect within the company? Unless he were very clearly ineligible for a Level-V position, the latter would happen. So he would, in effect, be using his negotiation skills to win credibility. In the real world, credibility is just as negotiable and tradable as “harder” currencies like money, power, and information. It’s not “supposed” to be that way, but it is.
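
To make the band arithmetic concrete, here is a minimal sketch in Python. It is purely illustrative: the salary bands and the highest-paid-at-level layoff rule come from the example above, while the bump-up function, the third employee (“Dana”), and every identifier are hypothetical, not a description of any real HR system. It shows why the same $120,000 salary is exposed at Level IV but safe at Level V, and why a candidate who negotiates just past the Level-IV ceiling ends up slotted at Level V.

    # Purely illustrative; hypothetical names and rules, with the bands from the example above.
    LEVEL_BANDS = {4: (80_000, 120_000), 5: (100_000, 160_000)}  # level -> (band min, band max)

    def slot_level(target_salary, proposed_level):
        """Bump a candidate up a level while the demanded salary exceeds the band's ceiling."""
        level = proposed_level
        while level in LEVEL_BANDS and target_salary > LEVEL_BANDS[level][1]:
            level += 1
        return level

    def layoff_candidates(employees, targeted_level, n_cuts):
        """Cut the highest-paid employees at the targeted (overstaffed) level."""
        at_level = [e for e in employees if e["level"] == targeted_level]
        at_level.sort(key=lambda e: e["salary"], reverse=True)
        return [e["name"] for e in at_level[:n_cuts]]

    employees = [
        {"name": "Bob",  "level": 4, "salary": 100_000},
        {"name": "Mark", "level": 4, "salary": 120_000},  # top of the Level-IV band: exposed
        {"name": "Dana", "level": 5, "salary": 120_000},  # same salary, but safe at Level V
    ]

    print(layoff_candidates(employees, targeted_level=4, n_cuts=1))  # ['Mark']
    print(slot_level(125_000, proposed_level=4))                     # 5: negotiated past the IV ceiling

The point is not the code; it is that any legible credibility scheme with published rules can be gamed by whoever takes the trouble to learn them.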

Losers care about credibility, but only enough to retain employment. Once they have enough of it, they focus on side games in which those games’ local definitions of credibility and status don’t really matter, in that they have no effect on personnel decisions or compensation. Clueless believe deeply in credibility and go about earning it “the old-fashioned way”: through hard work. Psychopaths, on the other hand, are the quickest to actually realize what credibility is: a corporate fiction. In reality, it’s a commodity like any other. If they need someone to vouch for them, they find a way to buy it. Psychopaths bribe and extort their way into credibility. They take credit for others’ work, they lie, and they find ways to trade social assets (including titles and “job performance”) that aren’t “supposed” to be for sale.

How do Technocrats handle credibility? I’m not sure. Since the intended effect of corporate credibility is to create a global social status in large groups that would otherwise not have a clear rank ordering, thus generating a sort of “permanent record” that’s ripe for abuse by power, it seems naturally opposed to the Technocrat ethos. However, it’s also clear that corporations need to invent credibility, because interpersonal trust is too sparse at scale for large organizations to function, and credibility becomes a lubricant. To the extent that Technocrats accept credibility’s existence, they tend to treat it as an engineering problem. I think the Technocrat’s objective is to fix credibility: to redefine it so that it can’t be traded, bought, or won through extortion, and to make it an actual dimension of merit. Of course, this is a hard, if not impossible, problem. Psychopaths take a much easier route with more personal benefit: they find ways to exploit it.

Indeed, these four categories can be framed in terms of psychological reactions to the divergence between true merit and corporate credibility:

  • Clueless deny it and persist in the just-world fallacy. Indeed, credibility’s root power is in its ability to motivate and impress the Clueless.
  • Losers accept the crookedness of the corporate game and disengage, grabbing just enough credibility to avoid adversity.
  • Psychopaths become crooked themselves, which is the fastest path to personal benefit, and ultimately win not only credibility for themselves but the capacity to modulate others’ credibility, which is corporate power.
  • Technocrats attempt to fix the game, often altruistically through openness, transparency, and anti-hierarchical corporate structures. So far, they haven’t shown much success in deploying their vision at scale.

Between Psychopaths and Technocrats, who is more likely to climb the corporate ladder? Unfortunately, the advantage is with the former. Technocrats have to change people, while Psychopaths exploit them as they are.

Corporations, being pre-industrial in their ideology, have relied heavily on credibility and personal relationships for an obvious reason. There weren’t many hard problems for these institutions (rent-seekers that used technological innovation, but rarely created it) to solve, and there was no other way to evaluate people. Once the hard intellectual challenges have been overcome and it’s a risk-averse, stable organization, then how does it define the “best” who deserve leadership positions? How does one define “merit”? The Losers don’t attend that debate, because they don’t care. The Clueless say, “Hard work”. Technocrats argue for creativity and insight, still wanting to tackle hard problems that the now-stable and risk-averse organization believes it has outgrown. Psychopaths form an alliance with the Clueless, recognizing them as effective (in well-defined roles) but non-threatening. What’s built up, then, is a concept of “performance” that, to the Clueless, looks like a commodity earned only through hard work: a credibility. Clueless provide the muscle and blind support that enable credibility to be manufactured; Psychopaths define it. However, the Psychopaths are always careful to define performance with enough back-doors that they can win, regardless of whether they wish to work hard. At the top, this is especially easy. If you control the division of labor, you get to write your own performance review.

What Psychopaths recognize is that credibility can always be bought on a black market. There are always ways to get it, and often the illicit ones are more effective than the straightforward path. Technocrats understand this as well, but they often find it morally objectionable to start trading. It feels like cheating to them, and the worst varieties of credibility-trading (which generally amount to professional extortion) they refuse to use at all.

Can Technocrats win?

Psychopaths and Technocrats are destined for conflict. Psychopaths invent social credibilities to win the support of the Clueless and put themselves at the top of organizations. Technocrats see a crooked game and try to fix it, often with radical honesty. If the Technocrats succeed, they’ll undermine the work of the Psychopaths. This becomes a fight to the death. The character of an organization will be determined by which of these two categories wins. Ultimately, most organizations will be run by Psychopaths who superficially adopt the appearance of Technocrats. This becomes increasingly the case as corporations become risk-averse and self-protective. The positive-sum “win-win” outcomes that Technocrats seek exist, and they’re all over the place, but they never come without risk. Once the company decides that creative risks are intolerable, what’s left is zero-sum social-status-driven squabbling.

The MacLeod hierarchy applies best to the white-collar corporation, whose modern incarnation was developed in the 1920s. At the time, there were a decent number of Technocratic leaders, but over time, corruption set in. By the mid-1970s, it was clear that Psychopaths were going to take over, and the “greed is good” 1980s saw that through. Basic research was cut, academia turned into a pyramid scheme, and well-positioned corporate executives made millions. Psychopaths gutted and looted corporations, leaving husks– large, powerful institutions whose beneficial purposes had been discarded, reduced to mere patterns of externalized costs and value capture by the Psychopaths. Where’d the excluded Technocrats go? Well, all over the place, but the best-known cohort moved to Silicon Valley– before it was cool.

Silicon Valley is not a Technocrat haven any more. If nothing else, one need only look at house prices in the Bay Area. In cases of extreme geographic scarcity (e.g. San Francisco, Manhattan) house prices might be explicable by supply and demand in a fair market, because supply responds slowly to economic circumstances while the demand curve moves quickly. However, when extreme housing prices persist over the long term, and over a large and mostly suburban area where there is no natural geographic scarcity, this suggests market manipulation and NIMBYism. That’s the surest sign ever that Psychopaths are winning. I am not saying this to rag on California. (I live in Manhattan; we have expensive real estate and Psychopaths, too.) In fact, had California’s real estate problem reversed itself, I’d argue that it was a transient market phenomenon and not a Psychopathic victory. However, even when the economy softened, Bay Area real estate remained pricey. That’s a primary indicator that the Psychopaths have started to win. Psychopaths love when real estate is expensive, because they can use physical position to display dominance. Technocrats, on the other hand, love New Places, because New Places are positive-sum– you can build cool things on inexpensive, unused territory and improve it. The Technocrats poured into California when it was a geographic New Place and land was cheap. Now the New Places are elsewhere. It’s not clear what the next Technocratic New Place is, or even if “Place” still needs to be a geographic location, but it’s no longer Northern California.

If I had to guess, though, I’d bet that within the next 10 years, VC-istan will fall under the weight of its own organizational Psychopathy. I don’t have enough personal experience to opine on whether VCs themselves are Psychopathic, but the VC darlings that I’ve known well have been run by some horrible people. Mark Pincus has been most overt about his psychopathy, but his behavior is far from atypical among the sorts of people who win in this bubble-world. Once, I had to leave a junior-executive position (for which I had left Google) at a brand-name, VC-funded startup after 3.5 months because I refused to sign an affidavit that not only contained perjury, but probably would have Pincus’d ten of my colleagues who had joined early and had “too much” equity. The Psychopathic management of that company spent months afterward attempting to destroy my reputation. How I overcame that adversity deserves its own essay, for another time…

Psychopaths defeat Technocrats in organizations where the work is intellectually easy and creatively undemanding, and where social status ends up trumping ability. That describes most organizations, because most companies are too risk-averse to do anything hard or important. Needless to say, Psychopaths love economic bubbles– exploitable, chaotic optimism without substance– and the 2010-13 “social media” affair clearly qualifies. However, this bubble’s very different from the last one. In the 1997-2000 bubble, startups were overvalued by the market. Wall Street provided the fools. In this one, market valuations appear reasonable. Investors aren’t going to fall for that one again. This bubble is in the unduly high value assigned, by young talent, to subordinate positions in these startups. (I wrote this, last summer, to contribute to its inevitable “pop”.) The fools aren’t coming from Wall Street, but fresh out of school. Most employees of these VC-funded “tech startups” believe they’re 6 months away from investor contact, real job titles, and the chance to be a founder in the next startup. They’re wrong. Long before they can collect the career assets they’ve been promised, this bubble will have popped and washed back to sea, leaving them with useless work experience and, thanks to the high cost of living in the startup hubs, no savings. The 1990s dot-com bubble wrecked a few careers and spilled a lot of rich people’s money. The 2010s bubble will spill a little bit of rich people’s money and wreck a lot of careers– mostly, those of young technologists trying to establish themselves. I would argue that this makes it worse.

What I hope to see during 2013, and the coming years, is a Flight to Substance. I want the best people– investors, entrepreneurs, engineers– to realize that they’ve been hoodwinked by VC-istan’s shallow reinvention of the corporate system as something different and “cooler”, and to demand a return to Real Technology. I want to see better ideas, better companies, and better cultures. I want to see funding for research and development, so that people can do intellectually interesting work without being thrown into the secondary labor market that academia has become. I want to see a world in which people actually care about solving hard problems and delivering real value. When this happens, the Technocrats can win again.

