Channel: michaelochurch – Michael O. Church

Gervais / MacLeod 22: Inferno


In Part 21, I wrote a summary of the modern Organizational Problem. Here’s a recap of the highlights:

  • As machines take over boring, commoditized work, the only stuff left for humans is convex work where enabling excellence is more important than excluding failure, which is not even possible if the work is difficult enough to be interesting. Traditional, risk-reductive approaches to management fail on convex work. 
  • Companies evolve, due to the inevitable corruption attendant to their internal credibility markets, toward a sociological state that is internally stable (due to the Effort Thermocline) but that renders them unable to compete on the market, and prone to moral abandon. This either drains them slowly (rank culture) or causes them to lapse into ethical depravity (Enron) that brings down the whole house.

Most of my focus, in Part 21, was on the macroscopic, impersonal forces that act on organizations. I briefly mentioned the conflict between lawful evil and chaotic good, as well as the ancient mechanism (induced depression) through which lawful evil asserts itself. Now I’m going to do a deep dive into the micro-scale illnesses of the organization, tunneling through all the layers, in an attempt to find solutions at each level.

I’ve dedicated quite a few chapters to trust. I’m fairly confident that I’ve solved the root financial problems already. I’ve also discussed the toxicity of distrust. We’ve covered a lot of ground, both technical and soft. What I believe I have achieved is to unite the sociological, the economic, the moral, and the financial elements of the organization. We’re almost ready to Solve It: to tackle the Organizational Problem. In Part 21, I summarized the macroscopic details of organizational decay (industrial and moral) but the problems are most tractable at the microscopic scale. We need to descend into the often personal hell of corporate life.

I’ll lay out the structure of our journey through the problem, stratum by stratum, in this part (Part 22). I have, however, needed to split what was intended as a “final post” into two: Part 23 will discuss the solution, following the same structure.

Let’s waste no time in getting ourselves to the gates of Corporate Hell.

First Circle: Opacity

We come to Limbo, the first circle, where we confront the sin of opacity. Are most people in Corporate America being fairly compensated? I’d argue that they’re not, but the real crime isn’t what they’re being paid. It’s that they’re deprived of so much information that they have almost no insight into their actual value, and no leverage.

Ultimately, no one knows what human labor (or any asset) is “worth”. It’s generally impossible to come up with fair values for everything in a coherent way. That’s why markets exist: because valuation is an extremely difficult computational problem that’s best performed by “selfish” actors (investors and arbitrageurs) equipped to take advantage of distributed knowledge. It’s easy to compute a “fair value” for commoditized material assets. For human labor, finding a fair value is inherently difficult even under the best conditions.

On the market for human labor, most people can’t tolerate the volatility that would be seen on a highly-liquid exchange market (e.g. the U.S. equity market) where values shift by 20 to 30 percent per year. Most people would not be able to survive, at current compensation levels, if the labor market had that kind of volatility. So liquidity (which makes commodity markets efficient) is not something most workers would desire: it would have their talents reallocated (i.e. job changes) on a monthly basis.

People (even those who love capitalism uncritically) have a hard time believing in markets. Can a $200-billion company really lose or gain $1 billion in true value in a day? Well, there’s a paradox inherent in markets, which is that price volatility and fairness seem to be (surprisingly) positively correlated. A price can absorb lots of signals and exhibit Brownian drift (which is probably harmless, in the long term) that makes it appear inconsistent because, clearly, the true value didn’t change that much in so little time. Or, it can absorb very little signal and have more superficial consistency, but less fairness, insofar as this begets illiquidity, “custom pricing”, and high premiums for middlemen. Markets either have to be pseudo-inconsistent (prices fluctuate based on small margins of supply and demand that are often almost random in their time of emergence) and fair, or consistently unfair. Most people’s salaries are set by the latter type of market, with unfairness layered on by asymmetries in information.
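
The “pseudo-inconsistent but fair” case can be illustrated with a toy simulation (a sketch only; the signal counts and volatilities here are invented for illustration, not calibrated to any real market). A price that honestly absorbs many small, independent signals performs a random walk: no single update is large, yet the cumulative yearly drift can easily land in the 20-to-30-percent range mentioned above.

```python
import random

random.seed(0)

def simulate_price(signals_per_day, signal_vol, days=252):
    """Simulate a price that absorbs independent signals as small
    multiplicative shocks (a discrete random walk in log-price)."""
    price = 100.0
    path = [price]
    for _ in range(days):
        for _ in range(signals_per_day):
            price *= 1.0 + random.gauss(0.0, signal_vol)
        path.append(price)
    return path

# A liquid market: many tiny signals per trading day. Each update is
# negligible on its own, but the year-over-year move is substantial.
path = simulate_price(signals_per_day=50, signal_vol=0.002)
yearly_move = abs(path[-1] / path[0] - 1.0)
print(f"price moved {yearly_move:.0%} over a year of small, fair updates")
```

The opaque alternative (a price that absorbs almost no signal) would look smoother, but the smoothness is bought by ignoring information rather than by the value actually being stable.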

With human labor, people are stuck in a bad position. I am in support of a basic income, but that’s not the world we live in. The need for a monthly income puts people into a state of extreme risk aversion– devastating and pernicious, but so ubiquitous that we fail to recognize it as perverse; it’s just normal. While they’d make more (on average) if they could supply services directly to the market, the income volatility that doing so would involve is much more than most people have the financial means to stomach. They’d rather get a consistent low rate for their labor (paid during sickness and on vacation and, if one excludes at-will termination, regardless of uncontrollable fluctuations in work quality) than deal with the vicissitudes of an impersonal market that only cares about what they produce, even if the latter is much better for them in the long run (and might deliver savings that leave them able to escape the corporate shackles).

What’s the problem? It’s not that wages are “unfairly” low. I can’t even assess whether that’s the case. Employment is an insurance trade, and low wages exist because of the risk premium. What’s a “fair” risk premium? One can’t assess that without building a market. We don’t know, and that’s the problem. The crime is that people really don’t know how much genuine risk reduction (if any) they’re getting. Since they can be fired “for performance”, while most white-collar jobs make performance impossible to measure objectively, I’d say it’s very little. There’s also a very strong argument to be made that labor is unfairly treated because a few major players on the other side control the market. So I have a very strong suspicion that most of these trades are unfair but, without a market to appeal to, there’s no proof.

The evil is in opacity. People enter the MacLeod Loser trade– taking a subordinate role in an established organization, rather than engaging with the market directly– in order to get rid of financial risk that most people have too little wealth to tolerate. In exchange, they get low wages that keep them in financial semi-desperation. They don’t know how much risk-reduction they’re actually getting (at-will employment) and, because the market is so tightly locked-down by major players, they don’t know what a risk-neutral fair price for their work is, so they can’t assess what they’re paying their employers for this insurance. Are they getting screwed? Probably, but I can’t prove that because it’s impossible to compute a fair price. I can prove another evil: they have no way of knowing whether they’re getting screwed. If they could evaluate their own deals and judge them fair, I wouldn’t be one to argue with them; but they will never have access to that information. Management keeps a tight-lipped approach to everything important– compensation, personnel policies, promotion guidelines, career planning– and, worse yet, brings brutal punishment down on those who share such information. Although this is technically illegal, many companies make it a fireable offense to disclose one’s own compensation.

With opacity, the severe asymmetry of information leaves one side unable to evaluate the fairness of the deal. The deal might be totally fair, but often it won’t be. This is why I made such a strong argument in favor of transparency in compensation. The poison of opacity must be driven out with force.

Opacity isn’t only about the financial aspect. It’s also about domination (managerial mystique). The manager knows the employee’s salary, but not vice versa. That’s intentional. Most companies bring their people into submission by hiding important information, scaring people into disadvantageous panic-trading, and taking the (highly profitable) other side. For concave labor, this didn’t damage operations too much; for convex work, it does. People need a certain amount of empowerment and information to do convex work well. They rarely get it, because management intervenes. I won’t say that management has no role in the technological-era, convex-labor world; but it will have to become a more dignified, advisory role involving the provision of direction and mentoring, rather than the carrot-and-stick extortion that exists now.

Most corporations evolve a set of rent-seeking high officers who rob investors and employees alike. They plunder investors through misappropriation of capital and dishonest representation of risk, while they use opacity over all important information to scare employees into terrible, panic-driven trades and subordination. Where does this lead? We must look at the winners of this trade, and that takes us right into the Second Circle: parasitism.

Second Circle: Parasitism

Every organization has people in it who have ceased to contribute, but continue to hold important roles and draw high compensation. Purges of “deadwood” (or “low performer” witch hunts) are usually directed at the bottom, but the worst problem employees are always people at the top (who’ve ceased to think of themselves as “employees”; they’re executive royalty!) Sure, there are small-scale subtracters at the bottom; but the worst are usually dividers at the top who suck all life out of the firm. Parasitism and even outright theft occur at all levels, but there’s a point (the Effort Thermocline) above which their prevalence increases sharply.

Low and opaque wages, fast firing that negates the promised risk reduction, and a general lack of respect for employees, all represent the “unfair” aspects of the corporation that we know and hate. There’s also an assumption that “the assholes at the top” are capturing large amounts of surplus value. That’s often right. Below the Effort Thermocline, value is created; above it, it’s captured.

Why do companies tolerate a class of rent-seeking parasites who add nothing, when it might be better for morale to fire them all and redistribute the proceeds to employees (profit sharing) and investors? Well, it turns out that much of mainstream economics (going back to Marx) makes a fatal error, which is to conflate labor and management by treating the latter as a subset of the former. On paper, that’s true. Sociologically, it’s not: managers do not see themselves as labor, do not act as such, and are not viewed as labor by other workers. By the technical terminology, CEOs are still “labor”, even though their compensation is not set by a fair market (but through self-serving deal-trading with other CEOs on whose boards they sit). In their minds, executives are the real owners. Here’s a breakdown of the corporation, with square brackets ([]) representing terminal nodes:

(Company)-----+
.             |
.     +-------+---------------+
.     |                       |
. [Capital]                   |
.                         ("Labor")-----------+
.                             |               |
.      [Executives]--------(Mgmt.)            |
.      a.k.a. Sociopaths      |               |
.                             |               |
~~~~~~~~~~~~~~~~~~~~~ EFFORT THERMOCLINE~~~~~~~~~~~~~~
.                             |               |
.                     [Middle Managers]       |
.                     a.k.a. Clueless         |
.                                             |
.                                         [Workers]
.                                         a.k.a. Losers

The old Marxist way of looking at the company has two tiers: bourgeoisie and proletariat. Capital vs. labor. That made sense when industrial processes were simple enough that anyone who held capital could manage them. If you wanted a good vs. evil narrative, you could equate capital to the rent-seeking slaveowners who had oppressed humankind for millennia, and labor to the slaves, and conditions for workers were so poor that you’d essentially be right. However, as the factories and machines and operations became more complicated, owners had to put professional, non-owning managers on the payroll, and that created the three-tier company: owners vs. managers vs. workers.

That gets us to three tiers, but there’s a distinct change of flavor between upper management (executives) and the floor-holding mere managers in the middle, who are still accountable for doing work; how’d we end up with four?

First, it should be obvious that managers are on the advantageous side of a principal-agent problem with investors because they control the books. While investors have the right to interrogate management, being the owners of the enterprise, they rarely know what questions to ask. Second, they have even more advantages over the workers, being able not only to fire them but also (if daring enough to risk a lawsuit) to inflict long-term damage using work’s feudalist reputation economy. They have a position of power over both sides, and one that can be very profitably (for them) exploited.

I would also argue that workers are investors. The modern concept of the career developed as an antidote to this socially unstable dynamic of owners against workers. For many people, that sharp dichotomy between the two categories is deeply anachronistic, because workers have the option of moving up to higher-skilled labor in the modern, fluid economy. “Worker” is no longer a permanent class, at least in theory. So workers are investors of time. Finally, with public equity ownership and 401(k) plans, they are literally investors.

Yet there’s a lot of opacity in the labor market, especially pertaining to careers. Is the career game a free market, or a feudal reputation economy? When you have what claims to be a market economy, but that uses opacity as a tool of exploitation and has clearly copied half of its pages from pernicious feudalism, there’s a lot of fear that can be exploited. High levels of ambient fear will separate people into “protectors”, vassals, and peasants. Out of an anarchic power vacuum that cannot last for long– peoples’ nerves can only take so much– brutal strongmen rise into power. That’s where executives come from.

Executives are a subset of managers who arrogate to themselves the role of real owners of the company, more important than passive investors and simply more powerful than workers. The owners-versus-workers dichotomy comes back, this time between rent-seeking, non-producing executives above the Effort Thermocline, and a labor sector that includes the terminal middle-management (Clueless) who failed to include themselves in that arrogation, and the risk-averse non-strivers (Losers).

So, executives are the ones who get to such a level of invulnerability that they command large salaries just for “making decisions”. The morally degenerate, high-status ones most willing to abuse their principal-agent advantage create sinecures where they have lots of power, but no responsibility. Now there’s a four-tier enterprise: investors vs. executives vs. middle managers vs. workers. This mirrors the MacLeod hierarchy precisely (with investors, being organizationally passive, not on the chart).

Managers can rob investors financially by misleading them, and they can steal from employees on the credibility market, or by misleading them about the career-building value of the work they are doing. Those who excel at both tend to develop an outsized social status and become the executive Sociopaths. Those who either refuse or are unable to participate in such robbery will linger in the less dignified middle management tier and become the Clueless.

Investors and (non-executive) employees share a lot in common, in truth. Employees are investors of time, and often literal investors as well. So why are executives so easily able to rob the rest of the company blind? Shouldn’t investors and workers (often the same people, since most workers’ retirement assets are invested in corporate bonds and equities) band together and drop a pipe on that shit?

It’s a nice idea, but it turns out not to be so easy. First, let’s take an investor’s standpoint. Corporate governance is– and I mean this literally and non-pejoratively– a plutocracy. It’s voting, proportional to investment. Unfortunately, any aggregative voting system is at risk of corruption, because there are a lot of passive players who don’t really care either way, and who will be inclined to swing their votes for personal favors: cheap votes. That’s why vote-buying must be made illegal in a democracy: there are a lot of people out there who would swing their votes for $100. Advertising, for one example, is all about capturing the advantage that a brand holds over cheap voters who prioritize product image over quality.

The civil danger of cheap votes is that a voting bloc’s power grows as the square (in statistical impact, measured by variance) of its size, which means that people who develop the ability to harness cheap votes and tie them together become extremely powerful, and can hijack the system even if they’re only able to buy a small share of votes.

Cheap vote problems are typically solved by electing representatives whose job is not to be cheap and giving them disproportionate power, but also empowering voters to fire them, on the assumption that they’ll do a better job than the political machines that specialize in cheap-vote trading. That’s why management exists. Permanent managers are held to be more effective in running the company than a plutocracy subjected to cheap-vote abuses.
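
The square-law claim can be checked with a toy Monte Carlo (the voter counts here are hypothetical, chosen only for illustration): if each voter contributes ±1, the variance of k independent voters’ total is about k, while a bloc of k members who always swing together has variance about k².

```python
import random

random.seed(1)

def sample_variance(samples):
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def bloc_variance(k, trials=20000):
    """Variance of a bloc's net vote when all k members swing together."""
    return sample_variance([k * random.choice([-1, 1]) for _ in range(trials)])

def independent_variance(k, trials=20000):
    """Variance of k voters' net vote when each flips independently."""
    return sample_variance(
        [sum(random.choice([-1, 1]) for _ in range(k)) for _ in range(trials)]
    )

# Ten independent voters: variance ~10. A bloc of ten: variance ~100.
print(round(independent_variance(10)), round(bloc_variance(10)))
```

This is why tying cheap votes together is so profitable: the bloc-builder’s statistical leverage grows much faster than the number of votes bought.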

When that runs poorly, management loots and investors lose. In truth, as a cynic, I’d argue that if typical management had its way, there would never be profits. What would be profit would go directly toward the executive payroll.

On the side of labor (true, non-executive labor) there’s a different cheap votes problem. Employees are the cheap votes. How often does a low-level worker take up cause against her employer, risking termination (likely) and damage to her reputation (a possibility, in the feudal reputation economy of references and resumes) in doing so? It’s so rare that it makes the news, and such people are often blacklisted and ruined for doing it. Whistleblowing is an activity where there’s a fixed amount of punishment to be allocated to a variable number of adversaries, which makes isolated whistleblowing so dangerous it rarely happens; with no one willing to be the first, one sees a culture of terrified silence. A powerful company can pretty easily ruin a single person’s professional reputation– with frivolous lawsuits against her, negative references, and possibly negative statements to the press about the departure. All of this is illegal, but she’s a single person up against a company with limitless resources.

On the other hand, if 20 people blow the whistle, the company can’t discredit all of them. It must go into “damage control mode” to repair its image. It will offer generous settlements. If a thousand people act, and talented people start losing faith in the company and leaving, the firm will actually need to change its behavior. However, conditions are such that unethical companies and managers can, like Gus Fring, hide in plain sight, because people are too scared of whistleblowing’s consequences for the opposition ever to get to 20, much less a thousand.

Opacity is justified by expediency (“we can’t have transparent compensation; that would just be crazy”) but it conceals the fact that a powerful set of people abuse information to rob investors and workers to an extreme degree. Is it a conspiracy? No, I wouldn’t go that far. As I said, executives hide in plain sight. They don’t hide the fact, for just one example, that those who fight opacity by openly disclosing their salaries are socially excluded, isolated, and eventually fired for it. (This, also, is illegal. Disclosure of salary is protected; it’s an anti-union-busting provision.) It’s pretty well known what happens, in the white collar world, to people who disclose their salaries. The true dishonesty pertains not to the social norms, but to their reasons for existing. Managers claim they fire those who engage in salary discussion because it’s “rude” and “threatening to team cohesion”, when their real motivations are more sinister.

Executive parasitism is a huge problem for most companies– much more of one than operational inefficiency or unfavorable market conditions. Its severity is one thing that the trust-averse “Theory X” got right: given too much power, people will turn to parasitism as they focus on protecting what they have. The problem with Theory X is that the gun points the wrong way. In Theory X corporations, prevailing distrust is used to justify abuse of workers by management, while management must be trusted because both sides (workers and investors) have no other choice– they are just too far out of power. Theory X uses prevailing distrust to shift power to those who are least deserving of any trust.

The old, Marxist, model puts investors and workers on two sides of a chasm, with each side despising the other. Theory-X management steps in and tells investors, “Hire us, and we’ll keep your workers from stealing from you”. That’s their sales pitch. (In the modern economy, workers who favor their career goals over the organization’s are seen as “time thieves”; I disagree, but that’s a side note.) It turns out that professional managers are very good at preventing stealing; they want all the action for themselves!

Workers and investors don’t belong on opposite sides of some hadal chasm anymore. We need to recognize our common enemy: looting management. The “workers vs. investors” concept made sense in 1848 when the vast majority of people were not only desperately poor, but locked into dead-end labor with no chance of improvement. There was no such thing as a career or a 401(k). It’s not true in 2013. Workers can be well-compensated and treated well if they develop unique skills that give them leverage. Their main obstacle as “career capitalists” is not knowing what the market will value, nor the true long-term needs of society that they might be able to fulfill (later on) for a profit. Their managers certainly have no interest in showing them the way. That brings us into…

Third Circle: Career incoherency

If workers can be viewed as investors (the careerist perspective) then a question arises: why do so many people end up stuck in poor investment strategies that, quite visibly, pay off poorly? A well-managed asset portfolio can appreciate by 6% per year without any work. People (at least in the top 15% by talent, and maybe more) ought to be able, pretty easily, to garner 8 to 15% annual increases (at least for a good 10 to 20 years, at which point they’re into very-high-skill labor and legitimately wealthy) through their control of one of the most important variables, which is how hard they work. Yet we don’t see that.
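
To make the gap concrete, here’s the compounding arithmetic behind those rates (the numbers are the ones from the paragraph above, nothing more):

```python
def compound(rate, years, principal=1.0):
    """Value of an asset (or skill base) growing at a fixed annual rate."""
    return principal * (1.0 + rate) ** years

# A passive portfolio at 6%/yr roughly triples in 20 years,
# while a career compounding at 12%/yr grows almost tenfold.
print(f"6% for 20 years:  {compound(0.06, 20):.2f}x")   # ~3.21x
print(f"12% for 20 years: {compound(0.12, 20):.2f}x")   # ~9.65x
```

A few percentage points of annual skill growth, sustained over a career, are the difference between modest appreciation and an order-of-magnitude gain; that is what makes the question of why people get stuck so pressing.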

Becoming great at something– good enough to make a substantial living on the free market at convex work– requires the proverbial “10,000 hours” of deliberate practice. I don’t care to debate the exact number; that’s certainly the right order of magnitude, and it’s probably within +50% for most fields of endeavor. At 10,000 hours, most people should have independent reputations and credibility, and the right to access the market’s will-to-pay (or, at least, their own company’s) directly rather than through a manager/pimp taking god-knows-what (opacity) percent. That would take 5 years, given a typical 2000-hour work year. Yet most people don’t even get there in 25 years. Why? Most of them are assigned to crappy work which confers no career benefit. They’re putting 2000 hours in at “work”, but they’re not getting deliberate practice. This is the norm in organizational employment, and most people never really get out of the dues-paying period. They’re lucky if they get 200 hours of quality work in a year, which means they never become good at what they do. The lack of developed skill leaves them with no leverage and they can be exploited in perpetuity.
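
The arithmetic behind that claim is worth spelling out (same numbers as above):

```python
import math

def years_to_mastery(quality_hours_per_year, target_hours=10_000):
    """Years needed to accumulate the proverbial 10,000 hours of
    deliberate practice at a given rate of quality hours per year."""
    return math.ceil(target_hours / quality_hours_per_year)

# If every hour of a 2000-hour work year counted, mastery takes 5 years.
print(years_to_mastery(2000))  # 5
# At the 200 quality hours per year typical of dues-paying corporate work,
# it takes 50 years: effectively never, within a single career.
print(years_to_mastery(200))   # 50
```

The gap between 5 and 50 years is the entire difference between a career that compounds and one that stagnates.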

In the long term, and for society, this is devastating. One of the first things an economist learns about the Third World is that cheap labor is a curse, because it makes labor-saving technologies (that would make society richer) too expensive by comparison. Why buy a dishwasher when you can pay someone 30 cents an hour to wash dishes? National elites become addicted, unknowingly, to cheap labor and their countries decline. Why invest in people, for the future, when you can exploit them in the present? The long-term result of this is that everyone (even that national elite) loses. This is more true than ever in the corporate world. Corporate managers are averse to training employees out of a fear that more marketable underlings will be likely to leave them. In reality, the reverse seems to be true. Good people leave jobs because they stop learning, not because they learn “too much” and find better employment elsewhere. I’m an unapologetic job hopper and I would easily stay at a job (perhaps for 10 years, perhaps until retirement) if I genuinely believed I was improving my market value by 20% per year. But if I’m not learning anything, then my rate of growth is negative 5 percent per year, which I just won’t tolerate.

Career coherency exists when one’s job requirements also serve one’s career– there is no conflict between the immediate work assignments and the person’s long-term career goals. People do their best work when career coherency is the case, and (except for the MacLeod Clueless) they will do as little as they can get away with amid incoherency.

Of course, career incoherency is depressingly common. Companies tend to load the junior or politically unsuccessful with fourth quadrant work that “just needs to get done”, so people get resentful and leave. The truth is that only the MacLeod Clueless take career-incoherent assigned work very seriously. Losers manage themselves to the Socially Accepted Median Effort (SAME) but slack off beyond that point, while Sociopaths often blow it off entirely. A common Sociopath response to being assigned career-incoherent work is to fail badly at it, but in such a way that the blame can’t be directly assigned to him (to use Venkat Rao’s terminology, a “Hanlon dodge”). This is a hard balance to strike, but extremely powerful when it works. What eventually happens is that management, should it distract him with crappy work, is punished just enough that future crappy work is sent elsewhere, but not so severely that it makes the Sociopath look incompetent or noncompliant. He performs poorly while retaining plausible deniability, and it helps if management is a little bit scared of him (“if I keep giving him low-yield assignments and he fails, maybe he’ll blame me“). The Sociopath figures out just how far he will get the benefit of the doubt, and plays accordingly.

In general, the best and worst employees of a company tend toward a self-executive– meaning that they serve investors’ interests and their own directly, ignoring the interference of parasitic executives– and almost insubordinate pattern of behavior. What about the moral and industrial middle classes? Those are the ones most affected by the prevailing culture. In a rank culture, they’ll be compliant and superficially loyal. In a tough culture, they’ll be viciously competitive. In a self-executive culture, they’ll tend to be technocratic and organizationally altruistic. The very good and very bad are pretty much the same wherever one goes, and both categories will blow off career-incoherent management and work (although for very different reasons) so it tends to be people in the middle who define, and also who are most defined by, the corporate culture.

What happens if a company shuts off self-executivity? Do the very-good and very-bad stop being insubordinate? Absolutely not. The very good are pushed into increasingly desperate public stands for what is right. It should be predictable what happens to them: they get fired, or humiliated so badly that they must resign. The very bad, however, tend to be good enough at playing the people to stick around. When self-executivity is outlawed, only outlaws will be self-executive (i.e. be able to get anything done). The result of this is an environment similar to the violent black markets that emerged in the Soviet Union; the shadowy nature of transactions puts a lot of otherwise unconnected baggage and friction on them. Even staid products like lightbulbs, when there was a public shortage, often had to be obtained through channels more like speakeasies than hardware stores. A modern analogue is the market for illegal drugs. Yes, most of the products are poisonous, but the violence surrounding this economic activity has a lot more to do with the illegality of the trade than the toxicity of the product, which most serious players in the business never even use.

When you shut off self-executivity, you shut off internal innovation– to a point. The very-good insubordinates will still tilt at windmills, and be summarily fired for the quixotry. The very-bad will furtively pursue innovations on their own, but with malevolent intent. That’s how you get evil innovations like Google’s “calibration scores”, which were an obvious (and, sadly, successful) move to sabotage the company. Those are the direct product of what happens when self-executivity is pushed to the black market.

In the short term, under desperate circumstances, debate and self-executivity must be curtailed out of the need for focused dedication toward a coherent definition of corporate success. This is one reason why, while I support open allocation, I recognize it as inappropriate (at least at full extent) for the needs of small, bootstrapped companies. They should pursue open allocation in spirit and values, but shipping a coherent product takes first priority. However, large and wealthy technology companies have a moral responsibility to implement open allocation; if they shirk it, they not only fail morally but, because they cease to have a culture worth caring about, they also demolish themselves through brain drain.

Why do companies shut down self-executive behavior? Why aren’t employees allowed to manage their careers and contribute directly to the company’s internal market? Self-executive work is, empirically, typically worth at least three times as much as traditionally managed work. (In software, it’s more like 10, hence the phrase “10x programmer”.) So shouldn’t there be infinite demand for something that makes everyone 3+ times more productive? Well, to answer that one, we must progress further into corporate hell…

Fourth Circle: False scarcity

One prevailing trait of the Clueless is that they never question arguments from scarcity or desperation. “Deadlines” must be met, and “there’s no money in the budget” is taken at face value. This enables the Sociopaths at the top to create a false scarcity that empowers the Clueless to do reprehensible things that they otherwise wouldn’t. It’s the opposite of 1984. In Orwell’s depiction of totalitarian socialism, people were misled with false claims of prosperity. Corporations go the other way, by creating a phony scarcity while million-dollar bonuses are funneled to the executive thugs who enforce it.

That’s where career incoherency often comes from. The company “just can’t afford” to do things properly, to compensate fairly, or to invest in employees. Things just need to be done, this way, and now. Debate and a progressive outlook can’t be afforded. The “emergency mode” in which there might be justifiable cause for curtailing employee autonomy and long-term concerns becomes permanent.

Why does this actually work? It seems counterintuitive. Soviet governments lied about being rich and exaggerated growth figures to improve morale; American corporations claim to be poorer than they are. Why? Wouldn’t that damage morale?

I lack a more global perspective on this, but I think that false scarcity is most effective in an American setting, where individual prospects trump corporate morale. People aren’t going to be especially bothered by the idea that their company is poor or unsuccessful, as long as they have a good place in it. They’d rather the firm be rich, obviously, but they care more about their individual station and long-term career prospects than the organization itself. Caring about the macroscopic reputation of one’s firm is a luxury for real members of it (executives, who have something to lose if it goes down). What is the firm’s prestige, they ask, going to do for me? Americans (more so than other nationalities) are happy to work for unsuccessful, uninspiring, and even desperate institutions if there’s personal gain (compensation, promotions, career opportunities) to be made in doing so. One major exception is academia, where social status is somewhat divorced from compensation. In the rest of the economy, people don’t mind working for macroscopically mediocre institutions if they have an inside track to a legitimate role.

In this way, companies don’t lie to their own people about how successful or strong they are. Instead, they present themselves as somewhat weak and hampered so that whatever bone is thrown to a worker seems like a genuine favor. “We have no raise pool this year, but we really want to keep you so I fought for days and got you a 2-percent cost-of-living increase, which no one else is getting.” Excluding academia, Americans would rather have excellent positions in mediocre companies than crappy, subordinate positions in excellent companies. In an opaque regime where workers rarely know what others are getting in terms of compensation, career development, and project allocation, the self-effacing company can mislead many workers into each believing that they are the favorite: what you’re getting is meager, but everyone else is getting less.

The false scarcity has one toxic side effect. Sociopaths recognize it as a negotiation tactic and leave it at that, but the Clueless actually buy into it and “volunteer” to enforce its directives. So they end up shutting down self-executivity, as the desperation mentality evolves into a cargo cult of “urgency” enforced by idiotic middle managers. For an analogue: in the 1990s, there was a fad where unskilled programmers would direct compilers to inline aggressively because “it makes the code go faster”. That’s not always true, and it’s not the whole story. Some code runs faster when the compiler is told to inline heavily but, in general, people are not smarter about such details than the compiler writers, and abuse of inlining makes the generated code substantially worse. The word urgent is the manager’s variety of “inline-all”. “It makes people work faster.”

False scarcity is a present-term negotiation tactic with deleterious long-term effects. Let’s take the Socratic method to an employee asking his manager for a raise. Here’s a conversation from Anywhere, U.S.A. between manager (Kim) and employee (Larry):

Larry: …so based on the market for Widgeteers in this region, I believe I should be making $85,000.

Kim: I can’t give you a raise, Larry. You’re already at the maximum salary for Widgeteer III’s.

Larry: I know that, but I’ve been doing the work of a Widgeteer IV for almost two years. Even you’d agree that I do more work than John did, and he was a Senior Staff Widgeteer when he retired.

Kim: You’re welcome to apply for promotion in February, but I won’t be able to support you. You need three consecutive 3’s on your performance reviews and I can only give you a 2 this year. Meets Expectations.

Larry: Explain to me why I’m a Two. I was a Three last year and I’ve only improved.

Kim: Last year, I had 24 review points to give out for the Proton team, but this year Jake got three more review points for his team because he’s fucking Janice, so I get three points less. I only have 21. I have to give Alex a 3 or he’ll mope and get nothing done for a year– you know how he is– but with only 21 points, if I give more than two 3’s, I have to give someone a 1. The damn paperwork that comes with a 1, well… you just wouldn’t believe it.

Larry: Would you be able to move me to the Neutron team? I’m sure you have plenty of points for that project, given the launch.

Kim: The Neutron team does not have enough headcount to accept Twos. We don’t want people who merely Meet Expectations.

Larry: But you just admitted that I deserve to be a Three!

Kim: Go take your Meets-Expectations ass somewhere else, Twoser. I knew you were a Two the day I met you.

Larry: Well, wait. What is it that the Neutron team seeks? Maybe I could learn the skills now, and when a slot opens up, I could make the transition smoother.

Kim: We have deadlines. There’s no way that I can allow you to learn on company time. We just don’t have the slack. You need to put your head down and keep working on Proton.

Larry: So I can’t move to Neutron because I don’t have the skills, and you won’t let me take the time to learn them, even though it’s a project of much higher business value?

Kim: That’s right.

Larry: What if I spend a day per week learning Neutron, and come in on Saturdays?

Kim: I wouldn’t normally ask you to work on Saturdays, but if you’re going to come in on weekends, I ask that you work only on your assigned project. I can’t have you getting distracted. If I suspect that you are using Saturdays to pursue side projects and you are doing it on company resources, I will have to write you up for insubordination.

Larry: What if I work on my assigned project on Saturdays, and spend Fridays with the Neutron team?

Kim: Larry, I am not in on Saturdays and I am not going to come in on the weekend just to make sure you get your work done.

Larry: But you know that I get my work done!

Kim: The performance review I am writing says ‘Meets Expectations’. One point lower and you’d be a ‘1’ and I’d have to write a Performance Improvement Plan. This would require me to write a negative summary of your performance, with dates and events to create the perception of legalistic precision when really it’d be all bullshit and we’d both know it. You are not a One, but you are clearly a Two, an expectations-meeter, because I have no more review points to give you. That means that I am disallowed from knowing that you get your work done.

Larry: What if I apply to the work-from-home program, but still come into the office five days a week, so that I can work Saturdays remotely?

Kim: That is an option, but your file says you live in zip code Q6502. You would be in a Category IV location for cost-of-living, and I would have to dock your pay by $6,500. So that is not going to get you your raise.

Larry: So what happens if I apply for transfer to a team that has more room for growth?

Kim: I will send you links to appropriate resources and make introductions to other teams’ managers for you, but then I will put negative commentary about your performance on your personnel file that you won’t be able to see. No one will want you, for the simple reason that I have credibility and you don’t. You won’t know what I’m saying and will have no way of appealing it. In this way, I am like a feeder who makes his captive unhealthy, sexually repulsive to other men, and preferably immobile, so as to have complete dominance over her because no one will want her once she is morbidly obese. The difference between you and a bedridden feedee force-fed 18,000 calories per day is that you won’t know that it’s happening, and it will be entirely without your consent. Then, I will allow your position to be cut from my team in a trade that gets me a $5,000 personal bonus and 2 more review points so I can get Bob promoted and not have to deal with his damn high-pitched voice anymore. You will have three weeks to endure transfer interviews in which you will have no chance, because I’ve already smeared your performance review history, at the end of which you will be fired not by me, but by a person you’ve never met.

Larry: Well, that might not be so bad. Is there a severance package?

Kim: We don’t like severance packages because it means we are rewarding failure. Besides, there’s no money in the budget… [pauses, lowers voice] However, my promotion packet is coming down to decimal points, so those “360-degree” reviews of bosses that usually don’t matter? This might be the one time in a hundred where director-level people give a shit what people like you have to say. If you let me write your review, we can work out a story that gets you $20,000. How’s that sound?

Larry: Make it forty thousand.

Kim: 27-five. A pleasure!

This might seem like an attempt at anti-corporate humor. It’s not. Conversations like this actually go down all over the place in Corporate America. I’ve probably seen every perversion in this (except for the word “Twoser”) at least once.
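The zero-sum arithmetic that drives Kim’s side of this dialogue is easy to sketch. Here is a minimal illustration in Python, assuming a pool of 21 review points (as in the dialogue), a hypothetical team of nine (the team size is my assumption, not stated in the text), and the convention that each rating “costs” its face value:

```python
# Hypothetical sketch of a fixed "review point" budget. The pool of 21
# comes from the dialogue; the team size of nine is an assumption, so
# the exact breakpoints here won't match Kim's numbers precisely.

POOL = 21   # down from 24 last year
TEAM = 9    # assumed headcount on the Proton team

def fits(pool, ratings):
    """A rating plan fits iff its total face value stays within the pool."""
    return sum(ratings) <= pool

# Three 3s and six 2s spend the pool exactly:
plan_a = [3, 3, 3] + [2] * (TEAM - 3)
print(fits(POOL, plan_a), sum(plan_a))         # True 21

# A fourth 3 cannot be funded unless someone is demoted to a 1:
plan_b = [3, 3, 3, 3] + [2] * (TEAM - 4)       # 12 + 10 = 22 points
plan_c = [3, 3, 3, 3] + [2] * (TEAM - 5) + [1] # 12 + 8 + 1 = 21 points
print(fits(POOL, plan_b), fits(POOL, plan_c))  # False True
```

The breakpoints shift with the assumed team size and pool, but the structure is the same everywhere the pool is fixed from above: one person’s 3 must eventually be paid for by someone else’s 1, regardless of how anyone actually performed.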

Corporations have a problem with abuse of process, but there’s something else that pervasive scarcity allows. Abusive process. It comes down to the snake and the grass. The snake is seen as vicious and malignant. The grass is viewed as being compliant and beneficent. But what covers the snake, so it can strike? The grass. The corporate grass has a good-cop, bad-cop flavor. There are abusive policies that exist (justified by false scarcity constraints and an overzealous need for bureaucratic consistency) that are so severe that nothing can get done unless exceptions are made, but exceptions are made so often that people view them as harmless, like a too-low speed limit in a place where no one is ticketed unless actually at an unsafe speed. The rules on paper are the ugly, barren ground that would be exposed without plant cover. Then, there are the “nice guy” makers of exceptions who enable people to actually get stuff done. They’re the grass. The snakes are the ones who have the power to turn off the making of exceptions in order to bring down a rival.

The worst scarcity companies generate is the scarcity of work. The “problem” with self-executive employees is that they tend to generate projects that management never imagined. They create work for themselves. Good work. Work of a much higher quality than is typically seen in something assigned by management, because they’re self-motivated. This is good for them and their employers, but their managers view it as a negative. They might outshine the master.

This brings us to the so-called “lump of labor” fallacy. How fallacious is it? Is the demand for labor fixed and limited, or can it grow as people and society progress? On one hand, there will always be limitless demand for making peoples’ lives better. On the other, structured work environments generate a pernicious and visible work scarcity.

If demand for work is truly finite, you get a competitive society where the fight to “get” work is more defining than the actual doing of work. If it’s limitless, you get structural cooperation as people work to make each other (and themselves) more productive. Most economists consider the “lump of labor”– that there’s a finite amount of work to go around, leaving us in zero-sum competition for it– to be an erroneous and regressive mentality. In the abstract, they’re right. Adding value, improving processes, and making peoples’ lives better should always have limitless demand.

However, within the typical corporation, the lump-of-labor mentality is pervasive and almost a permanent fixture. You need the attention of a manager (a professional “no man”) to get a project sanctioned. “Plum projects”– the rare case of desirable work that has high-level sanction– are handed out as political favors. High-impact work is directed only to the managerially blessed; most people don’t get any of that and are loaded up with fourth-quadrant evaluative nonsense with no purpose other than the living out of a painful dues-paying period. Sure, in the real world that exists outside of this corporate bullshit, there’s limitless need for people to make life better, often by implementing ideas that no executive would ever think of. However, under the corporate regime, there is a fixed (and small) amount of sanctioned work that it will accept as sufficient justification for retaining an employee. This means that the lump-of-labor slugfest– a race to the bottom among MacLeod Losers and Clueless as even the fucking process of getting to do real work becomes competitive– is very much in force over corporate denizens.

As corporations begin to believe their own false-scarcity myths (perpetrated by Sociopathic robbers at the top, and implemented by Clueless useful idiots in middle management) they start to fall under the delusion that they’ll fail outright unless all work is directed toward “sanctioned projects” as defined by a small, powerful set of people. Executives are often too far out of touch to have any clue what projects deserve sanction, and middle managers are both distracted by their own career needs and hobbled by their own tendency toward Cluelessness. Thus, they tend to generate a “sanctioned project” pool that is not only small but also increasingly divorced, as time goes on, from the company’s actual business needs.

It’s the increasingly myopic scope of “sanctioned projects”, and the morally degenerate competitive infrastructure (closed allocation) that builds up around them, that makes most corporate workplaces so horrible. But what’s the alternative? Can workers really just be allowed to define their own work? Well, it works for Valve and Github, two of the most successful companies out there. With self-executive work being worth 3 to 10+ times as much as traditionally managed work, it’s an unambiguous win for a company that can afford the risk and, with computational machinery now extremely cheap, that essentially only requires trusting them with their own time.

So why is this so rare? We have to go into a deeper Circle of Hell for that…

Fifth Circle: Trust sparsity

I focused heavily on trust in parts 17 (financial trust and transparency), 18 (industrial trust and time management), 19 (living in truth vs. convex dishonesty), and 20 (simple trust vs. Bozo Bit) because it’s increasingly clear how much damage is done to organizations by the lack of it. When you see large companies “acq-hiring” mediocre engineers at $10 million per head, it becomes clear that firms are desperate.

What are they desperate for? These “acq-hiring” firms have plenty of engineering talent in-house, but they get to a point where the prevailing assumption is that all of their own people are incompetent, lazy, and ineffectual, so they staff important projects with external jackasses bought in at a panic price. This behavior is a lot like compulsive hoarding, where a person’s living quarters become so messy that he has to buy a new winter coat every November not because he wears out the old one, but because his house is too much of a mess for him to find it again. The one difference is that, for the metaphor to apply fully to acq-hires, he’d have to be spending $15,000 on a $200 coat.

Why do companies hire so many people but trust so few of them? I examined this in Part 20, but the gist of it is that, while the larger concept of trust has degrees and variations, simple trust (whether a person is treated as competent and worthy of respect) tends to be binary (“bozo bit”) and it is usually a global systemic property of a group of people. It is either trust-dense, meaning people are generally held to be independently credible, or trust-sparse, which tends to generate a dysfunctional array of warring cliques. In a trust-sparse environment, being without a clique leads to isolation, exclusion, and failure, so the pressure to become part of one generates a feudalistic pattern of behavior.

Because simple trust is a binary and systemic property, one person “flipping the switch” can turn off the lights for good. It just doesn’t take much. Trust density, although the core of any healthy business, seems to be fragile. What makes it this way? I’ve concluded that there’s a cardinal rule that organizations, unless they want hell to break loose, must follow: don’t hire before you trust.

Most rules in business have exceptions. Not this one. Only hire people with the intention of investing simple trust– trust to do the right thing with their own time– in them. If you hire someone who proves unworthy of simple trust– it’s uncommon, but it happens– then fire him. Don’t be a dick about it– write a generous severance package– but get him out as quickly as you can. Also, don’t keep an unethical high performer around just because he’s “hard to replace”. You need a company where everyone can be afforded simple trust and, if you lose that, it’s almost impossible to get it back.

Plenty of companies hire people with full intention never to make them real members of the company. They’re brought in for grunt work, because it seems less risky to hire a schmuck off the street (the hiring can be undone) than to automate the undesirable work (a project that might fail). Bad move. This addiction to cheap labor accelerates itself because a trust-sparse company can never find and trust capable people who’ll automate the crappy work, which is what should be done. Soon enough, the company is one where new hires spawn with the Bozo Bit in the “on” position, which creates resentment between the old and new hires. No one likes being seen as a “bozo” and it turns out that the most reliable ways to turn off one’s bozo bit are generally considered unethical (convex dishonesty). The corrosion is pretty much immediate.

Why in the hell would a company sell off its culture, and hire people it distrusts? I’m actually going to sample from evolutionary biology and invoke r- and K-selection. An r-strategic species aims for rampant proliferation but low quality of individual offspring. A hundred may be born, but only a few will survive. On the other hand, a K-strategist aims to have few, highly successful, offspring. In humans, women tend toward K-selection (because of the natural reproductive bottleneck) while men can be r- or K-strategists. However, r-strategic behavior in men leads to positional violence, maltreatment of women, and population catastrophes. Civilization began when humans discovered monogamy and, instead of successful men having tens of “wives” (sexual slaves in a harem) and hundreds of children, with no paternal investment, they were encouraged to have few wives (often, only one) and a smaller number of children, in whom they invested highly. In other words, civilization began when men were forced to be mostly monogamous K-strategists, ending the extreme frequency (death rates of 0.5 to 1 percent per year) of male positional violence and enabling stability.

If we view business as reproduction– of work processes and values, knowledge and relationships– then we find that there are also r- and K-selective business strategies. K-strategist “parents” (bosses) want to have few “children” and invest in them highly, treating them as proteges or advisees. More common are the r-strategic corporations that hire a bunch of people, invest nothing in their careers, and expect only a few to thrive. For concave work, the r-strategic approach was probably the most profitable one, since adding more heads meant pulling in more dollars. But the 21st century is showing us that, for convex work, a K-strategic approach to business expansion is the only way to go.

It is bad that these r-selective companies hire before they trust, and it is also dishonest. When recruiting, companies engage people by telling them a story of what kind of work they’d be doing as trusted real members of the team while failing to state that most new hires will end up in the untrusted, bozo category for arbitrary reasons. It’s when that happens that companies start to evolve credibility black markets, and the panicked trading that transfers power to ethical degenerates sets in. To understand the process behind this, we have to descend yet again, into…

Sixth Circle: Passive aggression 

Tough cultures believe that within-company competition is beneficial and makes people work harder and produce more. They’re wrong. Rank cultures tend toward “harem queen” dynamics as people jockey for managerial favor. That’s bad as well. Dysfunctional companies, and that’s most of them, tend to be marked by passive-aggressive behavior and social competition. The contest for the artificially scarce resources (good projects, managerial attention) that success depends upon, and the fear of ending up in the bozo category, generate patterns of behavior so negative that they ruin a company outright.

This style of degeneracy is hard for managers to detect because, when workers are engaged in it, they appear affable and dedicated. They race against each other to work the longest hours, and take on responsibility to gain power over critical nodes of the company. This is also why it’s critically important to fire unethical high performers, no matter how “indispensable” they seem. Unethical people, unless they are completely devoid (like, bottom 1%) of social skills, will always seem like high performers, and they will always appear indispensable. They develop the skills of shifting blame and taking credit so rapidly that by their early 20s they have more experience in manipulating people (yes, that includes you) than most people have in a lifetime. They always seem too important to get rid of. Don’t fall for that. You can afford to fire an unethical high performer, especially because he’s probably not a legitimate high-achiever; you can’t afford not to.

Social competition is what the truly toxic use in order to get their way. They isolate targets and rivals, and they often take advantage of the false scarcity in work allocation to make sure that the best people get the worst work, driving them out. Clueless middle managers, who take complaints from favorites at face value, are typically oblivious to the demoralizing backstabbing that goes on in front of them. They’re just bad judges of character. It always gets me when managers say they infallibly shut down anyone who tries to “play politics” under them; if anyone is visibly playing politics, he is clearly unsuccessful at it, and perhaps he took the blame for someone else’s political plays. Sociopaths, on the other hand, don’t care too much about the character of people working for them either way, so long as they aren’t a personal threat to their goals. Clueless don’t know about all the nasty politics that exists below them; Sociopaths can see it but don’t care.

On the whole, however, people tend to agree that ethical character is important; even Sociopaths don’t want to deal with those who will rob them. Character is far more important than talent. The problem is that it’s very hard to judge a person’s ethical mettle. How does one know what a person would do in extreme circumstances, when such conditions are so rare? That’s where human social dynamics come into play. People assume, often wrongly, that the little betrays the big. In a heterogeneous world, this fails in a major way. If someone pronounces words in a slightly different and characteristic way, that’s called an accent and it’s not a sign of stupidity. If someone can’t work 80 hours per week, that’s not a sign of poor ethical character but an artifact of typical human limits and of health problems that are irrelevant at the 40-hour level. Yet, human social organisms tend to believe, in spite of the ridiculousness of it, that social reliability (the little) betrays true character (the big). Thus, companies tend to attempt to measure ethical character through superficial reliability contests, and the amusing thing there is that, even though they consistently backfire by promoting the bad people who are most used to such contests, corporations (especially tough cultures, where reliability tests are the point) continue to use them.

Winners of reliability contests tend to be the worst people, because the artificial “crunch times”, deadlines, and scarcity push people to their limits and strain their social resources. Psychopaths are naturally adept at manipulating this in order to make sure others faceplant first, leaving them standing. Psychopaths don’t tire in social competition because it’s fun for them to watch everyone else burn out. Management, in general, is not capable of figuring out when this is happening. The psychopath presents himself as a high performer, and colleagues are too scared to tell the truth.

Most of these reliability contests, not by design but through ignorance, are built in such a way that the psychopaths are most adaptive to them. The social competition dynamics of a reality show (e.g. Survivor) and office politics are at least a million years old. Since psychopathy is most likely an individually fit (but socially harmful) r-strategy, it co-evolved with that nonsense. It turns out that “the bad guys” have been hacking our social reliability competitions for a thousand times longer than we’ve had language to describe any of these ideas. 

I’ll take a concrete example, which is the stigma associated with “job hopping”. Why is it there? Employers understand that the most dangerous people are the high-talent unethical ones. They’re right. And job hoppers tend to be high-talent “disloyal” people; at the least, they don’t give loyalty away for free. Unfortunately, since there’s no way to measure ethical character, the rage is taken out on people who have “too many jobs”. Well, through various consulting projects I’ve had access to more data on this matter than most of these HR idiots could see in twelve lifetimes, and I have the answer on that: unethical people don’t hop from job to job, continually subjecting themselves to social change and potential disadvantage. Instead, they most commonly ingratiate themselves with upper management early on, build deep trust over time (since that’s the only way to do it) and, when the opportunity emerges, betray everyone in one fell swoop. Knowing the power that comes with seniority, they’re more prone than the general population to long job tenures. Unethical people tend to have a Doppler Effect in which there’s one perception of them from ahead and above them (that they are affable, subordinate, dedicated) and there is another, much more accurate, view from those below and behind them whom they consider unworthy of impressing.

The best way to avoid taking on a large number of unethical people is not to attract them. It is, in general, impossible to detect them until they’ve done their damage, so the only strategy is passive defense: build a (K-strategist) company that won’t attract them. This ties into trust sparsity. Unethical people love trust-sparse environments, because those mean there is a Bozo-Bit switchboard to be found, played with, and used to gain power. That brings us to the chief accelerant, as well as byproduct, of trust sparsity. We come to the Seventh Circle. Headlong into the flames we go…

Seventh Circle: Extortion

I’ve said my piece about closed allocation being a form of extortion, because the conflict of interest between people and project management forces the employee to serve the manager’s career goals or face isolation, firing, and possible humiliation. “Work for me or you don’t work here.” It’s far from true that managers are the only people guilty of this behavior. Companies have been targeted by morally bankrupt programmers who built defects (“logic bombs” and back doors) into their systems. It’s expensive and humiliating to be extorted, and the emotional scarring can last for a long time.

That extortion leads to distrust should be so obvious that it doesn’t need explanation, so we’ll treat it as self-evident. Some of the “scar tissue” that companies develop is a direct result of previous extortions by employees, management, counterparties, and investors. Some more of it is cargo-cult transplant scar tissue that executives transcribe, without knowing why it exists, from one company to another as they move about. The tendency for companies to evolve toward a “Theory X” distrust of employees– sometimes without knowing why– comes from this replication of other companies’ post-traumatic policies.

Most of the other Circles of Hell tie into this Seventh. What is the benefit that it confers to a psychopath to have access to the “Bozo Bit” switchboard of a trust-sparse company? What, precisely, is most social competition? It is extortion. When a venture capitalist threatens to “pick up a phone” and turn off interest among nominal competitors if you don’t accept an abusive term sheet, what is that? What is the purpose of feudal reputation economies? Oh, right.

What is extortion? When does negotiation, which is unambiguously acceptable, go into black hat territory? I would say that extortion has a few defining characteristics:

  • There is an asymmetric power relationship, usually conferred by social access to people capable of extreme physical or social violence (esp. harm to reputation).
  • The extorter is attracted to the extortee by the latter’s participation in productive activity, in an attempt to draw a share of the profits through coercion. For this reason, the extortee’s success will only draw more extortion.
  • The harm is sometimes presented as a punishment, but it is an extreme one, usually in retaliation for something the extortee has the right to do. In other cases, it’s presented as an offer of “protection” (from oneself, or one’s hired thugs).

Extortion is the epitome of parasitism. It adds nothing to the ecosystem. Rather, it feeds off the profile of the most productive players. Extortion is not the same as blackmail. They’re similar crimes, but there’s a fundamental difference in why each is illegal. Blackmail is illegal because it’s selective, corrupt law enforcement: even if the blackmailer has the right not to report the crime (this differs by jurisdiction) he does not have the right to selectively do so based on a personal payment. Extortion is illegal because it’s a drain against productive activities, as extortionists ratchet up their demands, often putting producers out of business. It is also exceedingly common if not made illegal, and hard to drive out even then.

Is management (in the classic corporate sense) extortion? I’ve made this claim before; can I defend it? Well, let me explain it in detail. Managers ought to have the right to terminate a relationship, just as employees do. In a small company, this would mean the end of employment. But in a large company, should managers have unilateral termination authority? Absolutely not. Do they? From a Clueless perspective, no. From a cynical (and accurate) perspective, they do. Companies rarely afford managers unilateral termination because it’s too much of a lawsuit risk, but they give the manager so much credibility (especially if performance reviews are Enron-style, meaning that they’re part of an employee’s transfer packet in internal mobility) that they can engage in “passive firing” (damage to the employee’s reputation, often deliberate and invisible to the target, that makes him ineligible for internal mobility). Why do they allow this? For the sake of “project expediency”. Companies grant this power to managers out of the misguided belief that the trains simply won’t run on time unless bosses have that power. They can’t grant unilateral termination explicitly (lawsuit risk) so they create mean-spirited performance review systems and passive-firing infrastructures toward the same goal.

How is managerial authority most often used, both in rank and tough cultures? The subordinate employee is coerced into throwing all her weight into the manager’s career goals, with the scraps given to her own. What exactly is that? Again, it’s pure extortion. Companies permit it, because the manager has credibility.

The whole point of a credibility market is to allow extortion in the name of “project expediency”. Does it actually serve that purpose, and improve project success? No. The extreme success of open allocation proves that companies don’t need extortionist managers. While there probably is a need for some of what is called “management” in most companies– for training, direction and guidance only– there is no evidence, anywhere, that a healthy company benefits (except in short-term, existential emergencies that require “my way or the highway” leadership) from this sort of behavior.

One might notice that all of the six Circles above tie into managerial extortion.

  • Opacity gives power to management over employees’ long-term careers, since they have no clue what the actual economic landscape or market climate is, especially if they face an adverse manager (reference problem).
  • Parasitism is the obvious goal of the extortionist, but extortionists find the organization’s fear of parasitism (“low performer” witch hunts) to be an effective tool of aggression.
  • Career incoherency is a result of widespread extortionist managerial culture. It’s the manager’s right to say, “You work for me or for no one here” that forces people to do work of limited or no career value.
  • False scarcity is what encourages people who might object to the extortion, instead, to passively tolerate it. This is the “project expediency” argument; many companies believe (falsely) that nothing will be achieved, and the company will die, without extortionist thugs (using the threat of harsh credibility reductions) to police the bottom.
  • Trust sparsity is the philosophical underpinning of the extortion market. It creates the “Bozo Bit switchboard” which is the holy grail of a psychopath.
  • Passive aggression is enabled by a perverse and intentionally dysfunctional bureaucracy in which people can cause harm through inaction. I know someone who was fired because his boss forgot to write a performance review, and the default rating assigned in the no-review case happened, that year, to fall under the 5% mandatory-fire cutoff. Whether that was forgetfulness or passive aggression, I honestly have no idea. But many companies operate on an original-sin principle under which employees are ruined (no credibility) unless protected by a manager. And what is mandatory “protection”? Extortion.

It’s extortion that is at our enemy’s heart. That’s the core of corporate evil, at least on the internal front. It must seem that we’re at the bottom, but we don’t yet have full explanatory power over what motivates so goddamn much extortion. What makes people extortive? Is that truly “human nature”, or is it merely human behavior when people are subjected to humiliating false scarcity and nonsensical, dehumanizing processes? Let’s go into that, right now. It’s an ugly place, but we’ve been through plenty of those…

Eighth Circle: Powerlessness

Evil exists, and lawful evil is a defining force of intra-corporate social competition. However, maybe there’s room for compassion: sympathy for the damned. Why, we might permit ourselves to ask, is there so much bad behavior in the corporate context? Is it the stakes? That’s the Theory-X explanation. There’s a lot of money in it; ergo, people steal. I don’t buy that, because work isn’t the highest-stakes thing people do, and there are processes with more importance and less moral corruption. Theory Y’s explanation of bad behavior at work is that it tends to be self-accelerating; people are naturally inclined toward good action, but one bad behavior leads to several more, with the victims often unable to retaliate directly and, therefore, propagating the misery through the company. That’s quite true, but it doesn’t explain the first injection of evil. Where does that come from? Theory Z, being agnostic on the broader moral questions, takes the teamist approach of planting “firebreaks” between teams so that any submodule (person, team, department) that turns toxic can be sloughed off in isolation. (This is why internal transfer is so hard– a fact that extortionist managers love, obviously– in a Theory-Z organization.) Who’s right?

Theory Y is mostly right, but probably only 90% correct. There are first-strike extortionists and thugs out there. They exist. Bad people are real. An organization that can’t defend itself against them will perish. While passive defense (not attracting them) is best, companies do need to keep a watchful eye on this behavior. The problem with this is that extortionists get their first practice on the people the organization cares the least about, and are already well on a roll by the time the negative behavior becomes a visible problem.

Companies tend to get their moral policy utterly backward. They take a Theory-X attitude toward their people in general, imposing restrictions designed to guard the firm against extortive behavior and theft. However, it’s impossible to get anything done in such an environment. That’s why trust-sparsity generates convex dishonesty, even in heroes (“stone soup”) who are forgiven after the fact. The result of this is that trust-sparse companies must make exceptions, and they do this based on million-year-old social protocols originally designed to encourage K-strategists to deny sexual favor to unworthy and unsavory r-strategists. There’s nothing wrong with that. As a product of human evolution, I’m glad that the K-selective machinery exists. But the psychopaths have a million-year track record of hacking it and getting in. They do the exact same thing to trust-sparse companies. Firms need to defend themselves with something else.

Total denial doesn’t work, because companies can’t operate if they never trust anyone; and conditional denial leads to the victory of psychopaths who’ve spent a million years making themselves exceptions to other people’s (well-advised) rules.

What we need is to go the other way, to full-on trust density, and release all non-extortive power to employees. The company must grant everyone the autonomy and freedom to serve the business goals, except those who attempt to steal or extort, whom it must terminate immediately. It turns out that this is a stronger way of policing ethical behavior, because the people on the ground (a) actually care about organizational health, and (b) aren’t afraid to speak up when they see problems.

What’s the alternative? What dominates corporate life for most people? Powerlessness. If trust-sparsity is allowed, then anyone can become an untrusted member of the group, and almost everyone is exposed and afraid. If that’s the case, then all but a few people are in a disempowered and humiliating position. Lord Acton told us that power (of the extortive kind) tends to corrupt, and he’s right. Powerlessness, however, also corrupts.

Power makes bad people evil, and it makes weak people (and that’s a large pool, sadly) bad. Powerlessness, on the other hand, makes good people ineffectual, bad people similarly evil, and weak people both bad and strong. What do I mean by that? What could possibly occur through powerlessness that makes the weak strong? Individually, they’re defeated, but they become cheap votes (see above) for the true bad guys to corral and deploy. Since the statistical voting power of a bloc is proportional to the square of its size, and blocs of the weak become substantially more cohesive amid powerlessness and fear, they become strong when under direction from evil. This is why Clueless middle-managers, although few are innately unethical, can easily be misled toward criminal activity.
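That square law can be made concrete if we measure a group’s statistical influence by the variance of its net vote (a crude but standard proxy): k people voting as a cohesive bloc contribute a net vote of ±k, giving variance k², while k independent voters mostly cancel each other out, giving variance k. Here is a minimal Python sketch; the `net_vote_variance` helper is a hypothetical name of my own invention, not a claim about any particular voting model:

```python
import random

def net_vote_variance(k, cohesive, trials=20_000, seed=0):
    """Empirical variance of a k-person group's net vote (+1 or -1 per member)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        if cohesive:
            # A bloc copies a single coin flip: net vote is +k or -k.
            net = k * rng.choice((-1, 1))
        else:
            # Independents flip their own coins; most of their votes cancel.
            net = sum(rng.choice((-1, 1)) for _ in range(k))
        samples.append(net)
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / trials

# A 10-person cohesive bloc has variance near 10**2 = 100;
# 10 independent voters have variance near 10.
```

Under this measure, cohesion under fear is exactly what lets an evil director turn many individually weak votes into a force that scales with the square of its headcount.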

What is the end game of the corporation when people are powerless? Well, it generates the MacLeod pyramid, and also accelerates its degeneracy. Make people more powerless, and:

  • Losers will become increasingly apathetic, content to draw a salary for no contribution, which is (despite some economists’ claims to the contrary) not a comfortable arrangement for most. The Socially Accepted Median Effort (SAME) will drift toward zero. Even managers will tolerate non-contribution as it becomes clear that no one is able to get real work done, anyway.
  • Clueless develop an awareness of low performance (theirs and others) and their overactive superego drives them toward overcompensation. This adds back-and-forth “Brownian management” to the mix. It only adds noise because, while these Clueless are eager, they lack coherent strategic direction.
  • Sociopaths rebel and sabotage operations if they are personally rendered powerless, but they’re also (unlike most) prone toward assessing power in relative terms, so a Sociopath who is macroscopically powerless might still be happy to improve his personal power base by trading on the credibility market. Sociopaths can tolerate macroscopic powerlessness if they can still double their micro-scale power at a sufficient rate. 

This seems to be the end state of dysfunction: powerlessness. Companies fear extortion by employees– as they, perhaps, should, because it happens– but they go so far as to disempower those inclined toward good-faith experimentation and creativity. Then, people lose all reason to care about the health of the organization. Why would anyone care about a company that views her as a bozo? She shouldn’t. She should take what she can and get out once something better is available.

We come to the bottom of the Eighth Circle and find a black hole in the center of the floor. We look into it, and see no bottom. It’s just a chasm. To most people, it would be terrifying. Few of the denizens of the other circles, as miserable as they may be where they are, would dare to enter it, but we’ve come this far. We need to complete our journey, so let’s get on with it. Let’s jump into the hole and find the bottom…

Ninth Circle: ??????????

We start to fall, and in the thin air down here, we continue to accelerate for a long time. It’s not painful, although the air is somewhat hot and it is very loud to be falling this fast. As we approach a thousand miles an hour, we realize we have time for a side conversation: first about religion, and then about economics.

Yes, I said religion. Don’t worry. It will make sense when we get to the bottom, but while we’re careening toward the center of the earth, let’s talk about it.

Westerners (especially detractors of religion) tend to believe that religion exists to allay fears about death, even though religious belief predates faith in any desirable afterlife. (Babylonians, for example, believed in a repulsive afterlife state.) Actually, religion exists to guide people through their fears about life. Religion has had enough on its hands in helping people understand this world. At any rate, I’m most simpatico with Buddhism, so I’m going to discuss the first of the Four Noble Truths. Commonly interpreted as “all life is suffering” (dukkha) it is better interpreted as meaning, “there is suffering in all (samsaric) life”. Rich and poor, animals and humans, and even the traditional gods and demigods (if they exist) of other religions are in a world where dukkha is possible. Shakespeare’s Hamlet referred to “the thousand natural shocks that flesh is heir to”, and he seems to have been discussing the same thing. There’s an overarching theme: life is difficult and there is pain in it. The ancient Greeks blamed a woman named Pandora; Jewish and Christian mythologies involved a snake in the Garden of Eden. Both myths implicated curiosity (philosophically, this could be related to the idea, though rarely formalized this way, that evil exists because we want to know what it is) while the Eastern approaches focus on attachment.

The Western Christian arc evolved a doctrine called original sin, even though it’s not strictly Biblical. I don’t find it to be a useful concept at all. It has been used to justify horrible actions in the past, such as the cultural (and sometimes physical) genocide of non-Christian peoples, and, in my mind, it serves no purpose. It takes the view that we deserve to be punished and are (without supernatural intervention) of negative worth. It’s anti-humanist. I take a more Eastern approach to human nature: there is no such (inflexible) thing. Therefore, there can be no “original sin”. There are ignorance, karma, suffering, and a myriad of biological impulses we have to sort out, but total depravity is nonsensical. It just might be, in fact, one of the worst concepts ever.

Original sin found its way into the “Calvinist work ethic” that outlived actual Calvinism, and has since become a fully secular doctrine. According to the original-sin economics, people are devoid of human value unless made productive (“saved”) by a large, successful, and rich institution. Individual people have no credibility, and being on the job market is taken to be so humiliating that a person should fear being there (even though, since companies can fire at-will, everyone is always on the job market). Companies, however, do have credibility. Prestige. Reputation. It’s all the same thing. They can save. Independent individuals, however, are seen as depraved and damnable, especially if they’ve been unemployed for “too long”.

It’s bad enough that we have to deal with this original-sin nonsense in the greater society, which is one reason why the United States will always be thirty years behind Europe when it comes to healthcare and a modern insurance state (Europeans are not as ashamed to use the word “welfare”). It really sucks, actually. However, we even have it within companies. You are not credible unless a member of the (mostly corrupt) priesthood called “management” speaks for you. If you seek another job within the company, your performance reviews are part of the transfer packet, even though that’s almost illegal. (It is not, technically speaking, illegal for companies to include HR history in internal mobility decisions, but it is illegal for managers to interfere malevolently with work performance, which includes internal mobility if such options exist. Therefore, a negative review that is visible in the transfer process is, in fact, already in violation of the law.)

What does this original-sin mentality buy us? It creates an economy where almost everyone is in danger of falling to the bottom, or being excommunicated, and this fear creates an economy of extortion so vast and toxic as to be self-perpetuating. That’s what we’ve got, thanks to our original sin mentality.

The evil isn’t capitalism or communism. It’s much older than that. It’s the belief that most humans are devoid of any value, and require salvation through some process that is actually the whim of a corrupt clerical hierarchy.

I haven’t solved the Organizational Problem, and at 12 kilowords already I’ll have to put that in Part 23, but I’ve found it…

We land on the bottom. This is it. The Ninth Circle. Here we are. I light a torch and there’s… nothing here. It’s just a cave. Have we passed through the center of the earth and into the antipodes? It is a comfortable temperature here. It does not look like Hell.

Is this a ruse? No, it doesn’t seem to be. There seems to be nothing Hellish about this spot. Maybe that’s the point.

In fact, I might remind you that you were never in a cave (much less Hell) at all. You’re reading text on a computer screen! If you pictured a descent into Hell, that was your doing (but I hope it didn’t disturb you too much) and no one else’s, because that hell didn’t exist.

So let’s look at this Ninth Circle of hell: there is no there there. So it is, also, with the Corporate Hell. Yes, there are extortions and people are powerless, and there is a common fear of loss of income. This is all nasty stuff. There’s plenty of ugly behavior. The beast is cruel and yet… where is it? Who is it? It’s vapor! It exists because it lives in minds, and because we care– too much, perhaps– about it. Quite possibly, it lives in your mind. It has certainly raged on in my mind. That’s how I know what it is. Now I wish to kill it, on a global scale. I want these extortionists driven back into the shadows from which they came. I have no idea how long or how much work it will take to succeed, but that’s no excuse not to try.

Given the non-substance of our enemy, and the fantastical nature of the Hell that it has created for us, maybe we can. Maybe we can end the cycle of extortions and powerlessness for good. We know that the Emperor has no clothes. We’ve known it for decades. Maybe we, as a generation, can summon the courage to laugh at his tiny balls.

It might not be so clear, and it might take another essay (Part 23, coming up) to show this, but each of these levels taught us something about the organization, and all of them contain subproblems that must be solved. The structure of the solution will mirror, somewhat, the “Inferno” shape of the problem, which has been given here. So that’s what comes next, in Part 23. Stay tuned. It won’t be long.



Blub vs. engineer empowerment


No, I’m not quitting the Gervais / MacLeod Series. Part 23, which will actually be the final one because I want to get back to spending my spare time on technology, is half-done. However, I am going to take a break from it to write about something else.

I’ve written about my distaste for language and framework wars, at least when held for their own sake. I’m not backing away from my position on that. If you go off and tell someone that her favorite language is a U+1F4A9 because it’s (statically|dynamically) typed, then you’re just being a jerk. There are a few terrible languages out there (especially most corporate internal DSLs) but C, Python, Scala, Lisp and Haskell were all designed by very smart people and they all have their places. I’ve seen enough to know that. There isn’t one language to rule them all. Trust me.

Yet, I contend that there is a problem of Blub in our industry. What’s Blub? It’s often used as an epithet for an inferior language; the term was coined by Paul Graham in his essay “Beating the Averages”. As tiring as language wars are, Blubness is real. I contend, however, that it’s not only about the language. There’s much more to Blub.

Let’s start with the original essay and use Graham’s description of Blub:

Programmers get very attached to their favorite languages, and I don’t want to hurt anyone’s feelings, so to explain this point I’m going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.

And in fact, our hypothetical Blub programmer wouldn’t use either of them. Of course he wouldn’t program in machine language. That’s what compilers are for. And as for Cobol, he doesn’t know how anyone can get anything done with it. It doesn’t even have x (Blub feature of your choice).

As long as our hypothetical Blub programmer is looking down the power continuum, he knows he’s looking down. Languages less powerful than Blub are obviously less powerful, because they’re missing some feature he’s used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn’t realize he’s looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn’t even have y.

By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one. (This is probably what Eric Raymond meant about Lisp making you a better programmer.) You can’t trust the opinions of the others, because of the Blub paradox: they’re satisfied with whatever language they happen to use, because it dictates the way they think about programs.

So what is Blub? Well, some might read that description and say that it sounds like Java (has garbage collection, but not lambdas). So is Java Blub? Well, not quite. Sometimes (although rarely) Java is the right language to use. As a general-purpose language, Java is a terrible choice; but for high-performance Android development, Java’s the best. It is not James Gosling’s fault that it became the go-to language for clueless corporate managers and a tool-of-choice for mediocre “commodity developers”. That fact may or may not be related to weaknesses of the language, but it doesn’t make the language itself inferior.

Paul Graham looks at languages from a language-designer’s viewpoint, and also with an emphasis on aesthetics. Since he is an amateur painter whose original passion was art, that shouldn’t surprise us. And in my opinion, Lisp is the closest thing out there to an aesthetically beautiful language. (You get used to the parentheses. Trust me. You start to like them, because they are invisible when you don’t want to see them but highlight structure when you do.) Does this mean that it’s right for everything? Of course not. If nothing else, there are cases when you don’t want to be working in a garbage-collected language, or when performance requirements make C the only game in town. Paul Graham seems to be focused on level of abstraction, equating the middle territory (Java and C# would take that ground today) with mediocrity. Is that a fair view?

Well, the low and high ends of the language-power spectrum tend to harbor a lot of great programmers, while the mediocre developers tend to be Java (or C#, or VB) monoglots. Good engineers are not afraid to go close to the metal, or far away from it into design-your-own-language land, if the problem calls for it. They’re comfortable in the whole space, so you’re more likely to find great people at the fringes. Those guys who write low-latency trading algorithms that run on GPUs have no time to hear about “POJOs”, and the gals who blow your mind with elegant Lisp macros have no taste for SingletonVisitorFactories. That said, great programmers will also operate at middling levels of abstraction when that is the right thing to do.

The problem of Blubness isn’t about a single language or level of abstraction. Sometimes the C++/Java level of abstraction is the right one to work at, so there certainly are good programmers using those languages. Quite a large number of them, in fact. I worked at Google, so I met plenty of good programmers using these generally unloved languages.

IDEs are another hot topic in the 10xers-versus-commodity-engineers flamewar. I have mixed feelings about them. When I see a 22-year-old settling in to his first corporate job and having to use the mouse, that “how the other half programs” instinct flares up and I feel compelled to tell him that, yes, you can still write code using emacs and the command line. My honest appraisal of IDEs? They’re a useful tool, sometimes. With the right configuration, they can be pretty neat. My issue with them is that they tend to be symptomatic. IDEs really shine when you have to read large amounts of other people’s poorly-written code. Now, I would rather have an IDE than not have one (trust me; I’ve gone both ways on that) but I would really prefer a job that didn’t involve trudging through bad legacy code on a daily basis. When someone tells me that “you have to use an IDE around here” I take it as a bad sign, because it means the code quality is devastatingly bad, and the IDE’s benefit will be to reduce Bad Code’s consumption of my time from 98% to 90%– still unacceptable.

What do IDEs have to do with Blub? Well, IDEs seem to be used often to support Blubby development practices. They make XML and Maven slightly less hideous, and code navigation (a valuable feature, no disagreement) can compensate, for a little while, for bad management practices that result in low code quality. I don’t think that IDEs are inherently bad, but I’ve seen them take the most hold in environments of damaged legacy code and low engineer empowerment.

I’ve thought a lot about language design and languages. I’ve used several. I’ve been in a number of corporate environments. I’ve seen good languages turn bad and bad languages become almost tolerable. I’ve seen the whole spectrum of code quality. I’ve concluded that it’s not generally useful to yell at people about their choices of languages. You won’t change, nor will they, and I’d rather work with good code in less-favored languages than bad code in any language. Let’s focus on what’s really at stake. Blub is not a specific language, but it is a common enemy: engineer disempowerment.

As technologists, we’re inclined toward hyperrationality, so we often ignore people problems and dress them up as technical ones. Instead of admitting that our company hired a bunch of terrible programmers who refuse to improve, we blame Java, as if the language itself (rather than years of terrible management, shitty projects, and nonexistent mentorship) somehow jammed their brains. Well, that doesn’t make sense, because not every Java programmer is brain damaged. When something goes to shit in production, people jump to the conclusion that it wouldn’t have happened in a statically-typed language. Sorry, but that’s not true. Things break in horrible ways in all kinds of languages. Or, alternatively, when development is so slow that every top-25% engineer quits, people argue that it wouldn’t have happened in a fast-prototyping, dynamically-typed language. Wrong again. Bad management is the problem, not Scala or Python or even Java.

Even terrible code isn’t deserving of the anger that’s directed at it. Hell, I’ve written terrible code, especially early in my career. Who hasn’t? That anger should be directed against the manager who is making the engineer use shitty code (because the person who wrote it is the manager’s favorite) and not at the code itself. Terrible romance novels are written every day, but they don’t anger me, because I never read them. But if I were forced to read Danielle Steel novels for 8 hours per day, I would fucking explode.

Ok, that’s enough negativity for a while…

I had a bit of a crisis recently. I enjoy computer science and I love solving hard problems. I enjoy programming. That said, the software industry has been wearing me down, these past couple of years. The bad code, low autonomy, and lack of respect for what we do is appalling. We have the potential to add millions of dollars per year in economic value, but we tend to get stuck with fourth quadrant work that we lack the power to refuse. I’ve seen enough of startups to know that most of them aren’t any better. The majority of those so-called “tech startups” are marketing experiments that happen to involve technology because, in the 21st century, everything does. I recently got to a point where I was considering leaving software for good. Computer science is fine and I have no problem with coding, but the corporate shit (again, just as bad in many startups) fries the brain and weakens the soul.

For some positivity, I went to the New York Clojure Meetup last night. I’ve been to a lot of technology Meetups, but this one had a distinct feel. The energy was more positive than at most technical gatherings. The crowd was very strong, but that’s true of many technical meetups; here, there was a flavor of “cleaner burning” in addition to the usual high intelligence. People weren’t touting one corporate technology at the expense of another, and there was real code– good code, in fact– in a couple of the presentations. The quality of discussion was high, in addition to the quality of the people.

I’d had this observation, before, about certain language communities and how the differences of those are much greater than differences in language. People who intend to be lifelong programmers aren’t happy having New Java Despondency Infarction Framework X thrown at them every two years by some process-touting manager. They want more. They want a language that actually improves understanding of deep principles pertaining to how humans solve problems. It’s not that functional programming is inherently and universally superior. Pure functional programming has strong merits, and is often the right approach (and sometimes not) but most of what makes FP great is the community it has generated. It’s a community of engineers who want to be lifelong programmers or scientists, and who are used to firing up a REPL and trying out a new library. It’s a community of people who still use the command line and who still believe that to program is a virtue. The object-oriented world is one in which every programmer wants to be a manager, because object-orientation is how “big picture guys” think.

I’m very impressed with Clojure as a language, and that community has made phenomenally good decisions over the past few years. I started using it in 2008, and the evolution has been very positive. It’s not that I find Clojure (or Lisp) to be inerrant, but the community (and some others, like Haskell’s) stands in stark contrast against the anti-intellectualism of corporate software development. And I admire that immensely. It’s a real sacrifice that we 1.5+ engineers make on an ongoing basis when we demand that we keep learning, do things right, and build on sound principles. It doesn’t come easy. It can demand unusual hours, costs us jobs, and can put us in the ghetto, but there it is.

In the meantime, though, I don’t think it’s useful to mistake language choice for the prevailing or most important issue. If we do that, we’re just as guilty of cargo cultism as the stereotypical Java-happy IT managers. No, the real issue is engineer empowerment, and we need to keep building our culture around that.


The Disentitled Generation


Anyone else up for some real rage? I can’t promise that there won’t be profanity in this post. In fact, I promise that there will be, and that it will be awesome. Let’s go.

People don’t usually talk about these things that I talk about, for fear that The Man will tear their fucking faces off if they tell the truth about previous companies and how corporate offices really run, but I am fucking sick of living in fear. One can tell that I have an insubordinate streak. It’s a shame, because I am extremely good at every other fucking thing the workplace cares about except subordination; but that’s one thing I never got down, and while it’s more important (in the office context) than any other social skill, I’m too old to learn it.

Let’s talk about the reputation that my generation, the Millennials (born ca. 1982 to 2000), has for being “entitled”. This is a fun topic.

I’ve written about why so-called “job hopping” doesn’t deserve to be stigmatized. Don’t get me wrong: if someone leaves a generally good job after 9 months only because he seeks a change of scenery, then he’s a fucking idiot. If you have a good thing going, you shouldn’t seek a slightly better thing every year. Eventually, that will blow up in your face and ruin your life. Good jobs are actually kinda rare. I repeat: if you find a job that continues to enhance your career and that doesn’t make you unhappy, and you don’t stick with it for a few years, then you’re an idiot. You should stay when you find something good. A genuine mentor is rare and hard to replace. That’s not what I’m talking about here.

The problem? Most jobs aren’t good, or don’t make sense for the long term. Sometimes, the job shouldn’t exist in the first place, provides no business value, and is terminated by one side or the other, possibly amicably. Sometimes, the boss is a pathological micromanager who prevents his reports from getting anything done, or an extortionist thug who expects 100% dedication to his career goals and gives nothing in return. Sometimes, people are hired under dishonest pretenses. Hell, I’ve seen startups hire three people at the same time for the same leadership position, without each other’s knowledge of course. (In New York, that move is called “pulling a Knewton.”) Sometimes, management changes that occur shortly after a job is taken turn a good job into an awful one.

This nonsense sounds very uncommon, right? No. Each of these pathologies is individually uncommon, but there are so many failure modes for an employment relationship that, taken in sum, they are common. All told, I’d say that only about 40 percent of jobs manage to make it worthwhile to keep showing up after 12 months. Sometimes, the job simply ends: a layoff for business reasons, or a firing that may not even be the person’s fault. Most often, it’s just pigeonholing into low-importance, career-incoherent work, leaving the person to get the hint that she wasn’t picked for better things and leave voluntarily. Mostly, this political injection is random noise with no correlation to personal quality.

Still, I think it’s reasonable to say that 60% of new jobs fail in the first 12 months (even if many go into a “walking dead” state where termination is not a serious risk, but in which it’s still pointless and counterproductive to linger). That means 13 percent of people are going to draw four duds in a row for reasons that are no fault of their own. One in eight people, should they do the honest and mutually beneficial thing, which is to leave a job when it becomes pointless, becomes an unemployable job hopper. Seriously, what the fuck?
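A quick back-of-envelope check of that one-in-eight figure, treating each new job as an independent draw with the 60% failure rate claimed above:

```python
# Sanity check: if each new job independently has a 60% chance of
# turning out to be a dud within 12 months, the chance of drawing
# four duds in a row is 0.6^4.
p_dud = 0.6
p_four_duds = p_dud ** 4

print(f"{p_four_duds:.1%}")  # 13.0% -- roughly one person in eight
```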

So let me get one thing out there. Not only is the “job hopping” stigma outdated, it’s wrong and it’s stupid. If you still buy into the “never hire job hoppers” mentality, you should fucking stop using your company as a nursing home and instead, for the good of society, use an actual nursing home as your nursing home. I’m serious. If you really think that a person who’s had a few short-term jobs deserves to be blacklisted over it while the real corporate criminals thrive, then letting you make decisions that affect people’s lives is like letting five-year-olds fly helicopters, and you should get the fuck out of everything important before you do any more damage to people’s lives and the economy. I’m sorry, but if you cling to those old prejudices, then the future has no place for you.

It needed to be said. So I did.

The “job hopping” stigma is one rage point of mine, but let’s move to another: our reputation as an “entitled” Millennial generation. Really? Here are some of the reasons why we’re considered entitled by out-of-touch managers:

  1. We “job hop” often, tending to have 4 to 6 jobs (on average) by age 30.
  2. We expect to be treated as colleagues and proteges rather than subordinates.
  3. After our first jobs, we lose interest in “prestigious” institutions, instead taking a mercenary approach that might favor a new company, or no company. 
  4. We push for non-conventional work arrangements, such as remote work and flex-time. If we put in 8 hours of face time, we expect direct interest in our careers by management because (unlike prior generations who had no choice) we consider an eight-hour block a real sacrifice.
  5. We question authority.
  6. We expect positive feedback and treat the lack of it as a negative signal (“trophy kids”).

Does this sound entitled? I’ll grant that there’s some serious second-strike disloyalty that goes on, with a degree of severe honesty (what is “job hopping” but an honesty about the worthlessness of most work relationships?) that would have been scandalous 30 years ago, but is it entitled? That word has a certain meaning, and the answer is “no”.

To be entitled, as a pejorative rather than a matter-of-fact declaration about an actual contractual agreement, implies one of two things:

  1. to assume a social contract where none exists (i.e. to perceive entitlement falsely.)
  2. to expect another party to uphold one side of an existing (genuine) social contract while failing to perform one’s own (i.e. one-sided entitlement).

Type I entitlement is expressed in unreasonable expectations of other people. One example is the “Nice Guy Syndrome”, wherein a man expects sexual access in return for what most people consider to be common courtesy. The “Nice Guy” is assuming a social contract between him and “women” that neither exists nor makes sense. Type II is the “culture of entitlement” sometimes associated with a failed welfare state, wherein generationally jobless people– who, because they have ceased looking for work, are judged to be failing their end of the social contract– continue to expect social services. These are people whose claims are rooted in a genuine social contract– the welfare state’s willingness to provide insurance for those who continually try to make themselves productive, but fail for reasons not their fault– but don’t hold up their end of the deal.

So, do either of these apply to Millennials? Let me assess each of the six charges above.

1. Millennials are “job hoppers”. There’s some truth in that one. The most talented people under 30 are not going to stick around in a job that hurts their careers. We’re happy to take orders and do the less interesting work for a little while, if management assists us in our careers, with an explicit intent to prepare us for more interesting stuff later. Failing that, we treat the job as a simple economic transaction. We’re not going to suffer a dues-paying evaluative period for four years when another company’s offering a faster track. Or, if we’re lucky, we can start our own companies and skip over the just-a-test work entirely and do things that actually matter right away. Most of us have been fired or laid off “at will” at least once, and we have no problem with this new feature (job volatility) of the economy. None of us consider lifelong employment an entitlement or right. We don’t expect long-term loyalty, nor do we give it away lightly.

2. Millennials “expect” to be treated as proteges. Not quite. Being a cosmopolitan, well-studied generation exposed to a massive array of different concepts and behaviors from all over the world, we expect very little of other people. We’ve seen so much that we realize it’s not rational to approach people with any major assumptions. The world is just too damn big and complicated to believe in global social contracts. Getting screwed doesn’t shock or disgust or hurt us. It doesn’t thwart our expectations, because we don’t really have any. We simply leave, and quickly. For us, long-term loyalty is the exception, and yes, we’re only going to stay at a job for 5 years if it continues to be challenging and beneficial to our careers. That’s not because we “expect” certain things, and we aren’t “making a statement” when we change jobs. It’s not personal or an affront or intentional “desertion”. We can do better, that’s all.

3. Millennials don’t have respect for prestige and tradition. Yes and no. We don’t start out that way. The late-2000s saw one of the most competitive college admissions environments in history. Then there’s the race to get into top graduate departments or VC-darling startups or investment banking– the last of these being the Ivy League of the corporate world. Then something happens. Around 27, people realize that that shit doesn’t matter. You can’t eat prestige, and many of the most prestigious companies are horrible places to work. Oh, and we think we’re hot shit until we get our asses handed to us by superior programmers and traders from no-name universities and learn that their educations were quite good as well. We realize that work ethic and creativity and long-term diligence and deliberate practice are the real stuff and we lose interest in slaving away for 90 hours per week just because a company has a goddamn name.

  4. Many of us expect non-conventional work/life arrangements. This is true, and there’s a reason for it. What is the social contract of an exempt salaried position, under which expected hours are defined only by social convention rather than by contract? As far as I can tell, there are two common models. Model A: worker produces enough work not to get fired, manager signs a check. Model B: worker puts a serious investment of self and emotional energy into the work, as a genuine working relationship would involve, and management returns the favor with career support and coherence. Under either model, the 8-hour workday is obsolete. Model A tells us that, if a worker can put in a 2-hour day and stay employed, he’s holding up his end of the deal, and it’s management’s fault for not giving him interesting work that would motivate him to perform beyond the minimum. Model B expects a mutual contract of loyalty to each other’s interests, but does not specify a duration or mode of work. Model B might be held to generally support in-office work with traditional hours, for the sake of collaboration and mentoring, but that opens up a separate discussion, especially in the context of individual differences regarding when and how people work best.

5. Millennials question authority. True, and that’s a virtue. Opposing authority because it is authority is no better than being blindly (or cravenly) loyal to it, but questioning it is essential. People who are so insecure that they can’t stand to be questioned should never be put in leadership positions; they don’t have the cojones for it. I question my own ideas all the time; if you expect me to follow you, then I will question yours. It’s a sign of respect to question someone’s ideas, not a personal challenge. It’s when smart people don’t question your ideas that you should be worried; it means they’ve already decided you’re an idiot and they will ignore or undermine you. 

  6. We expect positive feedback and respond negatively to a lack of acknowledgement. That’s true, but not because we believe “everyone’s a winner”. If anything, it’s the opposite. We know that most people lose at work and would prefer to play a different game when that appears likely to happen. No, it’s not about “trophies”. A trophy is a piece of plastic. We get bored unless there’s a real, hard-to-fake signal that we aren’t wasting our time. Not a plastic trophy, but management that takes our career needs seriously, and complete autonomy over our direction. We know that most people, in their work lives, end up with incompetent or parasitic bosses who waste years of their time on career-incoherent wild goose chases, and we refuse to be the butt of that joke. Does this mean that we’re not content to be “average”, and that we require being on the upside of zero-sum executive favoritism to stay engaged with our work? Well, in order to have it not be that way, you need to create a currently-atypical work environment where average people don’t end up as total losers. With all the job hopping we do, we don’t care about relative measures of best or better. We want good. Make a job good and people won’t worry about what others around them are getting.

I think, with this exposition, that there’s a clear picture of the Millennial attitude. Yes, we take second-strike disloyalty to a degree that, even ten years ago, would be considered insolent, brazen, and even reckless in the face of the career damage done (even now) to the job-hoppers. We’ve grown bolder, post-2008. Quit us, and we quit. It’s not that we like changing jobs every few months– believe me, we fucking don’t. We’re looking for the symbiotic 5- or 10-year-fit, as any rational person would, but we’re not going to lie to ourselves for years– conveniently paying dues on evaluative nonsense work while our bosses spend half-decades pretending to look for a real use for our underutilized talents (only to throw us out in favor of fresher, more clueless, younger versions of ourselves)– after drawing a dud.

Is the Millennial attitude exasperating for older managers, used to a higher tolerance for slack on matters of career coherency? I’m sure it is. I’m sure that the added responsibility imposed by a generation characterized by fast flight is unpleasant. It is not, however, entitled. It’s not Type I entitlement because we don’t assume the existence of a social contract that was never made. We only hold employers to what they actually promise us. If they entice us with promises of career development and interesting work, then we expect that. If they’re honest about the job’s shortcomings, we respect that, too. But we only expect the social contract that we’re explicitly given. I’d also argue that it’s not Type II entitlement because Millennials are, when given proper motivation, very hard-working and creative. We want to work. We want genuine work, not bullshit meetings to make the holder of some sinecure feel important.

What are we, if not “entitled”? We’re the opposite. We’re a disentitled generation. We never believed in the corporate paternalist social contract, and most of us are comfortable with this brave new world that has followed its demise. Yes, we’re mercenary. We respond in kind (in fact, often disproportionately) to genuine loyalty, but we’re far too damn honest to pretend we’re getting a good deal when we’re thrown into a three-year dues-paying period rendered obsolete in a world where fast advancement is possible and fast firing is probable for those who don’t advance. I’m in software, where, by age 35, becoming a technical expert (you need a national reputation in your specialty if you want to be employable as a programmer on decent terms by that age) or an executive becomes mandatory. As this leaves 13 years to “make a mark”, one simply will not find people willing to endure a years-long dues-paying period that one would want to hire. Asking someone to risk 2 of those 13 years on dues-paying (that might lead nowhere) is like asking a person to throw 15 percent of her net worth into a downside-heavy investment strategy with no potential for diversification– a bad idea. Reasonable dues-paying arrangements may have existed under the old corporate social contract of cradle-to-grave institutional employment, but that’s extinct now. So should be the “job hopper” stigma and the early-stage dementia patients who still believe in it.


Why we must shut down the Corporate System


I’ve come to the conclusion that the Corporate System deserves to be shut down. What is the Corporate System? No, it’s not the same thing as “corporations”. Corporations are just legal entities. In fact, they’re good things. Limited liability in good-faith business failure is, if not quite a right, a privilege of extreme value to society. The Corporate System is the authority structure that grew up within it: the internal credibility markets of businesses, the attempt to create distributed social status through resumes and references, and all the other nonsense designed to scare the hell out of people so they compliantly execute the ideas handed to them from on high. Most people experience it as managerial authority while society, as a whole, gets the butt-end of externalized costs (e.g. pollution). These might not seem connected, but they are intimately linked. Externalized costs are permitted to exist in a world where people are terrified of long-term career ramifications at the hands of vindictive executives. 

It’s not business or capitalism that we need to shut down. Far from it. As a syncretist at heart, I don’t believe that we can build a decent society while taking innovations only from Column A (capitalism) or Column B (socialism). We need both. Nor do we need to get rid of corporations as the legal abstraction. Again, limited liability under good-faith business failure is one of the best ideas this society has come up with. Rather, the enemy is the network of extortions, lies maintained due to careerist fear, and social exclusion that enables a small set of well-connected parasites to rob investors and employees both, as well as society at large via externalized costs, at an obscene scale.

At any rate, outlawing capitalistic activity (in addition to being morally wrong) just doesn’t work. The problem with capitalism and communism both is that, when rigidly enforced on people, they tend to generate a shitty version of the other. Corporate capitalism is, for the 99%, the worst of both systems. Look at air travel: capitalistic price volatility, but Soviet-style service. For the well-connected 1%, it’s the best of both systems. What we really need is a hybrid system dedicated toward providing the best of each for everyone, but I’ve gone on for megawords about that.

Still, the Corporate System is an expensive, inefficient, authoritarian, self-contradicting dinosaur. It’s time to kill it. Let me establish why we should tear it down. We ask one simple question: why do we allow private business to exist?

Answers are:

  1. People have a right to trade services and resources with the market, so long as they aren’t hurting others by doing so. I think this is a pretty straightforward moral claim. If I can do something that confers only benefit to the stakeholders, I shouldn’t have to ask anyone for permission.
  2. Central authorities can’t reliably outguess markets. Computing fair prices and exchange rates is an intractably hard problem in theory and, in practice, even harder because the information informing fair levels is (a) scarce, and (b) constantly changing. Markets aren’t perfect, but they have a much greater information surface area than a central bureaucracy.
  3. If people are deprived of the right to interact with the market independently, the government owns them. A political authority that outlaws private business is making itself The Boss, and making fair competition impossible. (Competition will exist, but involving personal risk, because it’s illegal.)

Private business has a lot of problems. It generates economic inequality. It tends toward organizational sociopathy (although I believe that problem can be solved, by taking a K-selective approach to process and value reproduction where quality trumps rapid growth). Some regulatory oversight is required. It can’t solve all of our economic problems; we need a socialist infrastructure that protects the unlucky. Yet, for all its flaws, the capitalistic market economy is still a wonderful, necessary thing. (I say this as a 3-sigma leftist, because it’s true.) It funds creation because governments can’t. It’s self-correcting. In many ways, it works– although not perfectly.

The Corporate System, however, is something else. It provides none of the benefits of private business that mandate our acceptance of it. Rather, it’s an attempt to build a corrupt blatnoy bureaucracy within capitalism. It occurs when the entrenched grow tired and no longer want to live in a world where ambitious up-and-comers can compete with them. It’s not entrepreneurial. In fact, it’s the deployment of managerial authority (within a company or, in the VC-istan case, via the reputation economy of the most credible investors) to shut down true innovation.

This is the fundamental reason why Corporate Authority is such a pile of self-contradicting, hypocritical dogshit. Government authority (e.g. to enforce speed limits) is necessary in small doses. There are problems that require authority to solve them. Knowing the abuses that occur when such authority is unaccountable, we demand the right to fire the elected representatives at the top of that structure. However, capitalism, properly practiced, is not about authority. The two don’t belong in the same room together. Rather, we allow capitalism to exist (in spite of its flaws, such as inequality) because we know authority is inadequate to solve all problems, and has no right to go into most spheres of human endeavor. Capitalism, done right, is about the removal of authority.

This is why the interfering, self-serving managerial authority that makes Corporate America such hell deserves to end. Now. It’s bad for us and it’s bad for capitalism. We must make it retreat in disgrace.


Status checks and the stink-when-breaking problem


Most software engineers, being rational people and averse to legacy, hate management– a set of legacy processes that were necessary over the concave, commodity work that existed for 200 years, but counterproductive over the convex stuff we do as software engineers. No, we don’t hate managers. They’re just people. They have this job we dislike because it seems to require them to get in our way. We only covet their jobs insofar as we wish we had more control over tools and the division of labor– we have this impractical dream of being managers-who-code (i.e. exceptionalism) meaning that we’re full-time programmers who have managerial control, even though the real world doesn’t work that way and most actual managers find their time spent mostly in meetings– but mostly, we think what they do is unnecessary. That’s not 100% true. It’s about 97% true. Resolving personnel issues, mentoring new hires, building teams and looking out for the health of the group– all of these are important tasks. The “do what I say or I’ll fire you” extortionist management that exists when bosses prioritize “managing up” and stop giving a shit about genuine leadership and motivation is the problem. I’ve said enough on that to make my opinions well-known.

A typical inefficiency that we perceive, as engineers, is the fact that most of the work that “needs to be done” doesn’t. It’s fourth-quadrant work that exists only because people lack the ability to refuse it. We’re rationalists and careerists, so we doubly hate this. We realize that it’s inefficient, but, more to our dislike, that it’s individually bad for us to work on low-yield projects. Why is there so much nonsensical dues-paying? And why does Corporate America feel more like the Stanford Prison Experiment than a group of people working together to achieve a common goal? I believe I can answer that. We need to understand what “business” is to non-rational people. It’s emotional. Judgments are made quickly, then back-rationalized. Decisions are made without data, or on sparse and poorly audited data, which is why an error in an Excel model can be such a big deal. So let me get to the one question that non-technical “businessmen” ask about people when evaluating them. It is…

Do they stink when they break?

Some people drop to 0.9 (that is, they lose 10% of peak productivity) under managerial adversity. (A few might go up, with a 1-in-1-million effective-asshole manager like Steve Jobs. So rare I’m ignoring it.) Some drop to zero. Some go to -10 as they cause morale problems. Managers want to see what happens ahead of time so they can promote the first category, mark the second as non-advancers, and fire the third. People are sorted, then, based on the rightward side of their decline curve. The reason this is fucked-up and morally wrong is that, if management is at all decent, that extreme territory is never seen. So a lot of what management tries to do is probe people for those tendencies in little ways that don’t disrupt operations.
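The sorting rule above can be written down as a toy classifier. The thresholds here are my own illustration; the post only gives the three anchor points (0.9, 0, and -10):

```python
# Toy encoding of how (per the post) managers sort people by where
# their productivity bottoms out under managerial adversity.
# 1.0 = peak productivity; thresholds are illustrative assumptions.
def triage(output_under_adversity: float) -> str:
    """Classify a worker by the low point of their decline curve."""
    if output_under_adversity > 0.0:
        return "promote"        # degrades gracefully under pressure
    if output_under_adversity == 0.0:
        return "non-advancer"   # stalls out, but does no damage
    return "fire"               # net-negative: causes morale problems

print(triage(0.9))    # promote
print(triage(0.0))    # non-advancer
print(triage(-10.0))  # fire
```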

Rationalists like engineers don’t like this. We look at the affair and say, “Why the fuck are you trying to break people in the first place? That’s fucking abusive and wrong.” We try to build systems and processes that fail harmlessly, do their jobs as well as possible, and certainly never break people. We like things that “just work”. Some of us are glory whores when it comes to macroscopics and “launches”, and we have to be that way for career reasons but, on the whole, the best endorsement of a product is for people to use it and not even know they are using it, because it works so damn well. When we look for weak points in a system, we look for technical issues and try to solve those. Non-rationalists don’t think that way. They look for weak people and, when there are none, they make one up to justify their continued high position (and as an easy blame eater).

When you deal with a non-rationalist, the first thing you need to know about him is that he’s trying to figure out how bad it smells when you break. If you’re loaded up with 80 hours per week of undesirable work for two months on end, will you bear it? Or will you quit? Underperform? Make mistakes? Whine and cause morale problems? Grind to a halt, get fired, and demand severance not to sue? Sabotage systems? Non-rationalists want to find peoples’ failure modes; engineers want to build systems that are tolerant of human failure.

I think that this stink-when-breaking matter is one of the fundamentals of negotiation, as well. When you stink, you lose all negotiatory power. You whine, you make messes, you get people pissed off, and you end up in a position of weakness. You prove that you’ve been broken. You might have worse kinds of stink up your sleeve, but you’ve put your managerial adversary into damage-control mode and lost your element of surprise. The best way to extract value from a non-rationalist is, instead, to leave him not knowing whether you’ve broken yet, which means you have to minimize your smell (of defeat, resentment, hatred, or adversity). Are you intact, or is your stink very mild? You just can’t stink, if you want to negotiate from a position of strength, no matter how badly you’ve been broken. Breaks will heal; stink is permanent.

Okay, we now have an understanding that allows us to understand the non-rationalists that we either answer to, or answer to people who answer to. That’s a start. Now I’m going to talk about the bane of programmers: impromptu status checks. I’m not talking about necessary communication, as would occur during a production crisis, nor about planned standups (which are painful when poorly run, but at least scheduled). I’m talking about the unplanned and involuntary games of 52-Card Pickup that managers inflict on us when they’re bored. Mention status pings to a software engineer, and it’s like you just uttered a racial slur. We fucking hate hate hate hate hate those with a +8 modifier to Anger. I call it “email syndrome”. Email can be checked 74 times per day with no degradation to performance. Making the same assumption about a person is a terrible idea. For an engineer, one impromptu status ping per day is an order of magnitude too much (during normal circumstances). If you really need daily status information (and you shouldn’t; you should trust people to do their jobs) then run a scheduled standup for which people can prepare. It’s just disrespectful to put people on trial, constantly, without their having the right to prepare.

I used to take a Hanlon-esque approach to status pings. (Hanlon’s Razor is “never attribute to malice what can be explained by benign incompetence.”) Now, my attitude is more cynical. Why? Because the first thing you learn in Technology Management 101 is that engineers can’t fucking stand impromptu status pings. Our feeling on this is simple: if you really don’t trust us with hours and days of our own fucking time, then don’t hire us. It’s that simple and, not only that, but most technology managers know that programmers hate these utterly pointless status checks. If we wanted to be interrupted to feed babies, we’d have our own.

So what’s the real point of an impromptu status check? You know that Carlin-esque phenomenon where you look at your watch but don’t read it, then have to look again because you didn’t catch what time it was? That’s exactly how managers’ impromptu status checks work. It’s not actually about project status. No one has any clue how long things “should” take anyway; on that, the world is split between people who know they don’t know and people who don’t know that they don’t know. It’s about making a manager feel secure. You have two jobs. One is to make the manager feel important; the whole reason he is interrupting you is to establish that he can, and your job is that of an actor. The other is to make him feel like you are not slacking, and that aspect of it (lack of trust) is goddamn insulting. Again, I’m not talking about a production crisis that might actually merit such a fast audit-cycle. I’m talking about 2-20 status pings per week during peace time. It’s painful and wrong and the context-switches drop productivity to zero.

Why don’t managers intuitively understand this? I’m going to paint management in the best possible light. They’re salesmen. That’s not a pejorative. Far from it. That’s a valuable function, especially because while trust sparsity is horrible as a matter of policy, it tends to be somewhat inevitable, and salesmen are the best people at breaking down trust sparsity. They sell their superiors on their ability to deliver a project. They (if they’re good) sell their reports on the value of the project to their careers. All day, they’re selling someone on something. So they’re in Sales Mode constantly, and that is their flow. For a manager, a status ping is a welcome break: a chance to have someone else do the selling (i.e. prove that he hasn’t wasted the previous 3.37 hours of working time) to them. For the programmer, though, it’s a sudden and unexpected jerk into Sales Mode, which is a different state of consciousness from Building Mode. Both kinds of flow have their charms and can be a lot of fun, but mixing the two is both ineffective and miserable, especially when not by one’s choice.

If these status pings are so horrible, then why are there so many programming environments in which they’re common? Well, most managers are non-rationalists, and non-rationalists are all about emotion, perception, and superficial loyalty. Again, remember that question, and that test: does this guy stink when he breaks? Is his attitude (that he’s feeding an impulsive toddler) evident? Does his face show a look of enmity or defeat? Does he make eye contact with the boss or with the computer screen? What’s his posture? What’s his tone of voice?

Of course, these status checks have negative value as an assessment of project progress, because the signal is going to be the opposite of the truth. Employees pass these impromptu status checks when they’re prepared, and that means they aren’t doing any real work. They fail (because they’re pissed off about the interruption) when they’re actually getting something accomplished. That, however, is not what they’re really about. The manager will forget everything you told him, 15 minutes later. (That’s part of why status checks are so fucking infuriating; they’re about social status, not project progress.) They’re about not stinking when you break. It doesn’t matter what you got done in the past few hours, but you need to show an ability to communicate progress without getting pissed off about the indignity of a tight audit cycle and hope, for fuck’s sake, that either he’ll start to trust you more or that you’ll be able to get a different manager.

What’s the solution? Well, there are two kinds of status checks. The first kind is the simple kind that doesn’t involve follow-on questions. I think the solution is obvious. Write down a sentence about what you’ve accomplished, every 30 minutes or so. Maybe this could be integrated with the Pomodoro technique. Now you have a story that can be read by rote, and you don’t have to deal with the social and emotional overhead of a sudden switch into Sales Mode. You’re prepared. That will work as long as there aren’t follow-on questions; if there are, then it’s the second kind of status check. The second kind is the cavity search that comes from a manager who just has nothing else to do. I think the only solution in that case (unless it’s rare) is to change companies.
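The note-taking habit is simple enough to automate. A minimal sketch of a timestamped status log you can read from by rote when pinged; the filename and function names here are my own invention, not anything the post prescribes:

```python
# Minimal sketch of a "write it down every 30 minutes" status log.
# LOG_FILE is an arbitrary, hypothetical choice.
from datetime import datetime

LOG_FILE = "status.log"

def log_status(note: str, path: str = LOG_FILE) -> str:
    """Append one timestamped line describing what you just accomplished."""
    line = f"{datetime.now():%Y-%m-%d %H:%M}  {note}"
    with open(path, "a") as f:
        f.write(line + "\n")
    return line

def latest_status(path: str = LOG_FILE) -> str:
    """The line to recite when an impromptu status check arrives."""
    with open(path) as f:
        lines = f.read().splitlines()
    return lines[-1] if lines else "(no entries yet)"

log_status("Finished parsing layer; starting on the validator.")
print(latest_status())
```

Pairing a call to `log_status` with the end of each Pomodoro interval keeps the log current without breaking Building Mode.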


The shodan programmer


The belt-color meritocracy

“Nothing under the sun is greater than education. By educating one person and sending him into the society of his generation, we make a contribution extending a hundred generations to come.” — Dr. Kano Jigoro, founder of judo.

Colored belts, in martial arts, are a relatively modern tradition, having begun in the late 19th century. It started informally, with the practice in which the teacher (sensei) would wear a black sash in contrast against the white uniform (gi) in order to identify himself. This was later formalized by Dr. Kano with a series of ranks, and the black sash was replaced by a black belt, worn in addition to the white belt holding the gi together. Beginners were assigned descending kyu ranks (traditionally, 6th to 1st) while advanced ranks were dan (from 1st up to 10th). At a dan rank, you earned the right to wear a black belt that would identify you, anywhere in the world, as a qualified teacher of the art. Contrary to popular opinion, a black belt doesn’t mean that you’ve fully mastered the sport. Shodan is taken, roughly, to mean “beginning master”. It means that, after years of work and training, you’ve arrived. There’s still a lot left to learn.

Over time, intermediate belt colors between white and black were introduced. Brown belts began to signify nearness to black-belt level mastery, and green belts signified strong progress. Eventually, an upper-division white belt began to be recognized with a yellow belt, while upper-division green belts were recognized with blue or purple. While it’s far from standard, there seems to be a general understanding of belt colors, approximately, as follows:

  • White: beginner.
  • Yellow: beginner, upper division.
  • Green: intermediate.
  • Purple: intermediate, upper division.
  • Brown: advanced. Qualified to be senpai, roughly translated as “highly senior student”.
  • Black: expert. Qualified to be sensei, or teacher.

Are these colored belts, and ranks, good for martial arts? There’s a lot of debate about them. Please note that martial arts are truly considered to be arts, in which knowledge and perfection of practice (rather than mere superiority of force) are core values. An 8th dan in judo doesn’t mean you’re the most vicious fighter out there (since you’re usually in your 60s when you get it; you are, while still formidable, probably not winning Olympic competitions) because that’s not the point. These belts qualify you as a teacher, not only a fighter. At that level, knowledge, dedication and service to the community are the guidelines of promotion.

Now, back to our regularly scheduled programming (pun intended).

Would colored belts (perhaps as a pure abstraction) make sense for programming? The idea seems nutty. How could we possibly define a rank system for ourselves as software engineers? I don’t know. I consider myself a 1.8-ish ikkyu (1-kyu; brown belt) at my current level of programmer development. At a typical pace, it takes 4-6 years to go from 1.8 to 2.0 (shodan); I’d like to do it in the next two or three. But we’ll see. Is there a scalable and universally applicable metric for programmer expertise assessment? I don’t know. 

To recap the 0.0-to-3.0 scale that I developed for assessing programmers, let me state the most important points:

  • Level 1 represents additive contributions that produce some front-line business value, while level-2 contributions are multiplicative and infrastructural. Level-3 contributions are global multipliers, or multiplicative over multipliers. Lisps, for example, are languages designed to gift the “mere” programmer with full access to multiplicative power. The Lispier languages are radically powerful, to the point that corporate managers dread them. Level-2 programmers love Lisps and languages like Haskell, however; and level-3 programmers create them.
  • X.0 represents 95% competence (the corporate standard for “manager doesn’t need to worry”) at level X. In other words, a 1.0 programmer will be able to complete 95% of additive tasks laid before him. The going assumption is that reliability follows a logistic “S-curve”: a person is 5% competent on tasks 1.0 levels higher, 50% at 0.5 above, and 95% at-level. So a 1.8 engineer like me is going to be about 85% competent at Level-2 work, meaning that I’d probably do a good job overall but you’d want light supervision (design review, stability analysis) if you were betting a company on my work.
  • 1.0 is the threshold for typical corporate employability, and 2.0 is what we call a “10x programmer”; but the truth is that the actual difference in value creation is highly variable: 20x to 100x on green-field development, 3x to 5x in an accommodating corporate environment such as Google, and almost no gain in a less accommodating one.
  • About 62% of self-described professional software engineers are above 1.0. Only about 1 percent exceed 2.0, which typically requires 10-20 years of high-quality experience. The median is only 1.1, and 1.4 is the 85th percentile.
  • At least in part, the limiting factor that keeps most software engineers mediocre is the extreme rarity of high-quality work experience. Engineers between 1.5 and 1.9 are manager-equivalent in terms of their potential for impact, and 2.0+ are executive-equivalent (they can make or break a company). Unfortunately, our tendency toward multiplicative contribution leads us into direct conflict with “real” managers, who consider multiplicative effects their “turf”.
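The logistic “S-curve” assumption above pins down a concrete formula. The calibration constant below is derived from the essay’s three data points (95% at-level, 50% half a level up, 5% a full level up); the function itself is my sketch of that model, not something from the original:

```python
import math


def p_success(engineer_level, task_level):
    """Logistic competence model: probability that an engineer at
    engineer_level completes a task at task_level.
    Calibrated so p = 0.95 at-level, 0.50 at +0.5, 0.05 at +1.0."""
    # Solve 1/(1 + e^(-0.5k)) = 0.95  =>  k = 2 * ln(19)
    k = 2 * math.log(19)
    return 1 / (1 + math.exp(-k * (engineer_level - task_level + 0.5)))


# A 1.8 engineer on level-2 work, per the essay's example:
print(round(p_success(1.8, 2.0), 2))  # → 0.85
```

The midpoint sits half a level above the engineer, which is why a 1.8 engineer lands at roughly 85% on level-2 work, matching the “light supervision” judgment above.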

Programming– like a martial art or the board game Go, both being uncommonly introspective on the measurement of skill and progress– is a field in which there’s a vast spectrum of skill. 2.0 is a clear candidate for shodan (1st dan). What does shodan mean? It means you’re excellent, and a beginner. You’re a beginner at being excellent. You’re now also, typically, a teacher, but that doesn’t mean you stop learning. In fact, while you can’t formally admit to this too often (lest they get cocky) you often learn as much from your students as they do from you. Multiplicative (level 2) programming contributions are fundamentally about teaching. When you build a Lisp macro or DSL that teaches people how to think properly about (and therefore solve) a problem, you are a teacher. If you don’t see it this way, you just don’t get the point of programming. It’s about instructing computers while teaching humans how the systems work.

In fact, I think there is a rough correlation between the 0.0 to 3.0 programmer competence scale and appropriate dan/kyu ranks, like so:

  • 0.0 to 0.4: 8th kyu. Just getting started. Still needs help over minor compilation errors. Can’t do much without supervision.
  • 0.5 to 0.7: 7th kyu. Understands the fundamental ideas behind programming, but still takes a lot of time to implement them.
  • 0.8 to 0.9: 6th kyu. Reaching “professional-grade” competence but only viable in very junior roles with supervision. Typical for an average CS graduate.
  • 1.0 to 1.1: 5th kyu. Genuine “white belt”. Starting to understand engineering rather than programming alone. Knows about production stability, maintenance, and code quality concerns. Can write 500+ line programs without supervision.
  • 1.2 to 1.3: 4th kyu. Solidly good at additive programming tasks, and can learn whatever is needed to do most jobs, but not yet showing leadership or design sense. Capable but rarely efficient without superior leadership.
  • 1.4 to 1.5: 3rd kyu. Developing a mature understanding of computer science, aesthetics, programming and engineering concerns, and the trade-offs involved in each. May or may not have come into functional programming (whose superiority depends on the domain; it is not, in high-performance domains, yet practical) but has a nuanced opinion on when it is appropriate and when not.
  • 1.6 to 1.7: 2nd kyu. Shows consistent technical leadership. Given light supervision and permission to fail, can make multiplier-level contributions of high quality. An asset to pretty much any engineering organization, except for those that inhibit excellence (e.g. corporate rank cultures that enforce subordinacy and disempower engineers by design).
  • 1.8 to 1.9: 1st kyu. Eminently capable. Spends most of his time on multiplier-type contributions and performs them well. Can be given a role equivalent to VP/Engineering in impact and will do it well.
  • 2.0 to 2.1: 1st dan. She is consistently building high-quality assets and teaching others how to use them. These are transformative software engineers who don’t only make other engineers more productive (simple multiplierism) but actually make them better. Hire one, give her autonomy, and she will “10x” your whole company. Can be given a CTO-equivalent role.
  • 2.2 to 2.3+: Higher dan ranks. Having not attained them, I can’t accurately describe them. I would estimate Rich Hickey as being at least a 2.6 for Clojure, as he built one of the best language communities out there, creating a beautiful language on top of an ugly but important/powerful ecosystem (Java), and for the shockingly high code quality of the product. (If you look into the guts of Clojure, you will forget to hate Java. That’s how good the code is!) However, I’m too far away from these levels (as of now) to have a clear vision of how to define them or what they look like.
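The mapping above is just a piecewise lookup. A sketch of it as code (the band boundaries come straight from the bullet list; the function name is mine):

```python
def rank(score):
    """Map the essay's 0.0-3.0 competence score to its kyu/dan analogy.
    Band boundaries follow the bullet list above."""
    bands = [
        (0.5, "8th kyu"), (0.8, "7th kyu"), (1.0, "6th kyu"),
        (1.2, "5th kyu"), (1.4, "4th kyu"), (1.6, "3rd kyu"),
        (1.8, "2nd kyu"), (2.0, "1st kyu"), (2.2, "1st dan"),
    ]
    for upper_bound, label in bands:
        if score < upper_bound:
            return label
    return "higher dan ranks"


print(rank(1.8))  # → 1st kyu
```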

Is formal recognition of programmer achievement through formalized ranks and colored belts necessary? Is it a good idea? Should we build up the infrastructure that can genuinely assess whether someone’s a “green belt engineer”, and direct that person toward purple, brown, and black? I used to think that this was a bad idea. Why? Well, to be blunt about it, I fucking hate the shit out of resume culture, and the reason I fucking hate it is that it’s an attempt to collate job titles, prestige of institutions, recommendations from credible people, and dates of employment into a distributed workplace social status that simply has no fucking right to exist. Personally, I don’t lie on my resume. While I have the career of a 26-year-old at almost 30 (thanks to panic disorder, bad startup choices, and a downright evil manager when I was at Google) I feel like I still have more to lose by lying than to gain. So I don’t. But I have no moral qualms about subverting that system, and I encourage other people, in dire circumstances, to engage in “creative career repair” without hesitance. Now, job fraud (feigning a competency one does not have) is unacceptable, unethical, and generally considered to be illegal (it is fraud). That’s different, and it’s not what I’m talking about. Social status inflation, such as “playing with dates” to conceal unemployment, or improving a title, or even having a peer pose as manager during a reference check? Fair game, bitches. I basically consider the prestige-titles-references-and-dates attempt to create a distributed workplace social status to be morally wrong, extortionate (insofar as it gives the manager the power to continue to fuck up a subordinate’s life even after they separate) and just plain fucking evil. Subverting it, diluting its credibility, and outright counterfeiting in the effort to destroy it: all of these are, for lack of a better word, fucking awesome.

So I am very cynical about anything that might be used to create a distributed social status, because the idea just disgusts me on a visceral level. Ranking programmers (which is inherently subjective, no matter how good we are at the assessment) seems wrong to me. I have a natural aversion to the concept. I also just don’t want to do the work. I’d rather learn to program at a 2.0+ level, and then go off and do it, than spend years trying to figure out how to assess individuals in a scalable and fair way. Yeah, there might be a machine learning problem in there that I could enjoy; but ultimately, the hero who solves that problem is going to be focused mostly on people stuff. Yet, I am starting to think that there is no alternative other than to create an organization-independent ranking system for software engineers. Why? If we don’t rank ourselves in a smart way, then business assholes will step in and rank us anyway, and they’ll do a far shittier job of it. We know this to be true. We can’t deny it. We see it in corporate jobs on a daily basis.

A typical businessman can’t tell the difference between a 2.0 engineer and a 1.2 who’s great at selling his ideas. We tend to be angry at managers over this fact, and over the matter of what is supposed to be a meritocracy (the software industry) being one of the most politicized professional environments on earth; but when we denigrate them for their inability to understand what we do, we’re the ones being assholes. They police and measure us because we can’t police and measure ourselves.

So this may be a problem that we just need to solve. How does one get a black belt in programming? Most professional accreditations are based on churning out commodity professionals. We can’t take that approach, because under the best conditions it takes a decade to become a black belt/2.0+, and some people don’t even have the talent. This is a very hard problem, and I’m going to punt on it for now.

Brawlers and Expert Experts

Let’s peer, for a little while, into why Corporate Programming sucks so much. As far as I’m concerned, there are two categories of degeneracy that merit special attention: Brawlers and Expert Experts.

First I will focus on the Brawlers (also known as “rock stars” or “ninjas”). They write hideous code, and they brag about their long hours and their ability to program fast. There’s no art in what they do. They have only a superficial comprehension of the craft. They can’t be bothered to teach others what they are doing, and don’t have enough insight to be passable at teaching anyway. What they bring is a superhuman dedication to showing off, slogging through painful tasks, and kludging their way to something that works just enough to support a demo. They have no patience for the martial art of programming, and fight using brute strength.

Brawlers tend, in fact, to be a cut above the typical “5:01” corporate programmers. Combine that with their evident will to be alpha males and you get something that looks like a great programmer to the stereotypical business douche. Brawlers tend to burn themselves out by 30, they’re almost always men, and they share the “deadlines is deadlines” mentality of over-eager 22-year-old investment banking “analysts”. What they build is brittle, but they can build it fast, and they’re impressive to people who don’t understand programming.

Let’s think of corporate competition as a fight that lasts for five seconds, because power destroys a person’s attention span and most executives are like toddlers in that regard. In a three-minute fight, the judoka would defeat the brawler; but, in a 5-second fight, the brawler just looks more impressive. He’s going all out, posturing and spitting and throwing feint punches while the judoka seems passive and conservative with his energy (because he is conserving it, until the brawler makes a mistake, which won’t take long). A good brawler can demolish an untrained fighter in 5 seconds, but the judoka will hold his own for much longer, and the brawler will tire out.

With the beanbag coming in after 5 seconds, no one really lands a blow, as the judoka has avoided getting hit but the brawler hasn’t given enough of an opening for the judoka to execute a throw. Without a conclusive win or loss, victory is assessed by the people in chairs. However, the judges (businessmen, not programmers) don’t have a clue what the fuck they just watched, so they award the match to the brawler who “threw some really good punches” even though he failed to connect and would have been thrown to the ground had the fight lasted 5 seconds more.

Where are Brawlers on the engineer competence scale? It’s hard to say. In terms of exposure and knowledge they can be higher, but they tend to put so much of their energy and time into fights for dominance that the quality of their work is quite low: 1.0 at best. In terms of impressions, though, they seem to be “smart and gets things done” to their superiors. Managers tend to like Brawlers because of their brute-force dedication and unyielding willingness to shift blame, take credit, and kiss ass. Ultimately, the Brawler is the one who no longer wishes to be a programmer and wants to become more like an old-style “do as I say” manager who uses intimidation and extortion to get what he wants.

Brawlers are a real problem in VC-istan. If you don’t have a genuine 1.5+ engineer running your technical organization, they will often end up with all the power. The good news about these bastards (Brawlers) is that they burn themselves out. Unless they can rapidly cross the Effort Thermocline (the point at which jobs become easier and less accountable with increasing rank) by age 30, they lose the ability to put a coherent sentence together, and they just aren’t as good at fighting as they were in their prime.

The second category of toxicity is more long-lived. These are the people called Expert Beginners by Erik Dietrich, but I prefer to call them Expert Experts (“beginner” has too many positive and virtuous connotations, if one either takes a Zen approach, or notes that shodan means “beginner”). No, they’re not actual experts on anything aside from the social role of being an Expert. That’s part of the problem. Mediocrity wants to be something– an expert, a manager, a credible person. Excellence wants to do things– to create, to build, and to improve running operations.

The colored-belt metaphor doesn’t apply well to Brawlers, because even a 1.1 white belt could defeat a Brawler (in terms of doing superior work) were it not for the incompetence of the judges (non-technical businessmen) and the short duration of the fight. That’s more an issue of attitude than of capability; I’ve met some VC-istani Brawlers who would be capable of programming at a 1.4 level if they had the patience and actually cared about the quality of their work. It’s unclear what belt color applies; what is clearer is that they take their belts off because they don’t care.

Expert Experts, however, have a distinct level of competence that they reach, and rarely surpass, and it’s right around the 1.2 level: good enough to retain employment in software, not yet good enough to jeopardize it. They’re career yellow belts at 1.2-1.3. See, the 1.4-1.5 green belts have started exposing themselves to hard-to-master concepts like functional programming, concurrency and parallelism, code maintainability, and machine learning. These are hard; you can be 2.0+ and you’ll still have to do a lot of work to get any good at them. So, the green belts and higher tend to know how little they know. White belts similarly know that they’re beginners, but corporate programming tends to create an environment where yellow belts can perceive themselves to be masters of the craft.

Of course, there’s nothing wrong with being a yellow belt. I was a novice, then a white belt, then yellow and then green, at some point. (I hadn’t invented this metaphor yet, but you know what I mean.) The problem is when people get that yellow belt and assume they’re done. They start calling themselves experts early on and stop learning or questioning themselves; so after a 20-year career, they have 18 years of experience in Being Experts! Worse yet, career yellow belts are so resistant to change that they never get new yellow belts, and time is not flattering to bright colors, so their belts tend to get a bit worn and dirty. Soot accumulates and they mistake it (as their non-technical superiors do, too) for a merit badge. “See! It’s dark-gray in spots! This must be what people mean when they talk about black belts!”

There’s a certain environment that fosters Expert Experts. People tend toward polarization of opinion surrounding IDEs, but the truth is that they’re just tools. IDEs don’t kill code; people kill code. The evil is Corporate Programming. It’s not Java or .NET, but what I once called “Java Shop Politics”, and if I were to write that essay now, I’d call it something else, since the evil is large, monolithic software and not a specific programming language. Effectively, it’s what happens when managers get together and decide that (a) programmers can’t be trusted with multiplicative work, so the goal becomes to build a corporate environment tailored toward mediocre adders (1.0 to 1.3) but with no use for superior skill, and (b) because there’s no use for 1.4+, green-belt and higher, levels of competence, it is useless to train people up to it; in fact, those who show it risk rejection because they are foreign. (Corporate environments don’t intentionally reject 1.4+ engineers, of course, but those tend to be the first targets of Brawlers.) It becomes a world in which software projects are large and staffed by gigantic teams of mediocre developers taking direct orders with low autonomy. It generates sloppy spaghetti code that would be unaffordable in its time cost were it not for the fact that no one is expected, by that point, to get anything done anyway.

Ultimately, someone still has to make architectural decisions, and that’s where the Expert Experts come in. The typical corporate environment is so stifling that 1.4+ engineers leave before they can accumulate the credibility and seniority that would enable them to make decisions. This leaves the Expert Experts to reign over the white-belted novices. “See this yellow belt? This means that I am the architect! I’ve got brown-gray ketchup stains on this thing that are older than you!”

Connecting the Dots

It goes without saying that there are very few shodan-level programmers. I’d be surprised if there are more than 15,000 of them in the United States. Why? What makes advancement to that level so rare? Don’t get me wrong: it takes a lot of talent, but it doesn’t take so much talent as to exclude 99.995% of the population. Partly, it’s the scarcity of high-quality work. In our War on Stupid against the mediocrity of corporate programming, we often find that Stupid has taken a lot of territory. When Stupid wins, multiplicative engineering contributions become impossible, which means that everyone is siloized into get-it-done commodity work explicitly blessed by management, and everything else gets thrown out.

Brawlers, in their own toxic way, rebel against this mediocrity, because they recognize it as a losing arrangement they don’t want; if they continue as average programmers in such an environment, they’ll have mediocre compensation and social status. They want to be alpha males. (They’re almost always men.) Unfortunately, they combat it by taking an approach that involves externalized costs that are catastrophic in the long term. Yes, they work 90 hours per week and generate lots of code, and they quickly convince their bosses that they’re “indispensable”. Superficially, they seem to be outperforming their rivals– even the 1.4+ engineers who are taking their time to do things right.

Unfortunately, Brawlers tend to be the best programmers when it comes to corporate competition, even though their work is shitty. They’re usually promoted away before the externalized costs induced by their own sloppy practices can catch up with them. Over time, they get more and more architectural and multiplier-level responsibilities (at which they fail) and, at some point, they make the leap into real management, about which they complain-brag (“I don’t get to write any code anymore; I’m always in meetings with investors!”) while they secretly prefer it that way. The nice thing, for these sociopaths, about the opacity of quality in technology is that it puts the Effort Thermocline quite low in the people-management tier.

Managers in a large company, however, end up dealing with the legacy of the Brawlers and, even though blame has been shifted away from those who deserve it, they get a sense that engineers have “too much freedom”. It’s not sloppy practices that damaged the infrastructure; it’s engineer freedom in the abstract that did it. Alien technologies (often superior to corporate best practices) get smeared, and so do branch offices. “The Boston office just had to go and fucking use Clojure. Does that even have IDE support?”

This is where Expert Experts come in. Unlike Brawlers, they aren’t inherently contemptible people– most Expert Experts are good people weakened by corporate mediocrity– but they’re expert at being mediocre. They’ve been yellow belts for decades and just know that green-belt levels of achievement aren’t possible. They’re professional naysayers. They’re actually pretty effective at defusing Brawlers, and that’s the scary bit. Their principled mediocrity and obstructionism (“I am the expert here, and I say it can’t be done”) actually serves a purpose!

Both Brawlers and Expert Experts are an attempt at managerial arrogation over a field (computer programming) that is utterly opaque to non-technical managers. Brawlers are the tough-culture variety who attempt to establish themselves as top performers by externalizing costs to the future and “the maintenance team” (which they intend never to be on). Expert Experts are their rank-culture counterparts who dress their mediocrity and lack of curiosity up as principled risk-aversion. So, we now understand how they differ and what their connection is.

Solve It!

When I started this essay, I intended only to focus on programming, but I’ve actually come upon (at least) a better name for the solution to the MacLeod Organizational Problem: shodan culture. It involves the best of the guild and self-executive cultures. Soon, I’ll get to exactly what that means, and how it should work.


Seeking co-founders. [April 23, 2013]


It has rained frogs before and, yes, it occasionally rains dragons. This is one of those times.

I have a startup concept. I think it’s legit. Here are some of its features:

  • validated business model. This is an improvement (using machine learning, user-supplied data, and an innovative market mechanic that has never existed before) on a billion-dollar industry. 
  • a “sexy” problem. I’m not getting into details, but if we pull this off, it’ll be known around the world, and hundreds of thousands of people will love (and some will pay lots of money for) the product.
  • a progressive outlook. One of my major gripes with most of “Social” is that it’s focused on documenting social graphs rather than expanding them and improving people’s lives. This is focused on hard-core expansion.
  • lots of open-source activity. For the goal of rapid prototyping, I’d like to build this thing in Clojure. We’ll be building up Clojure’s machine-learning infrastructure in a major way, and we’ll be putting a lot of our best stuff into the open-source world.
  • “blue ocean” (for now). Right now, there are no competitors. There will be, in time, so we need to get a head start.
  • pivot potential. One of the subproblems is to assess code quality (including documentation, hence NLP) automatically, and that’s something that could become an enterprise product if the riskier, many-moving-parts concept that I have in mind proves impractical.

Here are the downsides:

  • I have about 2 weeks to pull the starting pieces together. I’ve worked for some terrible startups that have wrecked my finances, so I don’t have the savings that a person of my age should have. I can’t afford to burn savings, at all. I’m going to need a “10x hustler” (co-founder #1; see below) who can bring in month-to-month money right away. Failing that, we can’t do the project. I’m married and almost 30 and can no longer afford to pretend that “boring adult stuff”– like health insurance, the need to save for retirement, and saving for future child-raising expenses– doesn’t exist. I don’t expect to be matching the compensation level I’d be getting in finance (I regularly turn away $200k+ hedge fund jobs, and I wouldn’t draw that much out) but I can’t afford to be savings-negative.
  • We’ll be heavily reliant on third-party relationships. This is a problem with three main classes of actors. For two of them, there’s already strong signal that they would participate. The third class (who will provide our starting data) is more uncertain; it has two subclasses, one of which would be thrilled to participate but whose data will require serious NLP to mine, the other of which will have high-signal data but want to protect its interests. It’ll probably take 6 months to get that in order, and I’ll need someone with strong negotiation skills.
  • We’ll need front-end expertise. I’ve worked on back-end projects my whole life, but back-end is hard to sell. That comes down to front-end work. I can learn FE, and I will, but while I come up to speed on the presentation aspects, we’re going to need someone who’s already capable of building kickass demos.

Here are some traits of the company I want to build:

  • “Mid-growth”: after 25 people, no more than 40% headcount growth per year. I don’t want to be one of those horrible VC-istan companies that loses its culture in fast hiring. This isn’t a problem that requires a lot of “meat” but it does require a lot of smarts. 
  • Open allocation after 25 people. In our high-risk startup stage, we’ll make some autonomy compromises that are necessary to make the damn thing work. Laser focus. (We’ll be compromising our own autonomy, as its leaders, so it’s only fair…) However, once we’re de-risked, we simply won’t have jockeying for “plum projects” or “head count” squabbles as managers fight turf wars, because people will be able to transfer freely across the company, making projects compete for engineers rather than the reverse. No corrupt “transfer process” infrastructure that devolves into managerial extortion. I want a “stop-complaining-and-fix-the-damn-thing” culture where, instead of complaining to managers to get things done, people dive in and do it.
  • Outside of a traditional “tech hub”, with tolerance of distributed work. This is a longer-term project than the go-viral-or-die VC-istan nonsense. We’re going to need some very strong people, and we’ll need to pay them well, but we can’t afford to locate ourselves in a city where $150k is “a unit”. If this happens, I think we’ll probably set up in Austin, Madison, Portland, Pittsburgh or possibly Boston– unless business needs require us to be in SF or NYC. Of course, we’ll need to draw talent from everywhere and we’ll allow ourselves to be up to about 25% distributed.
  • Targets of 25+% female technologists, 40+% over-35, and 25+% with-children. Talent comes from all places, and we’d better be serious about making sure that all kinds of people are able to fit in: not just privileged, young white males like me. Forty-seven-year-old woman who happens to be a top-notch Lisper? Ready to kick ass? Then come in! I want this to be a company that I’d be happy to work for at age 60. This isn’t only because I value diversity (although that’s true) but also because, if no women or older people want to work for you, it’s a sign that you have a fucked-up culture. We need canaries and we need to make sure they thrive.
  • Employees given the same respect as investors. Because they are investors: of time and of their careers, which (in my view) accords them even more respect.
  • Profit-sharing over equity. Tiny equity allotments, typical in VC-istan, will be replaced with much larger profit-shares. My intention is to kick 50% of profits back to employees with no software engineer getting less than 1/(2*N) of that where N is the number of people.
  • Programmer is the highest title. We’ll have CxOs for external purposes as we market the company. Internally, the highest title will be Programmer. Not “Principal Software Developer” or “Chief Architect” or “Senior Staff Software Engineer XVI”. Top title (in pay, and respect) is Programmer (link NSFW; Zed Shaw rant).
  • Non-extortive workplace. Teaching and inspiring others will be the only method of management. To the extent that we may need “people management”, we will never grant managers unilateral termination or credibility-reduction powers. If they are not helpful to the managed, they go. Employees work for the company, not their specific “bosses”, and have the right to change managers or projects as needed.
  • No “performance” reviews. The language of “performance” reviews is fucking insulting and I won’t tolerate it in the company we build. We’ll have impact reviews in which we assess how well an employee’s work is being integrated with the rest of the company. Unless bad-faith non-performance is obvious, low-impact is our fault, not the employee’s, and we will work together to fix that.
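As a sanity check on the profit-sharing floor above, here’s a minimal sketch. The dollar figures are made up for illustration; only the 50%-of-profits pool and the 1/(2*N) floor come from the plan itself.

```python
def min_engineer_share(profits, n_people):
    """Floor on any one engineer's cut: 50% of profits go to the
    employee pool, and no software engineer receives less than
    1/(2*N) of that pool, where N is headcount."""
    pool = 0.5 * profits
    return pool / (2 * n_people)

# e.g., $2M in annual profit across 20 people puts the floor at $25,000
# per engineer, on top of salary.
```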

Okay, so here are two co-founders I will absolutely need.

Co-founder #1: a business co-founder.

You respect and understand technology (but know enough to leave it alone) and you know how to sell it. We will be equals but, to the outside world, you’ll be the CEO. You have the contacts and resources to keep this idea funded and make sure it is coherent with the market. You will have authority to change business strategy. If you tell me that we need to “pivot” because the market is not interested in what we are offering, I will trust your judgment and make it happen. This will give you a great deal of authority over what is now “my idea” but, once we are a team, it is no longer my idea. You’ll be responsible for driving what we do to market coherence (i.e. keeping us paid and profitable).

You will raise money and public interest in the product, but your ultimate goal must be to bring us to profitability as quickly as we can get there (so we can protect ourselves from aggressive investors who might threaten the culture). If we cannot raise capital through typical means, then you will find consulting clients, for me and other programmers we employ, sufficient to fund the firm until we are profitable. (I’m a top-notch functional programmer with machine learning experience; it shouldn’t be hard for you to do that. You’d have to really fucking suck at sales not to be able to sell me, and when we start, all the programmers we hire are going to be roughly as good as I am, if not better.)

Remember: we have about two weeks (starting today) to decide if this can happen. If not, I get a day job and (because I’m sick of “job hopping”) I am going to make a real effort to stay there for 4+ years, meaning that this window closes.

Co-founder #2: top-notch front-end programmer and designer.

You’re a top-notch front-end programmer, fluent in technologies like JavaScript/ClojureScript and HTML5. You’re a designer and teacher who doesn’t consider his or her work done until it’s easy to use. We’ll probably spend a large percentage of the first 6 months teaching each other so that I have a passing competency at what you do, and vice versa. Ultimately, we’re both going to be full-stack. But for the short-term, I need someone who’s already “plug-and-play” when it comes to front-end design work. We will be on terms of equality but your title (for external reference) will be Chief Design Officer.

Co-founder #3: that’s me. 

Who in the eff am I? In 2003, I invented a kick-ass card game called Ambition. In 2004, I earned an Honorable Mention (58 out of 120 points; about 65th place nationally) on the Putnam exam. I was a data scientist “before it was cool” and have been using functional programming languages and working on machine learning projects since about 2006. I’ve used Clojure, which is what we’ll be building our back-end in, off and on since 2008, when it was brand-spanking-new. I run a blog (this one) that gets about 2,000 hits per day, with almost all of that in the technology community.

I will work on the back-end programming, the machine learning research, and also the cultural infrastructure that will make us one of the best companies in the world circa 2018. For external purposes, I will be Chief Technology Officer and Cultural Director. However, I intend to cede the CTO role (remember that titles are only of external importance) in time and take a Chief Research Officer role.

Okay, let’s go.

If you want to talk, now’s the time. Email me at michael.o.church at Google’s email service, and we’ll talk. Please tell me which of the two roles above you believe you’d be able to fill, and I’ll get back to you within a couple of hours, or the next morning if you write at night. I’m also generally available for phone conversations between 6:30 and 22:00 EDT.


Update on what I’d like to build [April 28, 2013]


First of all, I owe an apology to many people. I said on April 23 that I’d be getting back to people quickly if they reached out, and I failed at that. My mailbox is full of messages to which I intend to reply. My life is hectic right now. First, I’m evaluating whether my startup concept has legs. Second, I’m concurrently job searching, working on a couple of consulting projects, and getting back into Clojure, which has proven itself an awesome language because my knowledge has “stuck” even though I was away from it for 2 years. It might seem a bit manic, but I know what I’m doing. Solving problems is something I’m good at, and can do for 70 hours per week. Put me in an office and make me suffer social anxiety due to visibility (fuck open-plan offices, which make me feel like a zoo animal) and expected hours and various forms of conformist human idiocy, and you’re lucky to get 20 decent hours (out of me, or out of anyone, because humans don’t work that way). Give me a problem to solve and I can just go. I’ve been “going” on a lot of different, important problems. Anyway… my apologies. I’ve been far busier over the past few days than I would have ever expected.

Let me get a bit more into “The Idea” behind the startup concept that I discussed on April 23, 2013. I won’t get too specific, because the tactics are still being determined. I’m ready to discuss the why. The how I am keeping confidential.

I have one policy with regard to the “stealing” of my ideas as I share them. I can’t stop you. So go ahead, but let’s agree on one rule. Only do it if you’re in that tiny set of people (maybe 20 in the world for this particular idea) who would do a better job than I would. If you can take my 9.5 idea and turn it into a 9.8 and build something transformative, then go ahead. And if you’re that good, you’ll certainly recognize that I have a place in whatever is built, at least in the discussion; if you think I’m a “bozo” then you can’t assess talent properly and you have no place in anything within 1000 miles of a hard problem. So just keep that in mind.

What problem do I want to solve? Career coherency. First focus would be on software engineering, where the job market is disastrously inefficient because it’s extremely hard for engineers and employers to evaluate each other. Here are just a few of the pain points:

  1. Good engineers are very hard for employers to find, and good jobs are very hard for engineers to find (bilateral matching problem).
  2. Engineers rarely improve on paid work, having to dedicate time and mental energy to unpaid learning, possibly jeopardizing job performance. (That’s career incoherency, which is when job and career conflict.)
  3. Engineers rarely know how to improve, or even where to look for mentorship. How do we turn a query like “I want to become a data scientist in 3 years” into a traversal of the pre-requisite graph (e.g. start with linear algebra, then take a machine learning course, then learn about databases) that a person can actually execute?
  4. Employers’ demands for “production experience” with hot technologies (rather than self-directed learning being sufficient) create environments within companies characterized by nasty, careerist technology wars that have nothing to do with solving problems. Finding some other way to validate engineering capability would be valuable, if for no other reason than the fact that it would clear out the toxic “resume-driven development” that creates unnecessary tech-stack churn.
  5. Companies currently have very little incentive to invest in employees’ careers. They tend to see that as counterproductive, since it makes them more externally marketable, and because there’s no penalty (especially in a fascist environment where “bad-mouthing” even deserving companies hurts a person’s career) for them to hire people but fail to invest in them.

There are some really BIG problems here… Can I solve all of them, individually? Of course not. I’d rather build the tools that make people who have the relevant domain knowledge better at solving them. For example, coding tests might be one way to validate engineer capability, but I’d rather not get into that business directly (by designing the exams) if I don’t have to, because I’d be competing in a red ocean with people who know the problem better than I do.

“Recruiting” is, likewise, a field that is ripe for disruption and it’s quite a major business. Wall Street and technology recruiters collect 15 percent of first-year salary (regardless of employee success). I don’t want to put them out of business or compete with them (again, I can’t). I’d rather work on building tools to make them better. Can a recruiter assess the quality of someone’s open-source, 3000-line Scala library? Of course not. Can an engineer? Probably, but who has the time to read all that? The goal isn’t to compete with existing players. It’s to make them better at their jobs by giving them tools to help them do what they’re currently bad at, and having those tools collect lots of valuable data about market conditions, the career landscape, and the “hidden nodes” of the job market structure.

Finally, a major component of what I want to build is career coaching for engineers: find them mentors, recommend jobs, and tell them how to improve their employability, long-term success and happiness, and become better engineers. Hidden-node discovery is also a part of it. Let’s say that Jake is 23 years old and just discovered Python. He wants to be a data scientist by age 26. There’s a huge prerequisite graph (linear algebra, to start) of which he has no idea. How do we provide that in an automated way? How do we connect him either with an individualized career plan, or with someone who can build such a plan for him?
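Mechanically, turning a goal like “data scientist in 3 years” into an executable plan is a topological sort of the prerequisite graph. A minimal sketch follows; the topics and edges are an illustrative toy curriculum, not a real one.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical prerequisite graph: each topic maps to the set of
# topics that must come before it. Edges are illustrative only.
prereqs = {
    "linear algebra": set(),
    "probability": set(),
    "databases": set(),
    "machine learning": {"linear algebra", "probability"},
    "data science role": {"machine learning", "databases"},
}

def study_plan(graph):
    """Return topics in an order where every prerequisite appears
    before any topic that depends on it."""
    return list(TopologicalSorter(graph).static_order())
```

A real version would weight edges by time-to-learn and prune topics the client already knows, but the core traversal is this simple.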

The Whole Problem would take 5-10 years to solve, at least. But there are subproblems within it that can get the game started.

Where do I (and what I want to build) come in? Well, there’s a lot of data to gather, and then to mine. I personally would like to focus on the data-mining and market analysis, because that’s my strong suit. That’s where I’m a 3-sigma player– not recruiting or designing code tests. This isn’t a “Big Data” problem, because we’re not talking about sorting through petabyte-scale datasets to extract microstructural signals. It’s a Smart Data problem. The end goal, actually, is to provide a value-added ranking (or employer “credit scoring”) of job and educational opportunities that tells young talent which jobs will help their careers, and which will hurt them.
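To make “value-added ranking” concrete: a naive version scores each employer by how much its alumni improve relative to where they started, so that a firm feeding on high-quality inputs doesn’t get credit for the inputs themselves. Everything below (the names, the numbers, the before/after metric) is an illustrative assumption, not the actual model.

```python
def rank_employers(employers):
    """employers maps name -> list of (score_at_hire, score_at_exit)
    pairs on some 0-10 capability/career metric. Value added is the
    mean improvement, which crudely controls for input quality."""
    scores = {
        name: sum(after - before for before, after in hires) / len(hires)
        for name, hires in employers.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical data: GrowthCo improves its people; StagnateCorp doesn't.
sample = {
    "GrowthCo": [(5.0, 8.0), (6.0, 9.0)],
    "StagnateCorp": [(7.0, 7.0), (8.0, 7.0)],
}
```

The hard part, of course, is the metric itself, not the arithmetic: a serious model would regress outcomes on input quality rather than subtract a single at-hire score.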

See, code tests (in theory, anyway) and open-source code samples solve half the problem, which is keeping engineers honest by evaluating their capability in order to see if they’re as good as they claim they are. We don’t need to build code tests; people are working on that problem, and it’s probably cheaper to partner with them (if possible) than to build all that infrastructure ourselves.

Instead, I want to attack the other half, which is using various signals about engineer success (compensation, capability, career progress) to keep employers honest. Companies that improve their employees’ long-term career prospects should be lauded in the public. Companies that take in a lot of talent, but load people with crappy work that stagnates their careers, should be penalized severely. People have a right to know whether a specific job is likely to improve their (a) compensation, (b) career fulfillment and happiness, and (c) skill growth. Right now, they don’t. The result of this opacity is that employers can (and often do) recruit in bad faith by overselling crappy roles and work that damage peoples’ careers. I want to drop a pipe on that shit.

Here are some queries that, right now, go unanswered, and for which no one knows the right answer:

  • do engineers improve faster (in compensation, career mobility, and skill set) working at Google or at Facebook? How about Google’s Mountain View office versus New York versus Seattle? What’s the best job to take if you want to become a data scientist? Or a founder? Or if you don’t care but really want to make a lot of money?
  • how will the decision to live in Palo Alto versus Austin affect one’s earnings 10 years out? Is it worth it to live in a high-COL “star city”? Or are the career opportunities just as good or better in the Austin area?
  • who are the 10 best engineers, based on quality of open-source contributions, in Boston?
  • for a client founder: what city has the best brains-to-bucks ratio? Developer salaries (per location) are easy to figure out. What about developer quality?
  • which recruiter at, e.g., Google is most likely to respond positively to John Doe’s resume? Which recruiters place the most successful candidates (both in terms of getting the job, and in doing well there)?
  • for a client engineer, Mike Roe: is Mike Roe strong enough to command $7,500 per month as a freelancer? How would this change if he moved to San Francisco? Can we connect him with appropriate work? If he’s not ready to go independent, which jobs will get him there the fastest?
  • for a client founder or CEO: is Sarah Noh a capable enough engineer to be CTO? How adept is she at assessing technologies and reviewing code for quality?

I’d like to solve some of that stuff, with some hard-core data analysis, and fix this fucked-up inefficient job market. Here are the things I really want:

  • I want engineers to get what they’re worth. Right now, engineers suffer under conditions of low autonomy that render them unable to deliver much economic value, and are poorly compensated because of that low value rendered. In other words, typical shitty companies give engineers low-quality work and compensate them appropriately for what is done (within the context of the closed-allocation system). Slightly better companies give them good work and compensate them at-market because, if they’re already giving engineers real work, they’re already ahead of 90% of the pack and don’t need to compete on salary!
  • I want companies to get good engineers. Right now, assessing a software engineer (even after day-long interviews, code samples and tests, etc.) is almost impossible. Non-engineers literally can’t do it (a non-technical CEO is playing the lottery when he hires a CTO, because if he pulls in a smiling idiot– and those smiling idiots are great at seeming smart– his company will be bozos all the way down) and most engineers themselves are bad at it (tending to rely on technological prejudices, e.g. he used <language X> so he’s an idiot, rather than substance).
    • The two problems above are connected! Engineer compensation is low because assessment is bilaterally impossible (employers can’t tell if an engineer is any good, and engineers can’t tell if a job is any good, until ~6 months). What emerges is a sociopathic climate of reckless over-hiring, low compensation, and fast firing. Compensation and job autonomy are low because there are so few good engineers out there; but in the rare case of a “hit”– the engineer and company work well together– conditions rarely improve because, as the firm sees it, he should be happy to have a job!
  • I want transparency (into engineer and job quality) that kills off this feudalistic reputation economy. For good. Word-of-mouth is slow. It doesn’t scale. It makes location far more important to a person’s career (driving up real estate prices) than it should be. It also enables professional extortion (bad references, “job hopping” stigma, internal and external credibility markets). Let’s nuke this nasty edifice from space by outperforming it so decisively (by actually figuring out the things it’s supposed to assess, but fails at doing so) that it can never come back.
  • I want engineers to improve. Or, at least, I want them to know how to improve. If Sarah wants to be a data scientist at Palantir in 4 years and is willing to put in the work, she should have the resources to get there.
  • I want companies to improve. If employers are ranked in a globally visible way by the long-term career success (or, more specifically, the value-added; Google’s alums do well but not because of the work most of them get, but because Google gets high-quality inputs) of their alumni, we can start discovering companies (and educational programs) that improve peoples’ careers, and we can embarrass companies that hurt their peoples’ careers. This will provide an incentive for good behavior by employers that, currently, doesn’t exist.

So, I think above is a rough outline of the game I’d like to play. Unfortunately, I can’t afford to do this unless month-to-month funding starts coming in quickly (hence the importance of the business co-founder). As I said, I’m concurrently job-hunting and, if I find something compelling before that happens, I’d like to make it a 4+ year fit, meaning I’ll have to put this concept away for a while. But that’s the idea, as it exists now. Thoughts?



Gervais / MacLeod 23: Managers, mentors, executives, cops, and thugs


This isn’t going to be the last Gervais / MacLeod post (first: here; most recent: here) and I don’t think there is a closed-form solution, so much as something I’m paddling toward. I’m still getting incrementally closer to “the coup”, but I think there’s at least one more topic I must cover before I can do that. I’ve ripped on managers, as a class, quite a lot. I feel like I have to answer the question: can managers be good? You might be surprised to hear me say this, but the answer’s “yes”. We need to revisit the concept of the job, though.

Based on what I’ve had to say about open allocation, one might think that I’m anti-manager. Indeed, I’ve been in technology for almost a decade and I can confidently say that, as a category, managers take away a lot more than they add. Most of them are ruinous and damage productivity severely, bringing organizations down to a tenth of what they could be. Good managers are rare; bad ones are common. I’ll first get into some of the archetypes that I’ve seen, to explain the kinds of badness.

Types of technology managers

1. Volcanos

The stereotype of the evil boss is the “cartoon asshole”– constantly yelling, insulting, and verbally humiliating his subordinates– and I don’t think these bosses are that dangerous. They’re verbally abusive, emotionally incontinent, and erratic, but they actually aren’t that dangerous from a careerist perspective. First, they’re predictable. Second, if someone wears his character flaws on his sleeve like that, he’s not powerful. He’ll fire you in the heat of the moment on Thursday, then try to rehire you on Monday. I know someone who fired his CTO at least ten times. The CTO was stoic about these unplanned long weekends. This is dysfunctional and extremely stressful, but it’s rarely individually threatening.

You can usually coast, most of the time, if you have a Volcano as a boss, so long as you seem loyal and dedicated. Don’t leave at 6:00 if he had a meltdown at 4:45. Always take his side on everything. Work hard when there’s an actual external deadline for which it’ll hurt him if things are missed. He’ll never forgive you if he sees you as disloyal. If you keep him seeing you as “on his side”, however, you can work at quarter-speed because these bosses create environments where no one’s getting anything done.

2. Psychopaths

I call these guys “The New Boss” because they’re rarely in one managerial position for long. They rise quickly. Unfortunately, if you’re a high-talent individual, you’ll be the first one that one of these targets. Most bosses realize that it’s unethical to compete with their subordinates. Psychopaths don’t. They use the temporary power advantage to take everything they can, and a high-talent person is a long-term threat that must be shot down.

One thing about Psychopaths is that they’re rarely official bosses. If the manager has a favorite– a “golden child” or desk ornament– there’s a good chance that he’s a Psychopath. Being a managerial favorite is great from a Psychopath’s perspective. There’s no accountability, and the Psychopath will probably outlive his manager (and might even be the one to do him in). This is one of the reasons why I think “too good” people (including myself) don’t make the best managers. They’re not great judges of character, so someone horrible gets their ear and everything goes to hell.

Psychopaths do a great job of exploiting the other boss archetypes. They also have a behavioral pattern that might seem not to make sense. Psychopaths love social competition, and they start it even when there’s nothing to gain. A Psychopath is always targeting someone. Why? Practice. Most of the harmful social games that Psychopaths play aren’t done for direct gain. They’re done for amusement in part, but also as a way for the Psychopath to test his abilities so that, when a social competition with actual stakes emerges, he can win it. This, I should note, is not always a conscious process. People who enjoy social competition don’t have that particular self-awareness. They perceive it as malicious fun, not practice for anything larger. But it is probably the case that the evolutionary purpose for the psychopath’s love of social experimentation and competition comes from an r-strategic source; it’s where they learn the social skills necessary to play that game.

Why do Psychopaths so quickly become managerial favorites? The answer is that managers have two jobs: managing up and down, and only get paid for the first of these. The Psychopath/golden-child makes himself appear competent to take over the “managing down”, work he does for his own malicious purposes, enabling the actual boss (whose ear he has, and whose credibility he uses to advance his own agenda) to focus full time on managing up.

3. Limp Paddles

Bill Lumbergh, from Office Space, is a Limp Paddle. “I’m also going to need you to go ahead and come in on Sunday.” (Emphasis mine.) On some level, the Limp Paddle wishes he were able to motivate people without resorting to authority. He has no charisma, and on some level, he knows it. He just can’t motivate people to do things. He can’t build teams or figure out what people want from him, because his social skills are so poor. This makes him really insecure, and he compensates by taking a “Theory X” approach to employees. It’s not his lack of charisma that’s the problem, but their lack of dedication, intellect, and decency. It’s not him, it’s them. At the core of the Limp Paddle is an armor of denial about this.

Limp Paddles and Volcanos share this insecurity, but Limp Paddles are more stuck in it. Volcanos are typically highly intelligent but socially underdeveloped, but they can rise to the executive ranks if their creativity is recognized. That’s not common, but it’s a possibility. The Limp Paddle can’t. He was never “cool”, and he’s rarely smart, and he has almost no chance of getting higher than middle management, because he’s so pervasively mediocre.

While Limp Paddles are not explicitly evil, they are passive-aggressive and will tear down a talented subordinate out of their own insecurity. They’ll surprise a good employee with a bad review to “show who’s boss”, mindless of the fact that the target’s career within that firm has been ruined. (Most companies, these days, use Enron-style performance reviews, meaning that review history is part of an employee’s transfer packet.) They do a lot of damage, and they do it stupidly because they’re so goddamn insecure about their terminal middle-manager status and their underlying problem, which they’re either unwilling or powerless to solve: the fact that they can’t motivate shit to stink.

4. Chickenhawks

Michael Scott, in The Office, is one of these. Unlike the Limp Paddle, he has social skills and can inspire loyalty. However, he’s reminiscent of the high school sports coach who (a) never fully grew up, and (b) takes an unhealthy amount of interest in the personal lives of adolescents. Some Chickenhawks cross genders and some don’t. It’s not always about sex, and it doesn’t always mirror sexual preference. (Michael Scott is fully heterosexual, but plays Chickenhawk to Ryan and, to a lesser extent, Jim.) What they have, however, is an attraction to youth. They feel like they didn’t get their younger years right– didn’t go to an Ivy League school, have enough sex, do enough drugs, be stylish enough– and they’re drawn to the playing of favorites in order to live vicariously through their proteges.

For Psychopaths, Chickenhawks are the easiest to exploit. Volcanos are erratic and Limp Paddles are uniformly mean to everyone, which means Psychopaths have to be cautious. Chickenhawks, though, want to be sold something and once the manager is sold, the favorite/desk-ornament can do no wrong. Chickenhawks want to feel cool again and, instead of buying a sports car or having an affair, they take a protege 20 years younger who still goes to nightclubs and has tried a few of the newest designer drugs– and, inadvertently, also give that person unilateral termination power over the rest of the team.

Chickenhawks are one of the worst kinds of managers, not because of what they do, but because of what happens when they pick up a favorite (invariably, a Psychopath who gets more sex than the Chickenhawk did at that age). If your Chickenhawk manager is unloaded (he has no desk ornament; his last one left him) then you will almost never get fired because Chickenhawks lack courage when unloaded, and the standard of work will be low. However, the loaded Chickenhawk will come down on you if his desk ornament (who is now his brain, and decides who the “performers” are) decides to attack you.

5. Executives

Being an Executive– a lazy “big picture” guy who wants others to implement his ideas– is more of an attitude than a matter of position. There are middle-managers who fancy themselves executives, and there are VP-level people who haven’t “learned” yet that they’re above any kind of work that involves accountability (including managing people). I’m talking about attitude as much as I am about rank.

What is the difference between an executive and a manager? Managers still have jobs, not sinecures. An executive’s job is just to Make Decisions and one of the perks is being unaccountable, because the executive can always blame subordinates for failing to implement his brilliant ideas properly.

Executives (small and big e-) don’t want to manage. They like the idea of holding power, but not the responsibility of using it correctly. They’re extremely toxic, because the only “work” they do is to make decisions that affect other people, and therefore criticizing an Executive’s decision becomes a personal affront. You’re effectively saying that he sucks at his job, in the context of how he defined it. The problem with the Executive is that (a) he holds the power, and (b) he’ll refuse to back down, because in his mind, the conversation isn’t about the decision, but about him. He’ll exhaust you into submission, extract a promise-by-default because you’re so worn-out, and then scream at you for failing to meet it– if he remembers the discussion, and the one nice thing about Executives is that they rarely remember what they told you to do yesterday.

Relevant to the Executive archetype is the engineer-manager impedance mismatch. As a software programmer, I’m routinely told by my “subordinate” (the compiler) that I fucked up at my job. Over minor typos, the machine yells back at me, “Fix your shit or I’m not doing anything.” It simply won’t do anything if I give it nonsensical instructions. It’s frustrating! But it’s also a part of how I see the world. I like blunt feedback. Loud failure draws attention and I can fix it. Silent failure is the worst. Executives don’t think this way. If they hand down nonsensical or conflicting requirements, your job is to “make it work anyway”, not push back (like a machine) until their instructions make sense. If you push back against a bad instruction (in the way that a compiler would, and we like that) he can’t separate (a) your concerns about the decision, which may have been horrible, and (b) his perception that your objection is a slight against him and his ability to lead. Programmers get regular feedback about mistakes they made; executives never do. That’s why they’re so much more arrogant than we are, despite our reputation.

Executives are horrible. No one can audit their decisions without ending up in a personal flamewar, so the independent thinkers quit or get fired and they’re left with a bouquet of yes-men.

6. Non-Managing Managers

You see a lot of these coming out of academia. They’re genuinely nice people. They work very hard, and they’re usually very smart. They don’t like holding authority. They don’t like telling people what to do. Over a small trust-dense team, they’re great. They’re available when needed, but would prefer to write code and read papers over being “The Boss”. Because they get out of their subordinates’ way and treat them like colleagues, the people below them do great work.

There’s a fatal flaw here, which is that they don’t scale well. Non-Managing Managers are great when they lead small groups of high-quality people who function as a genuine team, and their bosses notice this and give them more responsibility and more reports to manage. After about 20 people, however, the Non-Managing Manager gets overwhelmed. He didn’t sign up for all the meetings, for having to write reviews, and for all of the conflict resolution work that becomes an ongoing slog at that team size. At 60 people, there are Lord of the Flies dynamics going on below them and they’re oblivious to it. Non-Managing Managers fall down when young-wolf conflicts (i.e. Psychopaths trying to become managerial favorites) start to tear apart the team.

There are two subtypes of Non-Managing Managers. The first is what I call the Public Servant. He doesn’t like managing, but sees it as an interpersonal and administrative duty the company requires of him, and when he starts out on this path, he’s extremely good at it. The problem with the Public Servant is that, after a few years, he stops liking the job– and if he can’t enjoy managing the work, how can he motivate the reports who are doing that work? Public Servants are the subtype of managers who truly are good people, but they turn bitter after a few years. Once they get tired, they’re no longer able to see young-wolf conflicts before it’s too late, and they can’t police the team, much less motivate people to get things done.

The second is the Interesting Work Guy (or Gal). That’s me and, unfortunately, we tend to make above-average but not great managers.

The Interesting Work Gal is also a genuinely good person, but not the best manager. She’s sick of getting assigned slop work that bores her and hurts her career. She realizes that most companies create an arrangement wherein you either control the division of labor or it controls you, and she’d prefer the former. She becomes a boss because she wants dibs on the most interesting work, not because she wants to tell people what to do. Unfortunately, there are two problems with this. First, if she’s perceived to be “taking” interesting work from subordinates, they’ll sabotage her and she won’t know why. (She’s not maliciously or parasitically trying to “steal” work from them, and often she doesn’t know that others covet it; she just does what she wants to do.) Second, she’s loading herself up with two often-competing full-time jobs (people management, and the work she finds interesting) and will typically perform poorly at both.

By the way, I should delineate between the Interesting Work Guy, and the “I’m a big-picture person” Executive. They are fundamentally different. The Executive is a lazy narcissist who wants to be rewarded and admired without having to work for it, or for doing “the fun work” that has no difficulty in it. He’s “not a details person”, which really means that he doesn’t want to do anything but Make Decisions. He’s a parasite. You shouldn’t just take him out of management; you should fire him because he’s a lazy jerk who will poison your company. Interesting Work Guy, on the other hand, has a genuine desire to work. He may be a mediocre manager, but he is excellent as an independent employee. What Interesting Work Guy needs is hands-off management and an extremely rare (in the corporate context) degree of autonomy. He needs an escape from the manage-or-be-managed world, because he’ll only perform at his best when free on both sides from power relationships.

So after all that bad, is there good? Or, why do we need managers?

I’m a believer in Valve’s open allocation philosophy. I think that’s the right idea. Projects should compete for engineers (if no one’s willing to bet her career on something, it ain’t worth doing) rather than the reverse. However, I don’t actually believe that managers are completely unnecessary. Do managers, as a class, take more out of technology companies than they add? Yes, because incompetents are systematically selected for the role. However, I can’t in good conscience say that management is “completely unnecessary”. That’s not the case.

Social exclusion and competition will happen even if there aren’t official “managers”. Now, managers tend to make the situation worse, because one only needs to convince one person that a target is “not performing” to flush the target out of the company, but the behaviors that make modern work hell are age-old and don’t require official authority in order to exist. They’ll emerge naturally. Some people will start telling others what to do, threatening each other with social and professional extortions, and creating their own malignant credibility systems as gossip congeals into the “official record” of peoples’ job performances. Official management (with the power of unilateral termination) makes these extortions and mobbings much easier, because fewer people have to be brought in on them (sometimes without their knowledge), but they’d happen anyway.

A good manager (and these are extremely rare in technology; maybe 1 in 100) spots that shit from a mile away and shoots it from a bridge. Young-wolf conflict? Inappropriate delegation? Hostility and gossip? Zzzziiiip! Sniped. Fucker drops dead. Thud.

A manager is a cop. I think it’s obvious that most of the managerial archetypes above– even the obvious good-person cases like Interesting Work Guy– describe people who are not cut out for police work.

Of course, there are great people in law enforcement, and there are also some horrible ones. Some are dedicated public servants. Others do the job because they enjoy authority and power. Some are clean and some are dirty, and I’m not going to debate what percentage is which because I honestly have no idea, but the job is utterly necessary. You need a police force to keep society functioning. That said, successful societies make their jobs clear: police are there to enforce the law, not make it. When a society degrades to the point where police get to change the law on a whim, many become extortionist thugs rather than servants of the public. This becomes perverse and self-reinforcing, because dirty cops tend to gang up on and drive out the clean ones they can’t corrupt. Also, dirty cops have a huge advantage: they’re cheaper. In many of the more corrupt countries, policemen are poorly paid or unpaid; they make their money “on the economy”. Most of these dynamics have parallels in the corporate world. Management is an internal police force by design, and most of them are quite dirty.

Why is this? Why do companies produce a squadron of dirty police officers but very rarely end up with any clean ones? One of the answers is obvious: corporations aren’t built on top of laws, but on the shifting sands of reputation and credibility and personality cults, and only about 1 person in 500 has the charisma to be stably successful in such an environment without resorting to extortion. That’s the first problem. If you don’t have laws, you’ll have thugs taking a might-makes-right approach. Without snipers on the bridge to take out Psychopaths in the young-wolf/manager-favorite stage, the Psychopaths will end up with all the power, and by then it will be too late.

I think, however, the bigger problem is that there’s a fundamental dishonesty about what managers are supposed to do. Employees want to be mentored and supported. The very top brass (which adheres to a Theory X mentality) wants an internal police force to make sure that the proles don’t steal (including, in the white-collar context, “time theft” in prioritizing their own career goals over official work). Middling executives want point-men for various internal efforts (e.g. project managers). Managers themselves have differing desires, according to the archetypes above. Unfortunately, the result is an arrangement in which no one good wants to do police work, while everyone bad does. Why? Well, it’s not very glamorous in its own right. If you’re a clean cop, you’re just enforcing laws that other people wrote. In the public sphere, this is fine because there are sound reasons for supporting those laws. In a corporation, these “laws” are frequently themselves drawn out of the fears and desires of parasitic oligarchs (dirty top cops) with authority. Corporations usually end up expecting their internal police to do dirty work. Clean cops don’t want to do it. Dirty cops, on the other hand, love that. It’s not that they enjoy doing evil work on others’ behalf (except insofar as it gives them dirt on powerful people) but that they can usually use their positions of power for their own extortions, and their bosses rarely mind that they’re doing it.

What companies call management is actually a conflation of six jobs:

  1. Making large-scale decisions that affect a lot of people, especially pertaining to careers and incomes. (This is the job that Executives want.) 
  2. Public leadership and making the company look good to the outside world. (What CEOs really do.)
  3. Executing specific projects and goals. (Project management.)
  4. Serving as an internal police force to protect the company from its people. (Theory-X people management.)
  5. Serving as an internal police force to protect the company’s people from each other. (For example, by sniping young wolves.)
  6. Coaching junior employees, allocating work properly, and bringing talent forward. (Theory-Y people management, or mentoring.)

Unfortunately, these jobs all compete with each other. You can’t be someone’s mentor if you’re also responsible for preventing that person from “stealing” by prioritizing his career goals over explicit direction; that’s a huge conflict of interest. You need to pick one job or the other. Most managers end up prioritizing “managing up” and neglect jobs #5 and #6, whose beneficiaries are powerless. Some relish the authority that comes with #4; others want to grow toward the “fun work” of #1-3 and simply use the extortions afforded by #4– “you support my career goals or I fire you”– to get there.

Solving It

How does a company solve this problem? What does it take to get that thing of extreme rarity (at least in technology): genuinely good management? Here are some thoughts.

  1. You need actual laws. If the police are the law, that’s innately dysfunctional. But that’s what most companies are like. Unless an employee represents a direct lawsuit threat, companies are more than happy to give managers unilateral authority over that person’s career for the sake of “project expediency”. That’s a terrible arrangement. Companies should put serious work into setting an “Employee Bill of Rights” and a set of laws to protect people. Open allocation, for example, is a good law: employees have the right to allocate their time to any project that benefits the company.
  2. You need cops (very few) who don’t enjoy the work too much, but do it very well. The archetype coming to mind is Mike (an ex-cop turned hit man) from Breaking Bad. He doesn’t enjoy killing people. He’s given a job and he does it well. He doesn’t enjoy it, and he won’t do it for personal gain. Companies do need, after a certain point, to give at least one person the full-time job of sniping young wolves and preventing abuse of power vacuums. Also, the sniper should have no other job that conflicts with that one. (He can work on other things, such as recruiting or programming, but he’s evaluated based on his police work.) If he’s responsible for delivering on a project or he’s competing for executive roles against other people, he’ll be tempted to use that gun inappropriately. If he’s the type of guy who enjoys being “The Boss”, he’ll wave his gun around and no one will get anything done. If he has another job that’s important to his career, he’ll be tempted to play favorites, and that’s intolerable. He needs to be out of the way except when a young-wolf conflict is about to happen. I recommend hiring an older person (with no interest in positional advancement) for this job and, of course, not making it explicit that he or she is “the police”. I think it goes without saying it should not be a glamorous role; you don’t want people with giant egos doing that kind of work.
  3. You don’t need other “people management”, except for mentors. If you have principled police who take down young wolves and prevent inappropriate arrogation, incivility, and social exclusion– and who are, other than that, out of the way– you simply don’t need the other kinds of management titles (project managers, “tech leads”). Self-organization will take care of that stuff! What you do need is a culture that encourages mentorship: investment in the junior people who have no power (and little to offer, immediately) but will become core players of the organization in the future. Now, this is a role that your police can’t fulfill. I repeat, they cannot be doing it. It’s not that there’s a dissonance of roles, because junior employees and good police (who protect the weak, by sniping the young wolves who’d otherwise gaslight junior employees and become unofficial managers, or just eat them alive) are actually natural allies. There’s a different reason why police can’t be mentors. You can’t have them playing favorites, ever. That said, just as much as you need police to protect the transiently weak junior employees, you need a culture of mentorship that gives them the knowledge and power to become the highly-productive more senior ones who’ll make your company great.

In sum, the duties collected together under “management” need to be separated. Law enforcement is needed, but they should be doing just that, and not have alternative careers (project management, executive ambitions) that create gigantic conflicts of interest. Project management and decision-making belong with the people, and shouldn’t be hogged by bike-shedding narcissists who arrogate unilateral authority over that stuff because it’s “the fun work”. Finally, the most important thing is to have a mentorship culture that brings people along to the point where they can deliver major world-improving contributions. You do need police to protect the weak, but you also need mentors whose ultimate goal is to make everyone strong.


Why You Suck


This is a post about Why You Suck. Since this is the rhetorical “you” that refers to a least-assumptions unknown person, it’s also about me and Why I Suck. Or, perhaps I should say that it’s about why all of us tend toward Suck sometimes. What do I mean by Suck? I mean that we’re so terrified of failure and embarrassment that it pushes us toward mediocrity and, at the extreme, entrenched anti-intellectualism.

Take fine arts, for one topic, because it’s one that draws out a lot of peoples’ insecurities. It’s actually quite hard to get a sophisticated understanding of what makes, say, opera good or bad. I don’t have it. I enjoy opera, but I don’t have the palate to have an informed opinion on the quality of an individual piece. I like it, or I don’t, but there’s nothing I have in terms of sophistication or exposure that gives me an elevated skill at critique. If I pretend to have a deep knowledge of opera, then I’ll sound like a fool. Now, you might be saying: so what? What’s wrong with having a mediocre exposure to opera? Why would anyone be insecure about that? It’s not a sign of a lack of talent. I just haven’t specialized in the appreciation of it. Well, that’s because it’s not a defining part of who I am. However, there are a lot of people who realize how hard it is to become fluent in something, and therefore get discouraged prematurely. It bothers them, because they really want to get good. When it doesn’t happen quickly, a lot of people go the other way and say “it’s esoteric and not worth knowing”. I say, bullshit. You don’t know it, and I don’t know it, but that doesn’t make it “not worth knowing”.

Let’s talk about foreign languages, another place where this attitude emerges. The fear there isn’t that learning a new language is hard (without exposure, it is; with exposure, most people can do it, at any age). It’s about embarrassment. No one wants to look like an idiot by getting words wrong. People would rather use the language they know best. That’s reasonable, but some people take it a step further and decide that some topic in which they lack knowledge just isn’t important. I mean, how much does opera do for us in our daily lives? For fine arts, that’s just passive anti-intellectualism. When it comes to foreign languages and cultures, which have every bit as much validity as our own but are often rejected as “unimportant” by the insecure, it’s being an asshole.

We all end up doing this. We find something we’re not good at and the first thing we want to do is find a reason why it’s not important. That’s why intellectually insecure politicians cut funding for public universities; they hate those “ivory tower” academics who make them feel stupid. It takes a certain awareness to look at the world and say, “this place is so big that, for everything I learn, there will be a billion things worth knowing, that I never will, because there isn’t the time to get good at everything”.

So, many people go off in the opposite direction. They conclude that the things they’ll never get good at (often by choice) are just useless and retrench in their anti-intellectualism. This is especially severe in the software industry (don’t even get me started on anti-intellectualism) where, in many companies, taking an interest in doing things right as opposed to the empty-suit-friendly bastardization of “object-oriented programming” that has been in corporate practice for the past 20 years will often make you a weirdo, a pariah, one who cares too much.

By the way, d’ya want to know why so many of us software engineers have shitty jobs that make us unhappy? Well, we don’t have a strong professional identity. Doctors report to other doctors. Lawyers report to other lawyers (by law) unless they report directly to the corporate board. Engineers (actual engineers, not “software engineers”) report to engineers. We, on the other hand, report to professional managers who think what we do is “detail-oriented grunt work”. To add to the insult, they often think they could do our jobs, because they’re smarter than us (otherwise, we’d have their jobs). Why is this? Why are we, as software engineers, in such a degraded state? Perhaps it is because we, as a tribe, are anti-intellectual. If we don’t know what functional-reactive programming is, many of us are ready to conclude that it’s “weird” and impractical and “not worth knowing”. (Oh, and I’ve seen hard-core functional programmers take the same attitude toward C and low-level coding and it’s equally ridiculous.) Don’t get me wrong; there are a large number of individual exceptions to that. I enjoy programming– and I don’t identify fully as a programmer; I’m only a 96ish percentile programmer but I’m a fucking murderous problem-solver– and I care so much about keeping up my programming skills because I’m not anti-intellectual. And because it’s fucking cool. I had a boss (a very smart guy, but clueless on technology) once who said he refused to learn programming because he thought it’d kill his creativity. That’s that same anti-intellectualism on the opposing side. Perhaps it’s karma. Perhaps the anti-intellectualism that characterizes the average member of our tribe (defined loosely to include all professional programmers, the average of us being terrible not for a lack of talent, but for mediocre drive) makes us a perfect karmic match for that other anti-intellectual tribe: the executives and “big picture” moneymen who boss us around.

Okay, I’m going to get to the source of all this devastating mediocrity.

“A million dollars isn’t cool. You know what’s cool? Social status.”

Yeah, I know. That sounds ridiculous, no? I’ll explain it.

Some will recognize the quote from The Social Network, in which Justin Timberlake portrays Sean Parker as an ambitious uber-douche who says the above quote, but with “a billion dollars” instead of “social status”. I re-appropriated it, because I’ve wanted for a long time to understand why we as humans are so incompetent at, well, being human, and doing so required me to understand human status hierarchies. So I douche-ified the un-Parker-like quote even further. Wanting to be a billionaire is pretty douchey, but why would one want so much money? It’s social status, the driving ambition of the douchebag (and a lesser ambition, alas, for all of us).

Take unemployment. Why is it that during a three- to six-month stretch of joblessness, the average person (with men being much more sensitive to this effect) will do less housework and perform more poorly on side projects than when that person has a full-time job? Most jobs don’t add much to a person’s life. A monolithic and inflexible obligation, usually toward ingrates, that by explicit design makes diversification of labor investment almost impossible, is hard to call a good thing for a person. Society has actually had to work at it to make the alternative (joblessness) so embarrassing that it’s worse for the vast majority of people. The social status penalty of not having a job must be so severe that people refuse to tolerate joblessness. One boss fires ‘em; they look for another. However, in the long term, this exacerbates the real underlying problem, which is that they’re so job-dependent that they’ve forgotten how to serve others (in trade, and often for personal benefit) in any other context. Anyway, my point here is that the embarrassing nature of joblessness has been made so severe that it’s worse for a typical person’s well-being (and out-of-work productivity) than spending 8-10 hours in an office.

Our minds and our bodies are constantly taking signals as to our social standing, and reacting in ways we often can’t control. I’ve often believed that, at the least, mild depression emerged as an adaptive response for surviving transient low social status. Of course, the disease depression is something different: a pathology of that mechanism, which might trigger for no reason. I only mean to suggest that the machinery might be there for an evolutionary purpose. That also, to me, explains why exercise is so effective in treating mild depression. It tells the body that the person is of high social status (invited on the hunt) and causes the brain to perk up a little bit.

People often say, “I don’t give a fuck what other people think about me”. Bullshit. If that were true, you’d never say it– almost by definition, you wouldn’t, because it’s something people say to seem badass. Unfortunately, it misses the point. First, it’s dishonest. We’re biologically programmed to care what others think about us. To be ashamed of it is to be ashamed of our own humanity. Second, there’s good badass and bad badass and insane badass. Insane badasses don’t care what others think of them because they suffer frank mental illness that overrides even the most blunt social signals. Bad badasses generally care quite a bit about their own social status; they just don’t have much empathy and therefore only care about others’ opinions when it interferes with them getting what they want. Good badasses, on the other hand, are empathetic but they are also committed to virtue even in the face of unpopularity. All three types have a claim to not caring (as much as normal people do) what others think, but only one of those three is desirable.

Why do people make such a boast about not caring what others think? That’s because we abstractly admire that sort of emotional independence. In practice, it can go either way whether that’s a good trait. If you really don’t care at all about how your actions affect others, then you’re an asshole. Now, I’m generally on-board with a certain virtuous investment in actions over results, for sure. I also take a certain pride (not always to my benefit) in virtuous actions that lead to socially adverse results– because I am morally and intellectually superior to, at least, the dominant forces in our society (I can’t adequately compare myself either way to “the average person” because I don’t know him, but I am demonstrably superior to those running this world and that’s an obvious fact of my life) and I revel in it. I also still think that if you don’t care at all to pick up signals about how your actions are really affecting the world, then you’re just being a dick. You should care– just a little bit, but not zero– what other people think of you, especially as pertains to your effect on others. If you are helping people and suffering social adversity, you might be virtuous and that adversity might exist because the people who fling it at you are the epitome of vice and parasitism. On the other hand, social adversity might also be a sign that you’re doing things wrong. You should at least listen to the signal. If you understand its source and recognize that source as not worth caring about, then fine. Not listening makes you a jerk, however. 

So… I hope I’ve shot down the “I don’t care what others think” defense. I’m more badass than most people who say this and I care what other people think about me.

Now, I want to go back to “You know what’s cool?” No one can visualize a billion dollars. People with that much wealth never even see the pile of cash, except for Walter White. That billion-dollar net worth is just a linear combination of a bunch of other numbers about them strewn across the world. What they own, by entity and percentage. Who owes them money. To whom they owe money. That is a kind of social status, but a stable and legally recognized one called “ownership”. So there we are. All of economics is predicated on the idea that people want resources and money; and one of the biggest reasons, I would argue, that they want it is psychological: they want the social status. If that seems unduly negative, it shouldn’t be. Social status is the only reason I have a computer to write this post on, or a cup of coffee to drink in the morning. I’m able, because I speak certain natural and social languages and have certain skills (that I acquired by being born into the right country and family, distinguishing myself early in academics, etc.) to get people to pay me for services that others could perform more cheaply (most of those cheap competitors wouldn’t do it as well, but neither would most of the higher-status, better-paid ones). Gift economies don’t scale. We can interact with the market only if we can prove by certificate (e.g. money) that someone thinks we have some status or value (making us worthy of employment or ownership of an asset) and so all of us need some kind of status, even if it’s just a little bit. It’s horrible that the world works that way, and that a person of merit might fail due to extreme lows of social status, but it’s how things work right now.

Now, a billion dollars isn’t cool. Even the disgusting rich douchebags don’t actually sleep (to quote Don Draper) on “a bed made of money”. Money is paper that would disgust us (because of all the places it has been) were it not for a certain social value. Rather, it’s the social elevation that drives people. “Money” is not the root of all evil; social status is. That’s what most people, and especially douchebags, find “cool”. Green cotton paper, even at the 10-ton level a billion dollars would require, has little to do with it.

In fact, we can tie social status to all of the seven deadly sins:

  • wrath: people use threatening emotions, postures, and violent actions to defend social status. 
  • envy: people covet social status and delight in the destruction of higher-status individuals.
  • sloth: unconditional “passive” social status (i.e. that doesn’t require work) is always preferable over kinds that are contingent on productive activity, which one might lose the ability to perform at an acceptable exchange rate (health problems, disinterest, superior competition).
  • lust: one of the primary reasons for high-status people to seek even higher levels of status (to the detriment of social and mental health) is the desire to indulge in sexual perversion.
  • greed: this one’s obvious. Most of the assets that inspire greed confer social status. People are rarely greedy toward things that don’t.
  • pride: also obvious. People create an outsized self-image out of a desire for deserved high status, then expect the world to conform to their grandiose self-perceptions.
  • gluttony: defined literally, this is an odd-man-out in modern times because obesity lowers one’s social status, but if we extend the metaphor to material overindulgence, we see it as a form of posturing. Conspicuous consumption enables a person to broadcast wealth, and with it, status.

Of course, all of those sins are also sources of Suck– yours and mine. They blind us, make us do short-sighted and stupid things, and generally leave us bereft of moral courage, curiosity, creativity, and virtue. It turns out that social status is a driving force behind what makes humans horrible. The concern for social status seems, in many people, to be limitless and only more productive of vice and evil as they gain more of it. Satiation in most commodities sets in, and people stop being horrible. It’s rare to see two people fight over a piece of bread in an upscale restaurant, because average Americans are rich enough not to turn to vice over food. With social status, that’s not the case for many people. They don’t reach satiation and revert to virtue, but get worse as they climb and (a) satiation proves elusive, while (b) the competition for status becomes fiercer as they climb the ladder. They go beyond Suck and into outright Vice. Yeshua of Nazareth was right on: you cannot serve God and Mammon.

But back to Suck…

Vice is an interesting topic in its own right, but I’m here to talk about Suck. You and I both Suck. I don’t think I’m a vicious or bad person, and I doubt most of my readers are. However, we do things that are counterproductive. We avoid learning new technologies because “I might not get any good at it, and just embarrass myself.” We might do the right thing despite threat of social unpopularity, but it’s really hard and we spend so many clock cycles convincing ourselves that we’re doing the right thing that it takes the edge off of us. It’s almost impossible to excel at anything in this world. Why? Well, excellence is risky.

Something I read on Hacker News really impressed me. It explained a lot. I think it resonates with all the top-10% programmers out there who are constantly pushing themselves (often despite economic incentives, because there is a point where being a better programmer hurts your job security) to be better. Here it is (link):

No. Burnout is caused when you repeatedly make large amounts of sacrifice and or effort into high-risk problems that fail. It’s the result of a negative prediction error in the nucleus accumbens. You effectively condition your brain to associate work with failure.

Now, on the surface this is true. Failure is extremely demoralizing. However, as I think about it, it’s not project failure itself that brings us down. It’s annoying. It’s a learning experience that doesn’t go the way we hoped. In the discovery process, it usually means we discovered a way not to do things, which has lower information-theoretic value than a way to do them. However, I do not think failure itself is the major problem. I think people who are used to doing hard things can learn to accept it in stride.

I am constantly trying hard things and attacking high-risk problems. I took difficult proof-based math exams in high school and college where very few people could solve even half of the problems in the allotted time. I’ve tried a great many things with sub-50-percent chances of success, and had some hits… and a lot of misses. Failure is difficult. It’s a struggle. It’s already hard without the world conspiring to make it harder. But it’s the social status damage that comes out of failure that really stops a person. That’s the force that pushes people toward self-protecting careerist mediocrity as they get older. Yes, it’s learned helplessness, but it’s not mere project failure that induces the neurological penalty. A more supportive, R&D-like, environment (as opposed to the mean-spirited caprice of contemporary private-sector social climbing) could mitigate that. (I worked at a think tank once where the unofficial motto was “bad ideas are good; good ideas are great” and that supportiveness motivated people to do some outright excellent work.) Failure isn’t what ruins people. It’s the dogshit heaped on a person by society after a project failure that has that effect. After a while, people get tired of the (transient, but extremely stressful) low social status that follows a failed project, and give up on high-risk excellence.

Going forward

Awareness of Suck and its causes is the first step toward overcoming it. Denying that one experiences it personally is not generally helpful, because almost everyone Sucks to some degree, and there are powerful neurological and social reasons for that. Admitting vulnerability to it is like admitting physical inferiority to polar bears; none should be ashamed of it, it’s just how nature made us.

Why are people so mediocre, both in moral and creative terms? We now have the tools to answer that question. We know where Suck comes from. And we can work, a little bit each day, on making ourselves not Suck. 

More important, however, is finding a way not to induce Suck in other people. I’m going to pull something else from Hacker News that I like a lot, this time from the Hacker School‘s social rules. I’m not going to post all of them; let me just give a flavor:

The first of these rules is no feigning surprise. This means you shouldn’t act surprised when someone says he doesn’t know something. This applies to both technical things (“What?! I can’t believe you don’t know what the stack is!”) and non-technical things (“You don’t know who RMS is?!”).

I’ll admit that I’m guilty of this, too. My eyes glaze over when another programmer mentions Visitor or Factory design patterns and doesn’t seem to be trolling me. Maybe I’m slightly better, in that Visitor usage is a positive symptom of idiocy while not knowing something is a negative symptom, and we all have an infinitude of negative idiocy-signals (because there are infinitely many things we don’t know and, arguably, should). Or maybe not. Maybe I should stop being a dick and assume (despite what Bayes would say) that the programmer who says “Visitor pattern” with a straight face is a talented person who just never learned better.

Other behaviors explicitly discouraged are cosmetic correction (over-cutting someone’s essentially correct statement with an irrelevantly more correct one) and backseat driving. This is good. Hacker School is making an admirable attempt to clear out the social processes that sometimes make intermediate-level programmers embarrassed by the gaps in their knowledge, and thus risk-averse. That’s a great thing, because after a while, people who are made to feel insecure about gaps in knowledge tend to fly the other way, and that produces the “that topic isn’t important” anti-intellectualism.

Hacker School’s getting it right. If people aren’t afraid for their own social status, they’re more inclined to take risks, grow faster, and excel. This is an ideology that gets a lot of mouth-honor, but few people follow it.

Even VC-istan claims to “embrace” failure, but the reality is that “fail fast” is often an excuse for impulsive firing (without severance, typically) and “lean startup” often means “we want you to work 90 hours a week and be your own assistant instead of working 60 and hiring one”. The reality is that VC-istan’s collusive reputation economy allows it to be anything but tolerant of business failure, even the good-faith kind.

The only work culture in which project failure is tolerated is the R&D one. Most companies these days have mean-spirited, fast-firing cultures where a project failure results in someone getting fired, demoted, or punished for it. Sometimes there’s no one at fault and someone just gets randomly hit. Or, when there is someone at fault, it might not be that person who suffers (it usually isn’t, as bad managers are great at shifting blame). The result of the mean-spirited, fast-firing, performance-reviews-with-teeth structure of the modern corporate workplace is that competent people rarely invest themselves in efforts that might fail, even if successes will be enormously beneficial. Instead, they strive to put themselves on highly visible projects, but those with enough momentum that they are extremely unlikely to fail. The result of this is that project genesis has almost no ambition in it, and most of the best people aren’t coming up with ideas anyway, but looking to draft on someone else’s. Of course, by the time a project shows sure visible victory, so many people are aware of it that the competition to be “in on it” is cutthroat. (Closed-allocation companies aren’t about doing work, but about holding positions and being “on” important projects.)

If you have the open-allocation, high-autonomy R&D culture where good-faith failure is treated as a learning experience and people can move on gracefully, you get a sharing of knowledge because people are no longer pressed to hide failures. If you have anything else in a white-collar environment, however, you’re likely to end up with a blame-shifting culture. That’s where Suck really starts to assert itself, and take control.


Constructing Computation 1: Symbols, nonsymbols, and equality.


I want to do something different than my normal writing. I want to construct programming. What do I mean? I want to start from some basic principles and build up to the familiar. I won’t be proving anything, and  I’m not going to claim that what I’m building is necessarily the “right” way to do things– this is just for fun– but I want to explain how programming feels when one strips away the complexities and gets to the basics. This is a model of programming designed to work for anyone interested in learning the fundamentals. This will either be very interesting or loopy and divergent, and I will be editorially incompetent when it comes to determining which is the case.

To start, let’s create an object language (a world) where we have the following:

  • All objects (that is, all things in the world) are nonsymbols or symbols.

That’s easy enough. It’s tautological: it’s a symbol, or it’s not one. However, the distinction is important to us. Now, some computational models only have one category of “thing” (e.g. lambda calculus, in which everything is a function) and I’m starting with two. Why? With a symbol, we can “inspect” it and immediately know what it is. Nonsymbols are more like “black boxes”; you can observe their behavior, but you can’t know “everything” about an arbitrary nonsymbol.

Let’s start with some of the things we can do with symbols.

  • There is a function succ that takes a symbol x and maps it to a different symbol x′ (shorthand for succ[x]). There are no duplicates in the infinite set {x, x′, x′′, …}. We call that set Gen[x], and call x and y incomparable iff x ∉ Gen[y] and y ∉ Gen[x]. We call a symbol x basic if there is no y such that y′ = x. Applied to a nonsymbol, succ returns a special symbol, _error (which is basic, so there is no confusion between this return and a successful call to succ).
  • There is a symbol called 0 and, for any “English language word” (nonempty string of letters and underscores) “foo” there is a corresponding symbol _foo. These are all mutually incomparable and basic. The most useful of these will be _error.
    • We call 0′, 1; 0′′, 2; and so on, so we have the natural numbers as a subset of our symbol language.
  • There is a meta-level function called eq that takes two symbols x and y and returns _true if they are identical and _false if they are not. It returns _error if either argument is a nonsymbol.

What makes symbols useful is the eq function, because it means that we can have total knowledge about what a symbol is. We know that 0 is 0, and is not 1. We know that _true′′′ (call that _true3) is not _error or 5. We have a countable number of symbols, and assume that it’s one computation step to call eq on any two symbols (and get _true or _false). We also are going to allow another function to exist:

  • There is a function called comparable that takes two symbols x and y and returns _true if they are comparable, _false if they are not, and _error if either argument is a nonsymbol. This is assumed, likewise, to require only one computation step. The main purpose of this is to give us a natural? predicate, which we’ll use in code (later) to identify natural numbers. We also allow ourselves pred (which returns x when given x′, and _error when given a basic symbol, like 0) and basic (which maps each symbol to the single basic symbol in the equivalence class defined by the comparability predicate) and consider those, as well, to be one computation step. 
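As a concreteness check, the primitives above can be run as a small sketch. The representation here (a symbol as a pair of its basic symbol’s name and a succ-count; a nonsymbol as a Python callable) is my own illustrative assumption, not part of the construction.

```python
# Sketch of the symbol primitives. A symbol is modeled as (base, n):
# the basic symbol's name plus the number of succ applications.
# Nonsymbols would be modeled as Python callables.

def is_symbol(x):
    return isinstance(x, tuple) and len(x) == 2 and isinstance(x[0], str)

ERROR = ("_error", 0)   # the special basic symbol _error
ZERO = ("0", 0)

def succ(x):
    if not is_symbol(x):
        return ERROR            # succ of a nonsymbol is _error
    base, n = x
    return (base, n + 1)

def pred(x):
    if not is_symbol(x) or x[1] == 0:
        return ERROR            # basic symbols have no predecessor
    base, n = x
    return (base, n - 1)

def basic(x):
    # maps a symbol to the basic symbol of its comparability class
    return (x[0], 0) if is_symbol(x) else ERROR

def eq(x, y):
    if not (is_symbol(x) and is_symbol(y)):
        return ERROR
    return ("_true", 0) if x == y else ("_false", 0)

def comparable(x, y):
    if not (is_symbol(x) and is_symbol(y)):
        return ERROR
    return ("_true", 0) if x[0] == y[0] else ("_false", 0)

def natural(x):
    # natural? : comparable to 0
    return comparable(x, ZERO)
```

Note that eq and comparable here really are constant-time, matching the one-computation-step assumption.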

Symbols are “data atoms”. We have something nice here. In a typical object-oriented language, a “thing” can have all sorts of weird properties– it might have changing state or unusual access behaviors, and it might have interacting local functions called methods. The space of things that can be objects is also extensible by the user. It can be a string, a file handle, or an abstract object used solely for perfect identity (i.e. comparing equal only to itself, since it holds an address on the machine where it lives that nothing else can hold). There’s a lot less to symbols. The set of them that exists is limited (countably infinite) and pre-defined. You can’t do much with them. Strings, negative numbers, stateful entities, and other such things live at a higher level. We’ll get to those; they’re not symbols in this language.

Nonsymbols are not as well-behaved with regard to eq, the universal equality function. Given nonsymbols, it returns _error, giving us no information. Why? Because for nonsymbols, one cannot compare them for equality in the general case. The set of nonsymbols is not as well behaved. A nonsymbol can’t be observed for its “true” value directly. So let’s explain what we can do with nonsymbols:

  • There is a function called apply that takes two arguments. If its first argument is a symbol, it returns _error. However, the first argument will almost always be a nonsymbol. The second argument may be a symbol or nonsymbol, and likewise for the return. In this light, we can view nonsymbols as “like functions”; we liken a nonsymbol ns to the function mapping x to apply(ns, x). The purpose of apply, then, is to give us the interpretation of nonsymbols.

We write a nonsymbol’s notation based on its behavior under apply, so one nonsymbol is {_ => _error}, which returns _error for any input. That’s not especially interesting. Another one (which I’ll call “CAT”) is {0 => 67, 1 => 65, 2 => 84, _ => _error}. What I’m writing here in the {} notation is in our natural language, not code. However, I want to make something clear. About nonsymbols that we create and “own”, we can know the full behavior. We know that {_ => 0} is constantly zero. We just can’t assume such “niceness” among arbitrary nonsymbols. I’ll also note that for every nonsymbol we can describe, there are infinitely many that we can’t describe.
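These examples can be made executable under the assumption (mine, for illustration) that nonsymbols are Python callables and apply is the only way to interact with them:

```python
# Nonsymbols modeled as callables; _error modeled as a string sentinel.

ERROR = "_error"

def apply(f, x):
    if not callable(f):
        return ERROR        # apply with a symbol first argument is _error
    return f(x)

# {_ => _error}: returns _error for any input
always_error = lambda _: ERROR

# CAT = {0 => 67, 1 => 65, 2 => 84, _ => _error}
CAT = lambda x: {0: 67, 1: 65, 2: 84}.get(x, ERROR)

# Even the self-referential A = {0 => A, _ => _error} is expressible:
def A(x):
    return A if x == 0 else ERROR
```

Applying A to 0 really does return A itself, which is exactly why a naive printer would loop forever on it.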

Let’s look at a more pathological nonsymbol A:

A = {0 => A, _ => _error}

It’s self-referential. Is that allowed? There’s no reason it can’t be. Under apply with argument 0, it returns itself. This does mean that a naive “print nonsymbol” utility on A might fail, printing:

{0 => {0 => {0 => {0 => {0 => {0 => ...[infinite loop]...}}}}}}

There are also less-pathological nonsymbols with interesting behavior that can’t be written explicitly as this-to-that correspondences. For example, the succ function is realized by the nonsymbol:

{0 => 1, 1 => 2, 2 => 3, …; _error => _error‘, … ; _true => _true‘; …; $nsym => _error}

and eq by:

{0 => {0 => _true, $sym => _false, _ => _error}, 1 => {1 => _true, $sym => _false, _ => _error}, …, $nsym => {_ => _error}}

where $sym matches any symbol and $nsym matches any nonsymbol. 

However, there are nonsymbols without such easy-to-write patterns. For example, here’s one:

{ [Symbol or nonsymbol N : eq[_true, apply[natural?, apply[N, _size]]] does not return _true] => _error,
  [Nonsymbol N : eq[_true, apply[natural?, apply[N, _size]]] returns _true] =>
    {$sym => _error, $nsym =>
      {_size => k, 0 => apply[$nsym, apply[N, 0]], ..., k-1 => apply[$nsym, apply[N, k-1]]}}}

That’s an extremely important nonsymbol. It’s called “VectorMap” and it works like this:

A := {_size => 3, 0 => 5, 1 => 7, 2 => 2, _ => _error}

F := {0 => 1, 1 => 2, ...} # that's succ (but VectorMap can take any nsym)

apply[apply[VectorMap, A], F] => {_size => 3, 0 => 6, 1 => 8, 2 => 3, _ => _error}
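The VectorMap example above can be run as a sketch, again under the illustrative assumption that nonsymbols are Python callables:

```python
# A runnable sketch of VectorMap over callables-as-nonsymbols.

ERROR = "_error"

def apply(f, x):
    # apply with a symbol (non-callable) first argument returns _error
    return f(x) if callable(f) else ERROR

def make_vector(elems):
    # a nonsymbol answering _size and 0..k-1, and _error otherwise
    table = {"_size": len(elems), **dict(enumerate(elems))}
    return lambda x: table.get(x, ERROR)

def vector_map(v):
    k = apply(v, "_size")
    if not (isinstance(k, int) and k >= 0):
        return ERROR                    # v is not in the Vector context
    def with_f(f):
        if not callable(f):
            return ERROR                # $sym => _error
        return make_vector([apply(f, apply(v, i)) for i in range(k)])
    return with_f

# The example from the text: succ over {_size => 3, 0 => 5, 1 => 7, 2 => 2}
succ_nsym = lambda x: x + 1 if isinstance(x, int) else ERROR
A = make_vector([5, 7, 2])
B = apply(apply(vector_map, A), succ_nsym)
# B behaves as {_size => 3, 0 => 6, 1 => 8, 2 => 3, _ => _error}
```

Note that this sketch copies only the fields the Vector context prescribes, which is exactly the “fields we end up not copying” caveat discussed next.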

Given a nonsymbol understood to follow a certain set of rules (that _size => integer k and only symbols 0, 1, …, k – 1 are supported) it produces a new vector that is the image of the old one. What if those rules don’t apply? Then there are fields we end up not copying. It is just not possible to ask any questions about an arbitrary nonsymbol’s behavior over all inputs. If we created nonsymbol ns, we can verify that it’s compliant with invariants we created for it; however for an arbitrary one, we cannot answer questions that require us to query across all inputs. In other words, we’re not allowed to ask questions like:

  • Does nonsymbol ns return y for some argument x
  • Do nonsymbols ns1 and ns2 differ in returns for some argument x? (This is why nonsymbol equality comparisons can’t be made.)
  • Is nonsymbol ns constant (that is, is ns(x) identical to ns(y) for all x and y)?
  • Is there some finite number k for which ns^k(0) — that is, ns applied to 0 k times– is a symbol?

To have an intuition for the theoretical reasons why such operations can’t be done on nonsymbols, consider (and just take on face value, for now) that some nonsymbols N exist of the form {[n is a natural number corresponding, in Godel encoding, to a proof of an undecidable statement] => 1, _ => 0}. In that case, its constancy is undecidable. That should explain the first three impossibilities above; the fourth is an expression of the Halting Problem, which Turing proved undecidable. 

The problem with all such questions is that we don’t have an infinite amount of time, and those require an infinite amount of search. Symbols alone are countably infinite, and nonsymbols exist in a class so large that it cannot be called a set. (Nonsymbols, unlike mathematical sets, can contain themselves.) We are somewhat lucky in that the set of nonsymbols that we can describe in any formal language is still countable, reducing how much we have to worry about in the real world; still, these questions remain unanswerable (Godel, Church, Turing) even over that (countable) set of nonsymbols we can build.

Because nonsymbols are unruly in the general case, we find ourselves wanting to define contexts (which will later be formalized better as types; contexts here are informal English-language entities we use to understand certain kinds of nonsymbols) which are principled processes of calling apply on nonsymbols (making observations) to recover all information considered relevant. For example, above we used the Vector context:

Observe _size. If it's not a natural number, then it's not part of the Vector context. If it is some natural number k, then observe 0, 1, ..., k - 1.

This gives us, additionally, an equality definition for Vector. If all observations are identical according to this process, then the Vectors are considered equal. This means that these:

{_size => 1, 0 => 18} (from now on, assume _ => _error unless otherwise specified.)

and

{_size => 1, 0 => 18, _metadata => _blow_up_world}

are equal in the Vector context, even though the latter has gnarly _metadata. In some other context, they might not be equal.

For a nonsymbol to be “not part of the Vector context” means that one can’t interpret it as a Vector. Nothing prevents us from trying to do so computationally, because we’ve created a language in which even garbage function calls, such as apply with its first argument a symbol, return something– specifically, _error. For example, using vector-equality semantics on {0 => 1, 1 => _empty} and {0 => 2, 1 => _empty} leaves us with nonsensical instructions, because observing _size yields _error, and “0, …, _error – 1” doesn’t make any sense. Computationally, a vector-equality function would be expected to return _error in such a case. If the machinery responsible for looping over “0, …, k-1” weren’t guarded against erroneous values, one can imagine a naive interpretation that falls into an infinite loop, because succ^k(0) never reaches _error.
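Here is a sketch of equality “in the Vector context”: compare only the observations the context prescribes, ignore everything else (like the gnarly _metadata field above), and return _error when either argument fails to live in the context.

```python
# Vector-context equality over callables-as-nonsymbols (an illustrative
# modeling assumption, as before).

ERROR = "_error"

def vector_eq(u, v):
    ku, kv = u("_size"), v("_size")
    if not (isinstance(ku, int) and ku >= 0) or \
       not (isinstance(kv, int) and kv >= 0):
        return ERROR          # at least one is not in the Vector context
    if ku != kv:
        return False
    return all(u(i) == v(i) for i in range(ku))

plain = lambda x: {"_size": 1, 0: 18}.get(x, ERROR)
gnarly = lambda x: {"_size": 1, 0: 18,
                    "_metadata": "_blow_up_world"}.get(x, ERROR)
# vector_eq(plain, gnarly) is True: _metadata is outside the context.
```

The guard on _size is exactly what prevents the naive infinite loop described above.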

Here’s another context called LL for now.

The symbol _empty is in LL, and no other symbols are. For a nonsymbol N, observe N(0). If N(1) = _empty, terminate. Otherwise, observe N(1) in the LL context.

So, here’s something in the LL context:

{0 => 67, 1 => {0 => 65, 1 => {0 => 84, 1 => _empty}}}

What we have, there, is a lazy linked list corresponding to the word “CAT” (in Ascii).
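A sketch of walking the LL context: observe N(0), then recurse on N(1) until _empty. The helper names cons and ll_to_list are hypothetical, introduced only for this illustration.

```python
# Walking a nonsymbol in the LL context.

ERROR, EMPTY = "_error", "_empty"

def cons(head, tail):
    # {0 => head, 1 => tail, _ => _error}
    return lambda x: head if x == 0 else (tail if x == 1 else ERROR)

def ll_to_list(n):
    out = []
    while n != EMPTY:       # _empty is in LL; observe 0, recurse on 1
        out.append(n(0))
        n = n(1)
    return out

cat = cons(67, cons(65, cons(84, EMPTY)))
# ll_to_list(cat) == [67, 65, 84], i.e. "CAT" in Ascii
```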

In practice, we’ll also want to keep a preferred context living with our data, so we might see nonsymbols that look like this:

{_type => _vector, _size => 64, 0 => 83, …, 63 => 197}

That gives us an Object1 context for equality:

Two symbols x and y are equal if eq[x, y]. A nonsymbol and symbol are never equal. For two nonsymbols, observe _type for each. If they are both equal symbols, "look up" the corresponding context (e.g. Vector for _vector) and apply it. If they are differing symbols, then they are unequal. If this lookup fails (i.e. their _type is unknown) or either has a nonsymbol observation _type, then they don't live in the Object context and this comparison fails.

By “look up”, I’m assuming that we have access to a “golden source” of interpretations for each _type observation. In the context of distributed systems, that turns out to be a terrible assumption. But it will work for us for now. Even still, the above context is not always satisfactory. Sometimes, we want cross-type equality, e.g. we want

{_type => _list, 0 => 67, 1 => {_type => _list, 0 => 65, 1 => {_type => _list, 0 => 84, 1 => _empty}}}

and

{_type => _vector, _size => 3, 0 => 67, 1 => 65, 2 => 84}

to be treated as equal in our Object context, since they both represent the word “CAT” (in Ascii). Well, that gets gnarly quickly. Now our Object context admits N^2 different equality contexts, because we need one for each pair of possible _type values. It gets even worse if we allow users to extend the definition of Object, making its list of supported types potentially infinite.

We treat nonsymbols as lazy, which means that their return values are provided on an as-needed basis. They don’t exist in a “completed” form. This is important because many of them contain an infinite amount of data inside them. For example, one has already seen how basic linked lists behave:

{0 => 212, 1 => {0 => 867, 1 => {0 => 5309, 1 => _empty}}}

but then there are also lazy streams that have infinite depth, such as this one:

{0 => 2, 1 => {0 => 3, 1 => {0 => 5, 1 => {0 => 7, 1 => {0 => 11, 1 => {0 => 13, 1 => {…}}}}}}}

which contains the entire set of prime numbers.

This laziness has a lot of benefits. It gives us one of the most powerful nonsymbols out there, which I’m calling Branch:

{[nonsymbol N s.t. N(0) is _true] => {T => {_ => T(0)}}, 
  [nonsymbol N s.t. N(0) is _false] => {_ => {F => F(0)}}}

A thunk is a nonsymbol whose return value is constant, e.g. Thunk[x] = {_ => x}. Since the argument used to “pierce” it is irrelevant, we can use 0 by convention (or _unit if we wish to suggest, more strongly, a never-used input value). The design principle of the Branch nonsymbol is to take three thunks. The first (condition) is observed at 0 (that is, evaluated) no matter what. If it returns _true, we evaluate the second thunk and never the third. If it’s _false, then we ignore the second and evaluate the third.

We use apply3[x, y, z, w] as shorthand for apply[apply[apply[x, y], z], w] and we note that:

apply3[Branch, Thunk[_true], Thunk[42], Thunk[_blow_up_world]]

never, in fact, blows up the world. It returns 42. This gives us conditional execution, and it will be used to realize one of the most important language primitives, which is the if statement.
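A sketch of Thunk and Branch, with Python’s True/False standing in for _true/_false (an illustrative substitution) and an exception standing in for _blow_up_world. Because the unchosen thunk is never pierced, the dangerous branch is never evaluated.

```python
# Lazy conditional execution via thunks.

def thunk(x):
    return lambda _: x          # constant nonsymbol {_ => x}

def blow_up_world(_):
    raise RuntimeError("boom")  # stands in for Thunk[_blow_up_world]

def branch(cond):
    # apply3[Branch, cond, t, f]: evaluate cond at 0, then pierce
    # exactly one of the remaining two thunks.
    if cond(0) is True:
        return lambda t: lambda f: t(0)
    return lambda t: lambda f: f(0)

result = branch(thunk(True))(thunk(42))(blow_up_world)
# result == 42; blow_up_world is never called
```

This is the whole trick behind realizing the if statement in a language where nonsymbols are lazy.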

I’m going to talk about one other context, which will seem radically disconnected with how we usually think of the concept. I’m going to call it “Array” even though it seems nothing like an actual computational array (i.e. a block of contiguous memory):

No nonsymbols are in the Array context. Symbols not comparable to zero (i.e. not natural numbers) are not in the Array context either. If a symbol is a natural number, call it k, and observe the largest integer m such that m^2 <= k. If k - m^2 >= m, then it's not in the Array context. Otherwise, observe k - m^2.

In other words, only a subset of the natural numbers are in this context. The first observation gives us a maximal value, and the second gives us a value corresponding to the data it contains.

For example, 281474976842501 is an array. That number is equal to 2^48 + 2 * 2^16 + 3 * 2^8 + 5; so m = 2^24 and our second observation is 131845, which we interpret as the 3-byte array [2, 3, 5].
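The two Array observations can be computed directly; this sketch uses Python’s math.isqrt for the largest m with m^2 <= k.

```python
# Recovering the Array context's two observations from a natural number.

import math

def array_observe(k):
    if not (isinstance(k, int) and k >= 0):
        return "_error"                # not in the Array context
    m = math.isqrt(k)                  # largest m with m*m <= k
    data = k - m * m
    if data >= m:
        return "_error"                # fails the k - m^2 < m requirement
    return (m, data)

# The worked example from the text:
# 281474976842501 = 2**48 + 2*2**16 + 3*2**8 + 5,
# so m = 2**24 and the data observation is 131845,
# interpretable as the 3-byte array [2, 3, 5].
```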

We now have an adjective we can apply over contexts. A context is Arrayable if:

  1. there is some principled way of encoding all things that live within it into an Array, and
  2. there is a fixed set of observations after making which, for all things that live within it, we will have a finite upper bound on the integer value of the array computed, and
  3. each semantically different value will map to a single Array, distinct from any other, but any values that are equal within that context will map to the same Array.

Vector is not Arrayable, but here are two examples of Arrayable contexts, BigInt and AsciiString:

BigInt: Observe _upper. Observe _data. If _data exceeds _upper, the nonsymbol does not live in the BigInt context. This is Arrayable as n(_upper)^2 + n(_data).

AsciiString: Observe _size and, if it is a natural number, call it k. (Otherwise, the nonsymbol is not an AsciiString.) Observe 0, ..., k-1. If all observations are natural numbers from 0 to 127 inclusive, you have an AsciiString. Otherwise, the nonsymbol does not live in this context. If it does, this is Arrayable as: 128^0 * n(0) + 128^1 * n(1) + ... + 128^(k-1) * n(k - 1) + (128^k)^2.

Here is an AsciiString nonsymbol for “CAT”:

{_size => 3, 0 => 67, 1 => 65, 2 => 84}

Its corresponding Array (integer) is 2^42 + 84 * 2^14 + 65 * 2^7 + 67 = 4398047895747.
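The AsciiString-to-Array encoding can be checked mechanically:

```python
# The AsciiString Arrayable encoding from the text:
# 128^0 * n(0) + 128^1 * n(1) + ... + 128^(k-1) * n(k-1) + (128^k)^2.

def ascii_string_to_array(codes):
    if any(not (0 <= c <= 127) for c in codes):
        return "_error"            # not in the AsciiString context
    k = len(codes)
    return sum((128 ** i) * c for i, c in enumerate(codes)) + (128 ** k) ** 2

# "CAT" is [67, 65, 84]; its Array is 2^42 + 84*2^14 + 65*2^7 + 67.
```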

Whither code?

So far, we’ve dealt only with data. I haven’t gotten yet to code: what it is, and why we like it. Code is what we call a nonsymbol within some Arrayable context (called a language) from which we can produce a symbol or nonsymbol in a predictable and useful way. For example, here’s a piece of code, expressed as a nonsymbol.

{_size => 15, 0 => 40, 1 => 102, 2 => 110, 3 => 32, 4 => 120, 5 => 32, 6 => 40, 7 => 115, 8 => 117, 9 => 99, 10 => 99, 11 => 32, 12 => 120, 13 => 41, 14 => 41}

It can better be expressed as an Ascii string:

"(fn x (succ x))"

Our language might give us the tools to translate this into the nonsymbol we want: {x => x′}.

We can’t trust nonsymbols very far. The space in which they live is too big. Let’s just talk about a simple (but infinitely supported) nonsymbol called Add:

{0 => {0 => 0, 1 => 1, 2 => 2, …}, 1 => {0 => 1, 1 => 2, 2 => 3, …}, 2 => {0 => 2, 1 => 3, 2 => 4, …}, …}

We can’t afford to realize (or “complete”) this nonsymbol in its entirety, but that doesn’t mean we can’t use it, because we’re only going to need a small number of its infinite cases in a real-world running program. We need some way of specifying what it is, without having to allocate infinite memory (which no machine has) for the table. We end up wanting to be able to write something like this (in pseudocode):

$Use name #add for (fn x -> (if (eq x 0) then (fn y -> y) else (fn y -> (succ (#add (pred x) y)))))

When we execute this, we see its operational semantics.

(#add 2 3) is ((#add 2) 3)
. => (((fn x -> (if (eq x 0) ...)) 2) 3)
. => ((if (eq 2 0) ...) 3)
. => ((if false (fn y -> y) (fn y -> (succ (#add (pred 2) y)))) 3)
. => ((fn y -> (succ (#add (pred 2) y))) 3)
. => (succ (#add (pred 2) 3))
. => (succ (#add 1 3))
. => ... => (succ (succ (#add 0 3)))
. => ... => (succ (succ ((fn y -> y) 3)))
. => ... => 5
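The #add definition transcribes almost verbatim into Python, with plain integers standing in for symbols and ordinary functions standing in for the curried nonsymbol (my modeling assumption, as throughout):

```python
# Recursive addition derived from succ and pred alone, mirroring #add.

def succ(x):
    return x + 1

def pred(x):
    return x - 1

def add(x):
    # (fn x -> (if (eq x 0) then (fn y -> y)
    #           else (fn y -> (succ (#add (pred x) y)))))
    if x == 0:
        return lambda y: y
    return lambda y: succ(add(pred(x))(y))

# add(2)(3) evaluates by exactly the succ-chain shown in the trace.
```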

Of course, this version of addition really succs when it comes to performance. (It’s not even tail-recursive!) Why are we performing so much succage just to add two natural numbers? Well, this is a “least assumptions” model and it doesn’t assume, axiomatically, that one knows how to add. We derive that from succ (which we do assume). In reality, you’d almost certainly want to have a primitive called add or + that performs the addition in a more performant way. Invoking it would perform an apply on the appropriate nonsymbol, e.g.:

(+ 5 7) => apply2[{nonsymbol for addition}, 5, 7]

How does this work in the world of real computation? Well, two things are different. First of all, we actually have a lot of those primitives that we didn’t assume to exist. For example, we have addition and multiplication provided by the chip: not the entire nonsymbol, but a process that executes it over a large number of values (e.g. 64-bit integers). We don’t have to write, in general, arithmetic operations in terms of succ. Again, we could, but it’d perform extremely poorly.

On the other hand, there’s a false assumption above, which is that eq takes constant time (one “computation step”). On a finite set of symbols, it can be very fast but, in the real world, this doesn’t hold true. No physical object that we know of can differentiate an infinite number of states and communicate back to us, reliably, which state is observed. That’s why finite sets of symbols and Arrays become important. 

What we really have in the state of a computer is a nonsymbol that has the capability to represent a finite (but very large!) set of symbols. We can think of it as a nonsymbol for which there is some natural number N such that it returns _error whenever we throw anything but an integer between 0 and N – 1 at it, and returns 0 or 1 when applied to such an integer. Since that state is Arrayable (if we know N) we can imagine it as a single integer or (more usefully) as an array of bits or bytes.

Summary

Again, my purpose here isn’t to do groundbreaking computer science, and that’s not what I’m doing. None of these ideas are original to me, and the only thing I’m trying to accomplish is to create a world that presents these concepts in an accessible but still semi-rigorous way. So here’s what we have so far.

  • We have a universe of symbols and nonsymbols. Symbols allow us to compare for equality (eq) but for a nonsymbol we can never assess its full identity; we can only apply it to various arguments to compute a “return” for each. There’s a succ function on symbols that produces a new one. 
  • We have countably infinitely many symbols and a special symbol called 0; the set {0, succ[0], succ[succ[0]], …} is the natural numbers.
  • Nonsymbols can take symbols or nonsymbols as arguments, and return symbols or nonsymbols. They can be thought of as akin to functions, but they actually operate on the whole class (nonsymbols are not a set).
  • We have a currently-informal notion of context that allows us to make observations (applications to arguments) of a nonsymbol in a principled way. Some nonsymbols don’t “live in” that context, meaning that the observations are garbage, but many do. These allow us to interpret certain classes of nonsymbols in a better light. For example, we can never compare arbitrary nonsymbols for equality, but we can compare them for equality as Vectors (i.e. according to observations made in the Vector context).
  • We have a special context called Array that represents a subset of natural numbers. In that context, we observe first an upper bound, and then a natural number data that is less than that.
  • A context is Arrayable if there is a principled way of converting any nonsymbol living in it into an Array, and back, with no context-relevant information lost. (Behavior of that nonsymbol that lives outside the context might, however, be lost.)
  • Languages are Arrayable contexts in which a “string-like” nonsymbol is converted into a symbol or nonsymbol “described” by it. In practice, we think of this as the conversion of a string (file contents) into a function (stateless program).

I don’t know where this is going, or if it’s interesting or useful to anyone, but this is what we have now, and I hope I’ve made at least a few concepts clearer (or more attractive, or more interesting).


Fixing employment with consulting call options.


Here’s a radical thought that I’ve had. There are a lot of individual cases of people auctioning off percentages of their income in exchange for immediate payments, which they use to invest in education or career-improving but costly life changes like geographical moves. Someone might trade 10% of her lifetime income in exchange for $200,000 to attend college. This has a “gimmicky” feel to it as it’s set up now, and it’s something I’d be reluctant to do myself, for the obvious reputational reasons (it seems desperate), but there’s a gem there. There’s a potential for true synergy, not only gambling or risk transfer. If a cash infusion leads a person to have better opportunities and a more successful career, then both sides win. There should be a way for individual people to engage in this sort of payment-out-of-future-prosperity that companies can easily use (it’s called finance). However, a percentage of income is too easy to scam. We need to index it to the value of that person’s time, and the best way to do that is to have the offered security represent a call option on that person’s time.

With the cash-for-percentage-of-income trade, the “Laffer curve” effect is a problem. There’s scam potential here. What if someone sells 10% of his lifetime work income for, say, $250,000, but actually finds ten buyers? Then he gets a $2.5 million infusion right away, which is enough money not to work. He also has zero incentive to work, so he won’t, and his counterparties get screwed because he has no work income. So this idea, on its own, isn’t going to go very far. The securities (shares in someone’s income) aren’t fungible, because the number of them that are outstanding has a major effect on their value.

Let’s take a different approach altogether. This one doesn’t involve a draw against someone’s income. It’s a call option on a person’s future work time. I intend it mainly for consultants and freelancers, but as the realities of the new economy push us all toward being more individualistic and entrepreneurial, it could be extended to something that applies to everyone. It’s not this gimmicky “X percent of future income” trade that doesn’t scale up to a real market (because once the trade stops being novel, we can’t trust people not to sell incentive-affecting percentages of their income, and that problem naturally limits it). How does it work? Here’s a template for what such an agreement would look like.

  • Option entitles holder to T hours (typically 100; with blocks as small as 25 or as large as 2000) of seller’s time (on work that is legal) to be performed between dates S and E at a strike price of $K per hour. For a college student, typical values would be S = date of graduation and E = five years after graduation. For someone out of school, S might be set to the time of signing, and E to five years from that date. 
  • Seller must publish how many such options have been sold so buyers can properly evaluate the load (e.g. no one is allowed to sell 50,000 hours of time in the next 5 years, because that much work cannot be performed.) I would, in general, agree on a 2000-hour-per-year limit. Outstanding load is publicly available information and loads exceeding 1000 hours per year should be disclosed to future employers.
  • If the option is not exercised, then no work is performed (but the writer still retains the value earned by selling it). If it is, seller receives an additional $K per hour. The option is exercised as a block (either all T hours or none) and buyer is responsible for travel and working costs.
  • These options are transferable on the market. This is essential. Few people can assess their specific needs for consulting work, but it’s much easier to determine that a bright college student’s time will be worth $100/hour to someone in five years.
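To make the buyer’s side of the template concrete, here is a back-of-envelope sketch of the all-or-nothing exercise decision. The numbers (T hours, strike K, and the buyer’s estimated hourly value of the seller’s time) are purely illustrative assumptions, not figures from the proposal.

```python
# Back-of-envelope value of exercising a consulting call option.

def option_value_at_exercise(hours, strike, est_hourly_value):
    # Exercise is all-or-nothing (all T hours or none), so the block is
    # worth exercising only if its value exceeds the total strike payment.
    payoff = hours * (est_hourly_value - strike)
    return max(payoff, 0)   # if negative, let the option expire

# e.g. 100 hours at a $50/hour strike, time now estimated at $120/hour:
# the block is worth 100 * (120 - 50) = $7,000 to exercise.
```

This is, of course, before travel and working costs, which the template assigns to the buyer.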

One thing I haven’t figured out yet is the specific scheduling policy beyond an “act in good faith” principle. If two option-holders exercise at the same time, who gets priority? How much commitment must the consultant deliver when exercise occurs (40 hours per week, making full-time employment impossible; or 10 as an upper limit, with the work then furnished over more calendar time)? Obviously, this needs to be something that the option-writer can control; buyers simply need to know what the terms are. The other issue is the ethics factor, which doesn’t apply to most of technology but would be an issue for a small class of companies. Most people would have no problem working for a meat distributor, but we’d want an escape hatch that prevents a vegan’s time from being sold to one, for example. There has to be some right to refuse work, but only based on a genuine ethical disagreement; not because a person has suddenly decided her time is worth 10x the strike price (which will almost always be lower than the predicted value of her time). The latter would defeat the point of the whole arrangement.

In spite of those problems, I think this idea can work. Why? Well, the truth is that this sort of call-option arrangement is already in place, although with an inefficient and unfair structure that leaves both sides unhappy. It’s employment.

How much is an employee’s time actually worth to the operation? Dirty secret: no one really knows. There are so many variables on each individual, each company, and each project that it’s really hard to tell. The market is opaque and extremely inefficient.  For example, I’d guess that a programmer at my level (1.7-1.9) is worth about $1000/hour as a short-term (< 20 hours) “take a look and share ideas” consultant, $250/hour as a freelance programmer, and perhaps $750,000 per year in the context of best-case full-time employment (wherein the package includes not only 2000 hours of work, but also responsibility and commitment) but well under the market salary ($100-250k, depending on location and industry) in a worst-case employment context. Almost no employer can predict where on the spectrum an employee will land between the “best-case” and “worse-case” levels of value delivery.

Employers know that for sociological reasons, a full-time employee’s observed value delivery is going to be closer to the worst-case than best-case employment potential. If you have interesting problems and a supportive environment, then a 1.5-level programmer is easily worth $300,000 per year, and a 1.8+ is worth almost a million. Most companies, though, can’t guarantee those conditions. Hostile managers and co-workers, or inappropriate projects, or just plain bad fit, all can easily shave an order of magnitude off of someone’s potential value. In fact, since doing that involves interacting with people and controlling how they treat each other, that’s seen as boundlessly expensive. If a manager has a long-standing reputation for “delivering” but is a hard-core asshole, is it worth it to unlock the $5 million per year released when he’s forced to treat his reports better, given that there is a chance of upsetting and losing him (and the “delivery” he brings, which he’s spent years making as opaque as possible)? The answer is probably yes, but the reason why he’s a manager is that he’s convinced high-level people not to take that risk. That’s how the guy got that job in the first place.

So what is employment, then? When people join a company, they’re selling their own personal financial risk. That stuff is toxic; no one wants it, so typically people offload it to the first buyer (employer) that comes along, until they’re comfortable enough to be selective (which, for most, doesn’t happen until middle age). When it comes to personal financial risk, corporations have the magic power to dissolve dogshit. They know it, and they demand favorable terms from an expected-value perspective. The employee would rather have a reliable mediocre income than a more volatile payment structure closer (in the long run) to their actual market value. So the company offers a salary somewhere around the 10th-percentile level of that person’s long-term value delivery. If the person works out well, it’s mutually beneficial. She enjoys her work, and renders to the company several times her salary in value. Since she’s happy, and since good work environments are goddamn rare and she’s not going to roll the dice and move to another (probably bad, since most are) corporate culture, a small annual raise and a bonus are enough to keep her where she is. What if she doesn’t work out? Well, she’s fired. Ultimately, then, corporate employment is a call option on the employee’s long-term ability to render value. The problem? The employee can opt out at any time. The option is contingent not merely on personal happiness, but on fulfillment. I’ll get back to that.

Why is my call-option structure better? There are a couple reasons. Obviously, everyone should have the fundamental right to opt-out of work they find objectionable. What I do want to discourage (because it would ruin the option market) is the person who refuses to work at a $75 strike because she becomes “a rockstar” and she’s now worth $1000/hour. That’s not fair to the option-holder; it’s not ethical. However, I feel like these opt-outs will be a lot rarer than job-hopping is. Why? First, everyone knows that job-hopping is a necessity in the modern economy. Almost no one gets respect, fair pay, or interesting work without creating an implicit bidding war between employers and prospective future opportunities. Sure, some manageosaurs who mistake their companies for nursing homes still enforce the stigma against job applicants with “too many jobs”, but people who weren’t born before the Fillmore administration have generally agreed that job hopping for economic reasons is an ethically OK thing to do. Two thousand hours of work per year is a gigantic commitment and exclusive of other opportunities, and almost no one would call it a career-long ethical commitment. The ethical framework (no job hopping, ever!) that enforces the call-option value (to employer) of employment is decades out of mode. It never made sense, and now it’s laughably obsolete. I would, however, say that a person who writes a call option on 100 hours of future work has an ethical responsibility to attend to it in good faith.

An equally important thought is that consulting is a generally superior arrangement to office-holding employment, except for its inability to deliver reliable income (which a robust options market could fix). Why? Well, people quit these monolithic 2000-hour-per-year office jobs all the time (often not by actually changing jobs, but by underperforming or even acting out, until they’re fired, and that takes a long time) because they don’t feel fulfilled. That’s different from being happy. A person can be happy (in the moment) doing 100 hours of boring work if he’s getting $20,000 for it. It’s not the work itself that makes “grunt work” intolerable for most people, but the social message. That’s why true consultants (not full-time contractors called such) are less likely to underperform or silently sabotage an effort when “assigned” grunt work; employees expect their careers to be nurtured in exchange for their poor conditions, while consultants get better conditions but harbor no such expectation.

On that psychology of work, I know people who can’t clean their own houses, not because the work is intolerable (it’s just mundane) but because they can’t stand the way they feel about themselves when doing such chores. However, a sufficient hourly rate will override that social message for almost anyone. How many people wouldn’t clean someone’s house, 100 hours per year, for 10 times their hourly wage? Such a person won’t be fulfilled by the work at any price, but that’s a different matter. It’s not hard to find someone who will be happy to perform work that most people find unpleasant. Consulting arrangements allow that price to be found. But full-time position-holding employment leaves no room for this middle ground between fulfillment and misery: people will clean, if paid to do it, but no one wants to be a cleaner forever.

The nice thing about consulting is that the middle ground between fulfillment and misery exists. You can go and do work for someone but you don’t have to be that person’s subordinate, which means that work that is neither miserable nor fulfilling (i.e. almost all of it) can be performed without severe hedonic penalty (i.e. you don’t hate that you do it). Because of modularity and the potential for multiple employment, you can refuse an undesirable project without threatening your reputation or unrelated income streams– something that doesn’t apply in regular employment, where refusing to paint that bike-shed that hideous green-brown color will have you judged as a uniform failure by your manager, even if you’re stellar on every other project. A consultant is a mercenary who works for pay, and only identifies with work if he chooses to. He sells some of his time, but not his identity. An employee, on the other hand, is forced into a monolithic, immodular 2000-hour-per-year commitment that forces identification with the work, if only because the obligation is such a massive block (yes, the image of intestinal exertion is intentional) that it dominates the person’s life, forcing identification either in submission (Stockholm Syndrome, corporate paternalism, and the long-term seething anger of dashed expectations in those for whom management doesn’t take the promised long-term interest in their careers) or in rebellion (yours truly).

So let me tie this all together rather than continuing what threatens to become a divergent rant on employment and alienation. An employee’s main selling point is a call option written to her employer. If she matches well with the employer’s needs and its people, and if the employer continues to fulfill her desires for industrial fulfillment (which change more radically than the matter of what someone will be merely happy to do at a fair rate; the “good enough to be happy” set of work becoming broader with age, while fulfillment requirements go the other way and get steeper), and if the salary paid to her is kept within an acceptable margin (usually 20 to 40%) of her market value, she’ll deliver labor worth several times the strike price (her agreed-upon salary, plus marginal annual wage increases). Since there are a lot of ifs involved, the salary at which a company can justify employing her is several times less than her potential to render value: a mediocre salary that forces her into long-term wage-earning employment, when the value of her work at maximum potential would justify retirement after five to six years. That’s not malice or unfairness; in expected-value terms it’s arguably quite fair. It’s an artifact of opacity and low information quality.

Why is it like this? The truth is that the employer doesn’t participate in her long-tail upside, as it would with a genuine call option. In the worst cases, they do not exercise the option and stop employing her, but they pay transactional fees (warning time, severance, lawsuit risk, morale issues) associated with ending an employment relationship. In the mediocre cases (middling 80%) they collect some multiplier on her salary: the call option is exercised, and the company wins enough to generate a modest but uninspiring profit. In the very-good cases, she performs so well that it’s impossible to keep this from translating into macroscopic visibility and popping her market value. Since it’s not a real call option (she has no obligation at all to continue furnishing work) there is no way for the company to collect. An actual call option on some slice of her time would be superior, from the corporate perspective, because it insures them against the risk that her overperformance leads to total departure (i.e. finding another job).

How would we value such a call option? Let’s work with a few model cases. One is Zach, an 18-year-old recently admitted to Stanford intending to major in computer science, with the obvious ability to complete such a course. He needs $200,000 to go to school. Let’s say that he puts the start date of the option at his rising-sophomore summer (internship) and the end date at 5 years past graduation. What’s a fair strike price? I would say that the strike price should be, in general, somewhere around 1/1500 of the person’s expected annual salary (under normal corporate employment) at the end of the exercise window. For Zach, that might be $80 per hour. The actual productive value of his time, at that point? (We can’t use a “stock price” for a Black-Scholes model, because the value of the underlying is affected by conditions including the cash infusion attendant to the sale; that’s why it’s synergistic.) I’d guess that it’s around $120, with a (multiplicative) standard deviation of 50%, which over 9 years equates to an annualized volatility of 16.7%. Using a risk-free rate of 2%, that gives the call option a Black-Scholes value of about $56. This means Zach needs to sell about 3570 hours’ worth of options to finance going to college. Counting each of his four college years as only 0.3 of a working year (and the five years after graduation in full, for about 6.2 year-equivalents), that’s 576 hours per year-equivalent– not of free work, but of commitment to work at a potentially below-market “strike” price of $80 per hour. I think that’s a damn good deal for Zach, especially in comparison to student debt.
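These figures are easy to sanity-check with a standard Black-Scholes calculation. Here’s a minimal sketch (variable names are mine, not from the text; the 16.7% figure is just the 50% nine-year standard deviation annualized as 0.50/√9):

```python
# Sketch: valuing a call option on future work, per the Zach example.
# Standard Black-Scholes, treating the hourly value of the work as the
# "stock price" -- an approximation, for the reasons noted above.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Zach: hourly value S = $120, strike K = $80, r = 2%, T = 9 years.
sigma = 0.50 / sqrt(9)        # 50% nine-year std. dev., annualized: ~16.7%
price = bs_call(S=120, K=80, r=0.02, sigma=sigma, T=9)
hours = 200_000 / price       # options to sell to raise $200k
print(round(price, 2), round(hours))  # ≈ $55.8 per option, ≈ 3,580 hours
```

The output lands right on the $56-per-option, roughly-3,570-hour figures quoted above, modulo rounding.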


Alice is a 30-year-old programmer. She lives in Iowa City and has maxed out at an annual salary of $90,000 per year doing very senior-level work. The only way to move up is into management, which doesn’t appeal to her. She suspects that she could do a lot better in New York or San Francisco, but she can’t get jobs there because she doesn’t know anyone and resume-walls are broken– besides, how many VC-funded startups will hire a 30-year-old female making $90,000?–  and consulting (until this options market is built) is even more word-of-mouth/broken than regular employment. She knows that she’s good. She’d like to sell 7500 hours of work to the market over the next five years. Assume the option sale is enough to kick-start her career; then, her market value after five years is $250 per hour, but she sets her strike at $90. Since she’s older and her “volatility” (uncertainty in market value) is lower, let’s put her at 13% rather than Zach’s 16.7%. The fair value of her call options is $168 per hour, so she’s able to raise $1.26 million immediately: more than enough to finance her move to a new city.

Barbara is a 43-year-old stay-at-home mother whose youngest child (of five) reached six years of age. She’s no longer needed around the house, but has enough complexity in her life that full-time employment isn’t very tenable. However, she’s been intellectually active, designing websites for various local charities and organizations for a cut rate. She’s learned Python, taken a few courses on Coursera, and excelled. She wants to work on some hard programming problems, but no one will hire her because of her age and lack of “professional” experience. She decides to look for consulting work. She’s still green as a programmer, but could justify $100 per hour with access to the full market. She’s committing 1000 hours over one year, and she decides that $30/hour is the minimum hourly rate to motivate her, so she offers that as the strike. With volatility at 15% (although that’s almost irrelevant, given the low strike) she raises $71 on each option, and gets $71,000 immediately, with 1000 hours of work practically “locked in” due to the low strike price (at which anyone would retain her).
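Barbara’s low strike makes the option deep in the money, which is why volatility is “almost irrelevant” here: a call can never be worth less than the discounted intrinsic value S − K·e^(−rT), and at her numbers that floor already accounts for nearly all of the $71 premium. A quick check, using the figures from the example:

```python
# Sketch: the no-arbitrage floor on Barbara's deep-in-the-money option.
# Figures assumed from the example: S = $100/hour market value, K = $30
# strike, r = 2%, T = 1 year.
from math import exp

S, K, r, T = 100.0, 30.0, 0.02, 1.0
floor = S - K * exp(-r * T)   # a call is worth at least this much
print(round(floor, 2))        # ≈ 70.59; Black-Scholes adds only pennies
```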

Cedar City High is a top suburban public high school in eastern Massachusetts. They’d like to have an elective course on technology entrepreneurship, and student demand is sufficient to justify two periods per day. Teaching time, including grading and preparation, will be 16 hours per week, times 40 weeks per year, for 640 hours. That’s not enough to justify a full-time teaching position, and it’d preferably be taught by someone with experience in the field. Dave is coming off yet another startup, and has had some successes and failures but, right now, he’s decided that he wants to do something useful. He’s sick of this VC-funded, social-media nonsense. He’s not looking to get rich, but he needs to deliver some value to the community, and get paid enough for it to survive. He sets a minimum strike at $70 per hour, and he’s looking for about 640 hours of work. Based on their assessments, Cedar City agrees to pay $15 for the options and exercise them, meaning they pay $85 per hour (or $54,400 per year, less than the cost of a full-time teacher) for the work.

Emily’s a 27-year-old investment banker who has decided that she hates the hours demanded by the industry and wants out. Her last performance review was mediocre, because the monotony of the work and the hours are starting to drain her. With her knowledge of finance and technology, she knows that she’ll be killing it in the future– if she can get out of her current career trap. However, five years of 80-hour work weeks have left her stressed-out and without a network. She’ll need a six-month break to travel, but FiDi rent (she can’t live elsewhere, given her demanding work schedule) has bled her dry and she has no savings. She realizes that the long-term five-years-out hourly value of her work– if she can get out of where she is now– is $300 per hour at median, with an annualized volatility of about 30% (she is stressed out). Unsure about her long-term career path, she offers a mere 500 hours (100 per year) with a five-year window. She sells the options at a $200/hour strike. The Black-Scholes value of them is about $139 per hour, or roughly $69,500 for the block. That gives her more than enough to finance her six months of travel, regain her normal emotional state, and find her next job.

So this is a good idea. That’s clear. What, pray tell, are we waiting for? As a generation, we need to build this damn thing.


Why an Atlas Shrugged smart people strike would never work.

I’m not a major fan of Ayn Rand, but one of the more intriguing ideas in her work is the premise of Atlas Shrugged, which depicts a world in which the “people of the mind”– business leaders, artists, philosophers– go on strike. It’s an attractive idea. What would happen if those of us in the “cognitive 1 percent” decided, as a bloc, to secede from the mediocrity of Corporate America? Would we finally get our due? Would we stop having to answer to idiots? Would the dumb-dumbs come crawling to us, begging that we return?

No. That would never happen. They have as much pride as we do.

It’s an appealing concept, for sure. Individually, not one of us is essential to society– that’s not a personal statement; no one person is that important. Any one of us could be cast into the flames with little cost to society. Yet we tend to feel like, as a group, we are critical. We’re right. I am insignificant, but societies live or die based on what proportion of the few thousand people like me per generation get their ideas into implementation, and it’s only after the fact that one knows which side of the critical percentage a society is on. Atlas could shrug. Society could be brought to its knees if the most intelligent people developed a tribal identity, acted as a political bloc, were still ignored, and chose to secede. Science and the arts would stagnate, the economy would fall into decline, and society would be unable to correct for its own morale problems. The culture would crater, innovation would die, and whatever society endured such a “strike” would quickly fall to third-class status on the world stage.

That doesn’t mean we, the smart people who might threaten such a strike, would get whatever we want. Imagine trying to extort a masochist. “I’ll beat you up unless you give me $100.” “You mean I can not give you $100 and get beaten up? For free? I’ll take that option; you’re so kind.”

I don’t mean to call society masochistic, because it isn’t so. Societies don’t make choices. People in them do, often with minimal or no concern for the upkeep of this edifice we call “civilization”. Now, the people at the top of ours (Corporatist America) are stupid, short-sighted, uncultured, and defective human beings. All of that is true. To assess them as weak because of this is inaccurate. I’m pretty sure that crocodiles don’t crack 25 on an IQ test, but I wouldn’t want to be in a physical fight with one. These people are ruthless and competitive and they’re very good at what they do– which is to acquire and hold position, even if it requires charming people (including people like us, much smarter than they are) to get it. They’d also rather reign in hell than serve in heaven. That’s why we’ll never be able to pull an Atlas Shrugged move against them. They care far more about their relative standing in society than about its overall health. We’d be giving them exactly what they want: less competition to hold the high social status they currently have.

Also, I think that an Atlas Shrugged phenomenon is already happening in American society, with so little fanfare as to render it comically underwhelming. Smart people all over the country are underperforming, mostly not by choice, but because they are not getting opportunities to excel. Scientists spend an increasing amount of time applying for grants and lobbying their bosses for the autonomy that they had, implicitly, a generation ago. The quality of our arts has suffered substantially. Our political climate is disastrous and right-wing because a lot of intelligent people have just given up. Has the elite looked at the slow decline of the society and said, “Man, we really need to treat those smart people better, and hand our plum positions over to those who actually deserve them?” Nope. That has not happened; it would be absurd to think of it, as the current elite has too much pride. And if we scale that up from unintentional, situational underperformance to a full-fledged strike of the cognitive elite, we will be ignored for doing so. We won’t bring society to a calamitous break and get our due. We’ll see slow decay and the only people smart enough to make the connection between our strike and that degradation will be the strikers themselves. We already have a pervasively mediocre society and things still work– not well, but we haven’t seen catastrophic society-wide failures yet. It might get to that point, but it’ll be too late for the kind of action we might want.

In sum…

  • fantasy: the cognitive elite could go on “strike” and the existing elite (corporate upper class, tied together by social connections rather than anything related to excellence) would, after society fell to pieces, beg us to rejoin on our terms, inverting the power dynamic that currently exists between us and them.
  • reality: those parasitic fuckers don’t give a shit about the broad-based health of society. We’re not exactly a real competitive threat to them because they hold most of the power, but we do have some power and we’d just be making their lives easier if we withdrew from the world and gave that power up entirely.

As intellectuals, or at least as people who aspire to be such, we look at civil decline as tragic and painful. When we learn about expansive civilizations that fall into decadence and ruin, we tend to imagine it as a personal death that’s directly experienced, rather than a gradual historic change that few people notice in contrast to the day-to-day struggles of higher personal importance. So we often delude ourselves into thinking that “society” has its own will and makes “choices” according to its own interests, as opposed to the parochial interests of whatever idiots happen to be running it at the time. Thus, we believe that if “society” refuses to listen to our ideas and place us in appropriately high positions, we can withdraw as a bloc, render it ineffective, and impel it to “come crawling back” to us with better terms. We’re dead wrong in believing that this is a possibility. Yes, we can render it ineffective through underperformance (hell, it’s already arguably at that point, just based on the pervasive conformity and mediocrity that have declawed most of us) but this reorganization that we seek will never happen. We tend to overestimate the moral character– while underestimating the competitive capability (again: think crocodiles)– of our enemies. They are all about their own egos and they will gladly have society burn just to stay on top.

One concrete example of this is in software engineering, where the culture is mostly one of anti-intellectualism and mediocrity. Why is it this way? Given that an elite programmer is 10-100 times as effective as a mediocre code monkey, why do companies tailor their environments to the hiring en masse of unskilled “commodity” developers? Bad programmers are not cheap; they’re hilariously expensive. So what’s going on? The answer is that most managers don’t care about the good of the company. It’s their own egos they want to protect. A good programmer costs only 25 percent more than a mediocre one, but is 5 times as effective. Why not hire the good one, then? The answer is that the manager loses his real motivation for going to work: being the smartest guy in the room, and the unambiguous alpha male. Saving the company some money is not, to most managers, worth that price.

What we fail to realize, as the cognitive 1 percent, is that while society abstractly relies on us, the people running society think we’re huge pains in the ass and would be thrilled not to have to deal with us at all.

Do I believe that it’s time for the cognitive 1 percent to mobilize, and to take back our rightful control over society’s direction? Absolutely. In fact, I think it’s a moral responsibility, because the world is facing some problems (such as climate change) too complex for the existing elite to solve. The incapacity and mediocrity of our current corporate elite is literally an existential risk to humanity. We ought to assert ourselves, as a group, and start fixing the world. But the Atlas Shrugged model is the wrong way to go about that.


Corporate “Work” is the most elaborate victim-shaming structure ever devised.

There are a lot of things to hate about the institutional pogrom that the middle and working classes must suffer in the name of Work. It’s not, of course, the actual work (i.e. productive activity) that is so bad. That’s often the best part of it! At any rate, work demands at Work are pretty light. The work itself– when you’re lucky enough to actually get a real project– is the fun bit. It’s the private-sector social climbing and subordination and the pervasive and extreme narcissism of the unethical assholes who are in charge that make it such hell. There’s a specific economic reason why it’s so horrible, and a simple enough one that I’ll be able to mention it on the way to my main topic (victim shaming). I’ll cover the economics first, and then progress to the sociological victim-shaming problems.

Why Work is starting to fail

In 2013, ignoring technological change is not an option. It affects everything. No one’s job will be the same in 20 years as it is now– and that’s a good thing. However, the transition is dangerous. Broadly speaking, there seem to be two schools of thought on the labor market’s predicted response to enhanced efficiency, global labor pools, and automation of work. They are labor finitism and labor progressivism.

Labor finitism is the idea that there’s a fixed pool (“lump of labor”) of work that society is willing to pay for. If labor finitism is accurate, then technological improvements only make the situation worse for the proles: they now have to compete harder for a shrinking pool of available work. If labor finitism is true, then trade protectionism and xenophobia become necessary. Unfortunately, labor finitism means that technological advancements will destroy the middle and working classes, as their jobs disappear forever, and they are deprived of the resources that would enable them to compete for the dwindling supply of high-quality jobs.

Labor progressivism is the more utopian alternative in which the enhanced capability brought in by technology gives leverage to the workers, and rather than automating them out of jobs, enhances their capability. Rather than being pushed out of the workforce, they’re able to do more interesting stuff with their time, add more value, and therefore be better off in all ways (higher quality of work, better compensation). Labor progressivism is the favored stance of Silicon Valley technologists, but unfortunately doesn’t accurately represent the reality faced by middle-class Americans.

Which of these two opposing stances is right? Well, the actual state of society is somewhere between the two extremes, obviously. It’s a mix of the two. Labor finitism seems to be more true in the short term, while the long-term economic evolution has a progressivist feel; there seems to be eventual consistency in the system, but it takes a long time for things to get to an acceptable state, and people need to eat immediately. As Keynes said, in the long run we are all dead. If we can make the convergence to labor progressivism happen faster, we should.

Here’s what I’ve observed. For pre-defined, subordinate wage/salaryman work, labor finitism is correct. The jobs that built the middle classes– complacent, entrepreneurially averse, inclined to overspend rather than plan for eventual freedom– of Western societies are going away, and this is happening at a rapid pace, leaving a large number of people (who were effectively farmed by a savvier elite, but are now unneeded livestock) just screwed. However, for those who have the resources to own their lives rather than renting their existences from a boss, there’s an infinite (i.e. labor progressivism) amount of useful work to be done: building businesses and apps, freelance travel writing, building skills and trying out radically different careers. Payoffs for such work are intermittent, and discovery costs are high– your first few attempts to “break out” will typically be money-losers– but those who have the resources to stomach the income volatility have access to a much higher-quality pool of work that is not going to be automated away in the next year.

So labor finitism and progressivism are both correct to some degree, the question of which is more in force depending on one’s present resources. Those who can stomach short-term income volatility live in a labor-progressivist world. For the 99%, however, labor finitism is more accurate.

So what is corporate Work?

Corporate Work is the labor-finitist ghetto left for “the 99%”, those of us who haven’t had the luck or resources to escape into the labor-progressivist stratosphere. It’s a zero-sum world. If you get 5 times as good as you currently are at your job, then 4 people sitting near you lose their jobs. There’s only a small amount of potential work that can be performed in good graces by the company, and that work-definition function is performed by a small set of incompetent high priests called “executives” whose informational surface area is too small for them to do it well, and who tend to use their social power and credibility for corrupt and extortive purposes, rather than advancing the good of the company.

What inevitably comes out of this is that there’s very little high-quality work available in the corporation. There are plenty more things that could be done and would be useful to it, but people who invested time in those would get into trouble for blowing off their assigned projects. So, while “reality” for a theoretically profit-maximizing company might be labor-progressivist (there’s an unending stream of improvements the firm might make that would render it more profitable) the issue of executive sanction (i.e. you can only keep your job by working on the pre-defined, often uninspiring, stuff) creates a labor-finitist atmosphere in which most time is spent squabbling over the few high-quality projects that exist.

Indeed, this is the most painful thing about corporate Work. It’s a lifestyle based not on doing work, but on getting it. Excellence doesn’t matter. Only social access does. It’s all a bunch of degenerate social climbing that has nothing to do with excellence or addition of value. It’s a world run by con artists who steal the trust of powerful people; those who are busy actually trying to excel at things (i.e. actually working) never develop the social polish or credibility necessary to do that, so they end up being marginalized.

Paul Graham wrote about this, and he got some details right and some wrong, in the essay “Why Nerds Are Unpopular”. It’s worth reading in its entirety, because while Graham gets some of the finer points wrong (and I’ll discuss that) he’s extremely insightful and articulate overall.

Graham compared the stereotypically negative depiction of high school (a cruel society governed by arbitrary dominance hierarchies and all-consuming conformity, existing because there isn’t meaningful work for 17-year-olds to do) to “the adult world” as a rich man (one who truly owns his life, rather than renting it from a boss) would perceive it– a place where there’s actual work to be done, and the intermittency of real work’s rewards is tolerable because of one’s financial status.

Here are two direct quotes to show what Graham’s talking about:

In almost any group of people you’ll find hierarchy. When groups of adults form in the real world, it’s generally for some common purpose, and the leaders end up being those who are best at it. The problem with most schools is, they have no purpose. But hierarchy there must be. And so the kids make one out of nothing.

We have a phrase to describe what happens when rankings have to be created without any meaningful criteria. We say that the situation degenerates into a popularity contest. And that’s exactly what happens in most American schools. Instead of depending on some real test, one’s rank depends mostly on one’s ability to increase one’s rank. It’s like the court of Louis XIV. There is no external opponent, so the kids become one another’s opponents.

When there is some real external test of skill, it isn’t painful to be at the bottom of the hierarchy. A rookie on a football team doesn’t resent the skill of the veteran; he hopes to be like him one day and is happy to have the chance to learn from him. The veteran may in turn feel a sense of noblesse oblige. And most importantly, their status depends on how well they do against opponents, not on whether they can push the other down.

Court hierarchies are another thing entirely. This type of society debases anyone who enters it. There is neither admiration at the bottom, nor noblesse oblige at the top. It’s kill or be killed.

This is the sort of society that gets created in American secondary schools. And it happens because these schools have no real purpose beyond keeping the kids all in one place for a certain number of hours each day. What I didn’t realize at the time, and in fact didn’t realize till very recently, is that the twin horrors of school life, the cruelty and the boredom, both have the same cause.

Paul Graham depicts the American suburban high school as being a society that turns to cruelty because, with the lack of high-impact, real-world work to be done, people create a vicious status hierarchy based entirely on rank’s ability and drive to perpetuate itself. He also establishes that similar meaningless hierarchies form in prisons and among idle upper classes (“ladies-who-lunch”) and that the pattern of positional cruelty is similar. Here is, I think, where he departs a bit from reality, taking fortunate personal experience to be far more representative of “the real world” than it actually is:

Why is the real world more hospitable to nerds? It might seem that the answer is simply that it’s populated by adults, who are too mature to pick on one another. But I don’t think this is true. Adults in prison certainly pick on one another. And so, apparently, do society wives; in some parts of Manhattan, life for women sounds like a continuation of high school, with all the same petty intrigues.

I think the important thing about the real world is not that it’s populated by adults, but that it’s very large, and the things you do have real effects. That’s what school, prison, and ladies-who-lunch all lack. The inhabitants of all those worlds are trapped in little bubbles where nothing they do can have more than a local effect. Naturally these societies degenerate into savagery. They have no function for their form to follow.

When the things you do have real effects, it’s no longer enough just to be pleasing. It starts to be important to get the right answers, and that’s where nerds show to advantage. Bill Gates will of course come to mind. Though notoriously lacking in social skills, he gets the right answers, at least as measured in revenue.

So, apparently, it gets better if you have the resources to pursue work that has meaning, rather than the subordinate people-pleasing nonsense associated with high school (and, as it were, most corporate jobs). That’s what it’s like if you’re rich enough to escape corporate hell for good. Getting the right answers, rather than pleasing the right people, becomes important. If you have a typical please-your-boss subordinate position, though… guess what? Paul Graham’s depiction of high school is exactly what you’ll face in the supposedly “adult” world. The boredom and cruelty don’t end. You just get older and sicker and less able to handle it, until you’re discarded by that world and it’s called “retirement”.

The hellish social arrangement that Graham describes is the result of labor finitism, imposed artificially by the testability needs of school (i.e. have everyone doing the same work) and also by the degraded economy of a prison (people intentionally separated from society, often because of psychological or moral defects) or of idle ladies-who-lunch (who live in comfort, but have no power). People group together, but the lack of real work means that there’s a lot of squabbling for status. In a labor-finitist world, you have zero-sum internal competition and a social-status hierarchy that subverts any meritocracy that one might try to impose. High school students are “supposed” to care about grades and learning and doing good work; but most of them actually care more about in-group social status. That turns out to be great preparation for the corporate world, in which “performance” reviews reflect (and perpetuate) social status rather than having anything to do with the quality of a person’s work. (People who do actual work don’t get “reviewed” or, if they do, it’s a rubber-stamp formality; everyone is too busy actually doing things.) Work is a world in which grades are assigned not by teachers but by whatever group of kids happens to be popular at the time.

What Paul Graham describes as “the adult world” is what life looks like from his fortunate position. I won’t use “privileged” here– Graham’s brilliant and clearly earned every bit of his success– but it’s not typical for most people. If you have the money to own your life instead of renting from a corporate master, then labor progressivism (i.e., what “adulthood” is supposed to be, a lifestyle based on providing value to others rather than subordinating to a parochial protection-offerer called a corporate manager) is what the world actually looks like. The big question for us in technology is: how do we make a progressivist/high-autonomy world available to more people?

Trust

The biggest problem for technologists is trust. Free-floating, high-return-seeking capital is abundant in the world, but the gatekeepers (venture capitalists who’ve used internal social protocols to form an almost certainly illegal collusive phalanx, despite nominally being in competition with each other) have made it scarce. Talent finds capital inaccessible. Meanwhile (somewhat insultingly) corporate managers consistently complain about a “talent shortage”: capital finds talent just as hard to find and retain. Everyone’s wrong. Capital isn’t scarce, nor is talent. Both are abundant, and something else is keeping the two from meeting. The problem is a bilateral lack of trust.

Why do companies “acq-hire” such depressingly mediocre talent at a panic price of $3-6 million per software engineer? It’s because a trusted employee is worth 10-100 times a typical one. So why don’t these companies, instead of shelling out billions to acq-hire mediocrity, simply trust their own people more? Well, that’s a deep sociological question that I won’t be able to explore fully. The short answer is that the modern corporation’s labor finitism (driven by closed allocation, or the right of workers only to work on projects with pre-existing executive sanction and middle-management protection) creates a society exactly like Graham’s vision of high school, which means that nasty political intrigues form and petty hatreds build in the organization, to the point that outsiders are deemed a better bet for allocation to real work than internal people, the latter having been tainted by years of inmate life. Corporate society is so dismal and mediocre, and so removed from getting actual work done, that people who participate in it (as opposed to the fresh-faced rich kids whose parental connections bought them VC and tech press and favorable terms of “talent acquisition”) are perceived, whatever the reality, as too filthy to be trusted with real assignments.

It’s not a pretty picture. Corporations hire people on the false pretense of mentorship and career development. “Yes, we’ll pay you a pittance now, and you’ll spend your first two years on the garbage work that no one wants to do; but we’ll advance your career and make you eligible for much better jobs in the future.” What they do is the opposite. They don’t reward people who “suck it up” and do years of shit-work with better projects in the future, because a person who spent two years not learning anything is less eligible for quality work than when he came in the door; they hire people with better work experience for that. Also, it’s not that they deliberately lie to hire people. It’s that they just don’t have much high-quality work to allocate, and few companies are courageous enough to try Valve-style open allocation. So what actually emerges is a society in which high-quality work either goes to political strongmen (i.e. extortionist thugs who intimidate others into supporting their own campaigns for high social status) or to outsiders that are usually either acq-hired in or started in privileged positions by investor mandate (i.e. a venture capitalist uses your company to mint executive sinecures for his underachieving friends).

Victim-shaming

How does all of this evolve into an elaborate system of victim-shaming? Well, it works like this. People are evaluated, in the world of Work, based on their previous experiences. However, because of the corruption in project allocation, a person’s “work experience” is actually a time series of his political favor, not his level of accomplishment. What that means is that people who fall out of favor are judged to be underperformers and flushed away. There is no more brazen culture of victim-shaming than the private-sector social-climbing hell we call Work.

The rules are clear: if you get robbed, it’s your fault. If your boss steals from you by abusing process, giving you undesirable work ill-matched to your talents, and ruining your career, it’s because you (not he) are a subhuman piece of shit. You deserve whatever he does to you, and he has the right to do it (that’s the perk of being a manager). He proved he was stronger, by robbing you and you letting him (as if you had a choice) and he therefore deserves everything he stole from you. You deserve nothing other than more humiliation. That’s what resumes are for: to create a distributed, global social-status hierarchy based on a person’s political-favor trajectory. That’s why job titles and dates matter but accomplishments don’t. It’s not about what you achieved; it’s about whether people saw you as threatening and strong (and gave you impressive titles) or as weak (and robbed you).

I hope the causes of ethical bankruptcy in Corporate America are visible, by now. A world in which thieves win (they stole it, thus they earned it) while the victims are treated like subhuman garbage is one in which almost no one can afford to be ethical (although the most successful people invest considerable energy in appearing ethical). For example, some people lie on their resumes. I don’t, but that’s for cosmetic reasons. While my work experience isn’t at the quality level that I deserved, it’s high enough that I prefer the complexity-reduction (a cosmetic concern) of sticking to the truth rather than the gains associated with a status-inflating lie that might fail and lower my status more than telling the truth would. I, personally, don’t lie; but I fully support those who do. They are being more ethical than the system they are deceiving. They express their honesty– in the form of contempt for an evil system– through deceiving that system’s officers.

The corporate culture we call “Work” is one where people are kicked when they’re down. It has no morals or ethics and there’s no point in pretending otherwise– not at this point, at least. When people lie on their resumes in harmless ways (quack doctors are criminals and should be jailed; but there’s no good reason to feel anything negative toward someone who self-assigns the title “VP” when he was a mere Director) I support them. Defective structures must be burned down, and I support the barbarians at the gate. If those who’ve been robbed for generations respond by stealing back what was taken from them, I think that it’s the best thing that can happen from here.


One-month break from Hacker News.


I don’t have much use for TechCrunch– it’s symptomatic of the many things that are wrong with this weird advanced-marketing industry, hijacked by fired/disgraced finance guys who’ve reinvented themselves as “startup founders”, that we still call “tech”– but I read this article, posted today, about Hacker News. Here’s the piece that struck me:

One of Graham’s biggest pain points is the “schoolyard quarrels” he finds on the site on a daily basis, and wishes “users would stop misbehaving.” He cites the example of users organizing voting rings to purposefully vote up stories, which caused Graham to develop additional software to detect this. He adds that more users are trolling under newly created accounts, and are deliberately starting flame wars on the site.

“I wish I could get people to stop posting comments that are stupid or mean,” he says. “It takes only one or two negative comments and a discussion turns into a flame war.”

Graham adds that he gets a lot of vitriol from users personally with accusations of bias or censoring. He clarifies that he, and the other human editor, rarely take links down unless they are dupes. Even with tabloid or gossip stories that surface, Graham will not take them down. Users with high karma points tend to flag these stories, he adds, and they can then be taken down.

“Hacker News makes me sad a lot,” says Graham. “I wish the community would behave the way they did when it was a little village.”

I think I am, mostly, a good contributor to Hacker News, but there’s been a decline in the quality of my posts lately. Maybe I’m part of that problem. Perhaps it would be good for me to take a hiatus.

I have an anger problem. It’s made worse by the fact that most of the things that anger me genuinely deserve to be hated. That makes me right in opposing them. The software industry is in a fucked-up state and we (the technologists who should be running it, instead of the smooth-talking assholes who don’t love– or even understand the first thing about– technology, problem-solving, or code) ought to stop letting ourselves be a conquered people. All that is true. I am fighting a good fight. But do I need to fight it all the time? I’m not sure, and certainly I should not inject so much anger with such frequency into one of the best discussion forums currently on the Internet.

I’m taking a break. One month, and then I’ll decide what to do from there.



What Ayn Rand got right and wrong


Ayn Rand is a polarizing figure, and it should be pretty clear that I’m not her biggest fan. I find her views on gender repulsive and her metaphysics laughable. I tend to be on the economic left; she heads to the far right. She and I have one crucial thing in common– extreme political passions rooted in emotionally damaging battles with militant mediocrity– but our conclusions are very different. Her nemesis was authoritarian leftism; mine is corporate capitalism. Of course, an evolved mind in 2013 will recognize that, while both of these forces are evil, there isn’t an either/or dichotomy between them. We don’t need authoritarian leftism or corporate capitalism, and both deserve to be rejected out of hand.

What did Rand get right?

As much as I dislike Ayn Rand’s worldview, it’s hard to say that it isn’t a charismatic one, which explains her legions of acolytes. There are a few things she got right, and in a way that few people had the courage to espouse. Namely, she depicted authoritarianism as a process through which the weak (which she likened to vermin) gang up on, and destroy, the strong. She understood the fundamental human problem of her (and our) time: militant mediocrity.

Parasitism, in my view, isn’t such a bad thing. (I probably disagree with Rand on that.) After all, each of us spends nine months as a literal biological parasite. I am actually perfectly fine with much of humanity persisting in a “parasitic” lifestyle wherein they receive more sustenance from society than they would earn on the market. I’m fine with that. It’s a small cost to society, and the long-term benefits (especially including the ability for some people to escape parasitism and become productive) outweigh it. What angers me is when the parasites on the opposite end (the high one) of the socioeconomic spectrum behave as if their fortune and social connections entitle them to tell their intellectual superiors (most viscerally, when that intellectual superior is me) what to do.

Rand’s view was harsh and far from democratic. She conceived of humanity consisting of a small set of “people of the mind” and a much larger set of parasitic mediocrities. In her mind, there was no distinction between (a) average people, who stand out neither in accomplishment nor in militancy, and (b) the aggressive, anti-intellectual, and authoritarian true parasites against which society must continually defend itself. That was strike one: it just seemed bitchy and mean-spirited to decry the majority of humanity as worthless. (I can’t stand with her on that, either. We’re all mediocre most of the time; it’s militant mediocrity that’s our adversary.) Yet most good ideas seem radical when first voiced, and their proponents are invariably first attacked for their tone and attitude rather than substance, a dynamic that means “bitchiness” is often positively correlated with quality of ideas. I think much of why Rand’s philosophy caught on is that it was so socially unacceptable in the era of the American Middle Class; and intellectuals understand all too well that great ideas often begin as rejected ones.

To understand Ayn Rand further, keep in mind the context of the time during which she rose to fame: the American post-war period. Even the good kinds of greed were socially unacceptable. So a lot of people found her “new elitism” (which was a dressing-up of the old kind) to be refreshing and– in a world that tried to make reality look very different from what it was (see: 1950s television)– honest. By 1980, there was a strong current of opinion that inclusive capitalism and corporate paternalism had failed, and elitism became sexy again.

Where was the value in this very ugly (but charismatic) philosophy? I’d say that there are a few things Ayn Rand got completely right, as proven by experience at the forefront of software technology:

  1. Most progress comes from a small set of people. Pareto’s “80/20” is far too generous. It’s more like 80/3. In programming, we call this the “10x” effect, because good programmers are 10 times as effective as average ones (and the top software engineers are 10 times as effective as the merely-good ones like me). Speaking to the specific case of software, it’s pretty clear that 10x is not driven by talent alone. That’s a factor, but a small one. More relevant are work ethic, experience, project/person fit, and team synergies. There isn’t a “10x programmer” gene out there; a number of things come into play. It’s not always the same people who are “10x-ers”, and this “10x” superiority is far from intrinsic to the person, having as much to do with circumstance. That said, there are 10x differences in effectiveness all over the place when at the forefront.
  2. Humanity is plagued by authoritarian mediocrity. If you excel, you become a target. It is not true that the entire rest of humanity will despise you for being exceptionally intelligent, creative, industrious, or effective. In fact, many people will support you. However, there are some (especially in positions of power, who must maintain them) who harbor jealous hatred, and they tend to focus on a small number of people. In authoritarian leftism, they attack those who have economic success. In corporate capitalism, they attack their intellectual superiors.
  3. Social consensus is often driven by the mediocre. The excellent have a tendency to do first and sell later. Left to their own devices, they’d rather build something great and seek forgiveness than try to get permission, which will never come if sought at the front door. The mediocre, on the other hand, generate no new ideas and therefore have never felt that irresistible desire to take that kind of social risk. They quickly learn a different set of skills: how to figure out who’s influential and who’s ignored, what the influential people want, and how to make their own self-serving conceptions (which are never far-fetched, being only designed to advance the proponent, because there is otherwise no idea in them) seem like the objective common consensus.

A bit of context

Ayn Rand’s view of authoritarian leftism was spot-on. Much of that movement’s brutality was rooted in a jealous hatred that we know as militant mediocrity. Its failure to become anything like true communism (or even successful leftism) proved this. Militant mediocrity is blindly leftist when poor and out-of-power and rabidly conservative when rich and established. Of course, in the Soviet case, it never became “rich” so much as it made everyone poor. This enabled it to keep a leftish veneer even as it became reactionary.

Rand’s experiences with toxic leftism were so damaging that when she came to the United States, she continued to advance her philosophy of extreme egoism. This dovetailed with the story of the American social elite. Circa 1960, they felt themselves to be a humiliated set of people. Before 1930, they lived in elaborate mansions and led opulent, sophisticated lifestyles. After the Great Depression, which they caused, they fell into fear and reservation; that is why, to this day, the “old money” rich prefer to live in houses not visible from the road. They remained quite wealthy but, socially, they retreated. They were no longer the darlings at the ball, because there was no ball. It wasn’t until their grandchildren’s generation came forward that they had the audacity to reassert themselves.

While this society’s parasitic elite was in social exile, paternalistic, pay-it-forward capitalism (“Theory Y”) replaced the old, meaner industrial elite, and the existing upper class found themselves increasingly de-fanged as the social distance between them and the rising middle class shrunk. It was around 1980 that they began to fight back with a force that society couldn’t ignore. The failed, impractical Boomer revolutions of the late 1960s were met, about 10 to 15 years later, with a far more effective “yuppie” counterrevolution that won. Randism became its guiding philosophy. And, boy, did it prove to be wrong about many things.

What did Rand get wrong?

Ayn Rand died in 1982, before she was able to see any of her ideas in implementation. Her vision was of the individual capitalist as heroic and excellent. What we got, instead, was these guys.

Ayn Rand interpreted capitalism using a nostalgic view of industrial capitalism, when it was already well into its decline. The alpha-male she imagined running a large industrial operation no longer existed; the frontier had closed, and the easy wins available to risk-seeking but rational egoists (as opposed to social-climbing bureaucrats) had already been taken. The world was in full swing toward corporate capitalism, which has been taking on an increasingly collectivist character for the past forty years.

Corporatism turns out to combine the worst of both capitalism and socialism. Transportation, in 2013, is a perfect microcosm of this. Ticket prices are volatile and fare-setting strategies are clearly exploitative– the worst of capitalism– while service rendered is of the quality you might expect from a disengaged socialist bureaucracy; flying an airplane today is certainly not the experience one would get from a triumphant capitalistic enterprise.

Suburbia also has a “worst of both worlds” flavor, but of a more vicious nature, being more obvious in how it merges two formerly separate patterns of life to benefit one class of people and harm another. By the peak of U.S. suburbanization, almost everyone (rich and poor) lived in a suburb, and this was deemed the essence of middle-class life. Suburbia is well-understood as a combination of urban and rural life– an opportunity for people to hold high-paying urban jobs, but live in more spacious rural settings. What’s missed is that, for the rich, it combines the best of both lifestyles– it gives them social access, but protects them from urban life’s negatives; for the poor, it holds the worst of both– urban crime and violence, rural isolation.

This brings us directly to the true nature of corporate capitalism. It’s not really about “making money”. Old-style industrial capitalism was about the multiplication of resources (conveniently measured in dollar amounts). New-style corporate capitalism is about social relationships (many of those being overtly extortive) and “connections”. It’s about providing the best of two systems– capitalism and socialism– for a well-connected elite. They get the outsized profit opportunities (“performance” bonuses during favorable market trends that should more honestly be appreciated as luck) of capitalism, but the cushy assured favoritism and placement (acq-hires and “entrepreneur-in-residence” gigs) of socialism. Everyone else is stuck with the worst of both systems: a rigged and conformist corporate capitalism that will gladly punish them for failure, but that will retard their successes via its continual demands for social permission.

What’s ultimately fatal to Rand’s ideology– and she did not live long enough to see it play out this way– was the fact that the entrepreneurial alpha males she was so in love with (and who probably never existed, in the form she imagined) never came back. In the 1980s, the world was sold to emasculated, influence-peddling, social-climbing private-sector bureaucrats, and not heroic industrialists. Whoops!

What we now have is a world that claims to be (and is) capitalistic, but is run by the sorts of parasitic, denial-focused, militantly mediocre position-holders that Rand railed against. This establishes her ideology as a failed one, and the elitism-is-cool-again “yuppie” counterrevolution of the 1980s has thus been shown to be just as impractical and vacuous as the 1960s “hippie” movement and the authoritarian leftism of the “Weathermen”. Unfortunately, it was a far more effective– and, thus, more damaging– one, and we’ll probably be spending the next 15 years cleaning up its messes.


A guess at why people hate paying for (certain) things… and a possible solution.


I’ve been thinking a lot about paywalls and why people are so averse to paying for things they use on the Internet. People don’t mind putting quarters into a vending machine to get a snack at 4:00, or handing over a couple of one-dollar bills for coffee, but put a 50-cent charge on an article, and your savvier readers will try to find it for free, while your less savvy ones will just find another distraction. People hate paywalls, and it’s not obvious why. The time people spend trying to get around copyrights in a safe and reliable way is often worth more than the money that would be spent just paying for the content. Economically, it’s hard to make sense of: the time spent reading a news article is worth far more than the fee being asked, so counting the value of that time, the paid version costs perhaps 10-20% more than the “free” one at worst. Why, then, are paywalls so controversial? What’s the issue here?

Personally, I think it goes back to childhood. If you were in a hotel room, you didn’t touch the “Pay Channels” (perhaps as much because of what they were as their price) or you’d be in trouble. You watched the Free Channels only. You could make a few local calls, but long-distance was a no-no (when I was growing up, long distance rates were over 50 cents per minute) except on Sunday nights to relatives. For a child, things that cost what adults would recognize (given the technology of the time) as fair prices were exclusionary at the time, simply because children (and for good reasons) aren’t given a lot of money.

Most of us started using the internet at a time when the symbol $ meant that you couldn’t continue on, or you’d at least have to explain to your parents why you needed the $7/month deluxe version of the game you were playing, because you couldn’t exactly pay in cash. Sure, they’d be happy to take it out of your allowance, but just having your parents know was often too much. They’d often disapprove. “Is that stupid game really worth $7?” On to something else.

Or maybe I’m just personally stingy. It’s not that I object to spending small amounts of money. If I know I’m going to get value out of something, I spend money for it. On the other hand, I have plenty of small irritating recurring payments that I mean to get around to clearing up; with that experience, I’m unlikely to take on another one. It’s not that it’s $15 per month that gets to me; it’s that I’ll be bled for $180 per year until I remember, “oh, yeah, that fucking thing” and go through whatever hoops are involved in canceling my membership.

What I mean to get around to, however, is that we haven’t figured out how to pay for a lot of important services (and plenty of not-so-important ones, too). People have a lot of weird emotions about money, often divorced from the actual amounts. A paywall reminds people of childhood and feels exclusive, even when the amount of money involved is trivial. People also have a very justified dislike of recurring payments, given how unreasonably difficult it can sometimes be to get rid of them.

One thought I’ve had, for the web, is to set up a passive-payment ecosystem. This could apply to blogs, games, and discussion forums in a way that doesn’t mandate the individual content providers ask for money. People set a payment level somewhere in the neighborhood of $0.00 to $1.50 per hour and pay the provider of whatever they are watching or using, on a minute-per-minute basis, as they go. (The benefit of setting a higher passive-pay level is that you are served fewer ads and receive faster communication.) What’s nice about the system is that (a) the payment level is voluntary and intended to be trivial in comparison to the value of the time spent online, and (b) this has the potential to be more lucrative, for content providers, than advertising. Most importantly, though, the decision overload associated with paywalls, tip jars, recurring payments, and all of the other stuff involved in asking people for money goes away; if someone sets his payment rate at 75 cents per hour and spends 15 minutes on a site, then 18.75 cents is automatically sent to the owner of the site.
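To make the arithmetic concrete, here’s a minimal sketch of how such a passive-payment meter might work. This is purely illustrative: the `PassivePaymentMeter` class, the rate bounds, and the site names are my own hypothetical inventions, not any real system’s API.

```python
from collections import defaultdict


class PassivePaymentMeter:
    """Hypothetical sketch of a passive-payment ecosystem: the user sets a
    voluntary hourly rate, and payments accrue to each content provider
    minute-by-minute as the user browses."""

    def __init__(self, hourly_rate_cents: float):
        # The rate is user-chosen, somewhere in the $0.00-$1.50/hour range.
        if not 0 <= hourly_rate_cents <= 150:
            raise ValueError("rate should be between 0 and 150 cents/hour")
        self.rate = hourly_rate_cents
        self.ledger = defaultdict(float)  # site -> cents owed

    def record_visit(self, site: str, minutes: float) -> float:
        """Credit the site for time spent there; returns cents accrued."""
        accrued = self.rate * (minutes / 60.0)
        self.ledger[site] += accrued
        return accrued


# The example from the text: 75 cents/hour, 15 minutes on one site.
meter = PassivePaymentMeter(hourly_rate_cents=75)
meter.record_visit("example-blog.com", 15)
# meter.ledger now owes example-blog.com 18.75 cents
```

The key design point is that the content provider never has to ask: the per-site payout is a pure function of the user’s one-time rate decision and observed time spent, so the decision overload of paywalls and tip jars disappears. (A real system would use exact decimal arithmetic and some settlement threshold, omitted here.)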

Passive payment is an interesting idea. I’m not sure where it’s going, but it’s worth exploring.


Much of “startup” is denial


I’m getting to the point where I think we should add startup to the list of banned words. “High-risk small company”? That’s fine. “Experimental niche endeavor”? Go for it. However, I think it’s time to retire both the word and the baggage associated with “startup”. It implies the existence of a world that isn’t there anymore, and it has been grafted onto an undesirable social arrangement: the gradual loss of status for a generation, and for an important but socially marginal set of people, the most talented technologists. “Startup” used to mean something. It meant, at one time, “what we’re doing requires so much autonomy that we had to spawn our own company.” Now, “startup” is often just an excuse for shitty management, stagnant compensation, unreasonable deadlines, and high turnover, “because we’re a startup.” Or, worse yet, a (shudder) lean startup.

The Golden Age?

I have mixed views about nostalgia in general. It’s a powerful rhetorical appeal to claim that things were better in the past, while people leave out various conditions that make the comparison more balanced (and, usually, favorable to the present). For example, it’s easy to say that college has degraded into an anxiety-fest for young careerists, who mix pre-professional social positioning with empty, forget-the-world debauchery. Sure, that’s true. It’s also easy to compare that to a more genteel time when college students had assured sunny futures, and graduation was a positive event. What’s dishonest about that comparison is the fact that, in “the Golden Age”, very few people had access to college in the first place. It brought access to a better life than it does now, but one had to already be pretty well set to get in.

Generally, when people make nostalgic comparisons, one of two things is happening. The first is that they’re comparing their existences as average peasants to the lives of elite people in the past; that’s obviously an oblique comparison that will produce inaccuracy. The alternative, and less common, possibility is that they’re missing certain conditions of the earlier time that make comparison difficult. For example, Northern California in the 1970s had a lot more in the way of opportunity than it does now; in large part, that’s because the area was less developed, but also because people were less mobile than they are now. There was a time when only executives did national job searches; now, even grunts do, and the result is that the market equalizes: the superiority of opportunity in one geographical area vanishes, just as an arbitrage does. Now, I’m pretty sure that this pattern of behavior (national job searches for mediocre positions) is too unhealthy to last forever. It seems to be an artifact of a long-standing terrible economic situation and a generational crisis. Cross-country moves have too much of a mental health load for people to continue making them without genuine career progress. Either our economy will heal (making that movement unnecessary, reducing real estate prices, and gradually bringing the country back to prosperity) or it will sicken (leading to discouragement, social decline, and widespread disengagement), and either way that pattern won’t live forever.

The most important thing to remember about Silicon Valley’s “Golden Age” (1970s to mid-1990s) is that the region was only a “hotspot” for a certain set of people. Those who couldn’t stand to work in stifling, hierarchical companies and accept compromises on technical excellence, design, and autonomy gravitated toward a part of the country that, at the time, had a very low cost of living and an educated populace, much like the U.S. Midwest today. The East Coast establishment saw them as weirdos and misfits, and this social disapproval was the “missing” factor that nostalgia overlooks. That’s why so few people gravitated toward what turned out to be an amazing opportunity. In 1975, few financiers would have preferred dealing with nerds out in California over stitching together billion-dollar companies. The former was seen as a career sand-trap and a dead end; the latter was what almost everyone wanted. Accent on almost. There were the genuine weirdos (and I mean that positively) out there who could only motivate themselves with the absolute most interesting work. Those were the people who built Silicon Valley– before it devolved into a parody of itself.

A contemporary taxonomy of the StartupFactory

What made startups great, in their Golden Age? And why has this “startup ecosystem” engine ceased producing great things, becoming instead an HR pipeline for large, dying corporations?

The fundamental trait of startups is that they’re small, but that they believe they have the potential to be large businesses, or even “movements”. They often develop a cult-like narrative surrounding how they will transform the world when they are Google-sized entities. Why are these businesses small? What’s the signal sent by their size? Well, it can mean one of four things, so below is the outline for a taxonomy of so-called “startups”:

  1. The company requires very high levels of specialized talent, making personnel growth a limiting factor, or wishes to maintain a high-quality culture (cf. Valve) that is incompatible with rapid growth.
  2. The company is extremely new and has not had an opportunity to prove itself yet.
  3. The company exists in a niche where business opportunities are limited.
  4. The company is poorly capitalized because no corporation would accept the idea, and its backers have little faith in it, forcing it to operate on a shoestring budget relative to its ambitions.

Most of the positive associations with “startup” come from Type-1 small companies: those that haven’t grown rapidly because the work they are doing is of very high quality, requiring selectivity in personnel. These startups generally hire conservatively, but they pay well and retain a maker-focused culture for a very long time. If you want to do the best work of your life, choose a Type-1 startup. The downside is that Type-1 startups can rarely grow faster than 30% per year in terms of personnel. There are no hard limits on their economic (e.g. revenue) growth, but this would usually mean a workload that grows faster than the ability to hire.
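To make that ~30% figure concrete (the ceiling is the essay's estimate; the 20-person starting size below is an invented example), compounding shows how slowly even a healthy Type-1 firm scales:

```python
# Compound the essay's ~30%-per-year headcount ceiling for Type-1 firms.
# The 20-person starting size is an arbitrary illustration.

def headcount_after(initial: float, annual_growth: float, years: int) -> float:
    return initial * (1 + annual_growth) ** years

print(round(headcount_after(20, 0.30, 5)))   # ~74 people after five years
print(round(headcount_after(20, 0.30, 10)))  # ~276 after ten
```

A decade at the cap turns 20 people into fewer than 300, which is why such a firm's economic growth has to come from revenue per head rather than hiring.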

Next are the Type-2 small companies, which are the genuine prospects. Unlike a Type-1 startup, the work isn’t off-the-charts interesting. In fact, most of it will be mundane and painful. Also, a Type-2 startup hasn’t yet been seen by investors, so it usually entails living off of savings. However, the rewards of success are very high. A Type-1 startup will usually evolve into a lifestyle business; a Type-2 startup might end up going public.

Type-2 startups will generally evolve into one of a few states after engaging with the market. Some fail; some turn into Type-1 “boutique” firms that trade on the high average quality (at a level impossible for a large enterprise to maintain) of their people. More often, investors turn them into Type-4 startups. I’ll get to those later on.

Type-3 startups are generally not considered to be startups at all. Investors often deride them as “lifestyle businesses”. It might be worth working for one for the learning opportunities, or if there is a chance of converting it into a Type-1 business; but in a typical subordinate position, use it for recovery if the pace is relaxed, and jump ship if it’s not (don’t work for a “lifestyle business” that expects you to pull all-nighters).

The Type-4 small business is what most so-called “startups” in the VC-funded world are. These are unproven businesses whose ambitions exceed their resources, which means they have to operate on a shoestring budget. They’re constantly working to tight deadlines, asking investors for more money, and piling on technical debt while investing nothing in the career growth of their personnel (except executives, who are setting themselves up to be founders and investors in the next gig).

It might seem like a stretch to call an unproven Type-4 business that has raised $50 million “undercapitalized”, or to say it’s on a shoestring budget when it’s paying software engineers 80 percent of what they’d earn on the market, so let me explain what I mean by that. The problem is that, in order to raise such an amount of money, the company has to make promises to investors that would be stretches even with $500 million on hand. Venture capitalists have created a world in which, for all positive values of X, a company cannot raise $X without making promises that leave it disastrously undercapitalized relative to those ambitions, forcing the company into a state of rich poverty. Of course, most of these businesses don’t deserve to exist at all, so my statement that they’re “undercapitalized” is not a normative one.

What kind to work for?

Each of the four types of startups has positives and negatives, so let me get into those for each in assessing the personal decision of whether to work for a small company (“startup”).

Type-1 startups are very selective in hiring, and usually can only afford to hire when there’s a very high degree of fit. You won’t get a job at one of those companies as a generalist or an unproven software engineer, so they’re better places to look once you’ve established your career, and not in the building years. I wish it were otherwise, but the truth is that type-1 startups usually can’t afford (in the way that large corporations can) to hire generalist talent and train it in-house. So the biggest drawback to the type-1 startups is the difficulty of getting in. Even when such a company likes you, you can expect to wait months on a client deal or a change in the business environment before it can take you on.

Type-2 startups pose a different and severe risk. They can’t pay you. You’ll work for nothing and, if the company fails, you’ll have a meaningless name on your resume and your credibility will be lower than it was when you started. If you’re not a founder, don’t get involved. If you are a founder, make sure you can afford to lose whatever capital you’ve set aside for the effort.

Type-3 small companies don’t have the rapid growth potential that is typically associated with “startup” and, because of this, they’re rarely going to make a person rich. The partners might draw enough passive income to maintain an upper-middle-class lifestyle, but rising from the non-partner ranks to partnership is very rare, and even salary growth will be modest. The negative of these companies is that you tend to hit a ceiling pretty quickly, because the company can’t expand enough to make room for everyone.

So, between types 1, 2, and 3 of “startup”, I’d argue that there’s no reliable way to make a career out of these. Type-1 startups require specialized skills that might not be in demand 10 years from now. Type-2 startups munch your savings, unless they succeed with so much fanfare that you never have to work again, and you’re beating away invitations to venture capital funds. Type-3 startups, unless they have the prestige to make your career, are not an avenue toward growth; you’re best off getting in, learning what you can, and moving up-and-out, because “up” doesn’t happen much in those.

What about Type-4 startups, the ecosystem I’ve called “VC-istan”? Well, it is possible to make a career (if a declining one, with age) out of those. You can hop from one to another as you wish. That’s the good news. Why? Because they’re always hiring– even when stagnant, they have turnover– and not especially selective. There will always be jobs at the Type-4 “startups”. The bad news? They’ll ruin your career, and they have all the negatives of “startup” (poor management, division-of-labor issues, low job security) but none of the benefits. Type-4 startups aren’t small because they’re elite (Type 1) or brand-new (Type 2) or in niche businesses (Type 3). They’re small and “scrappy” (read: poor) because they’re shitty companies working on things no one wants.

Denial

If these Type-4 startups are so awful, then why do so many good people end up working for them? The answer is that the U.S. technology economy is already in an undesirable state and, without those crappy companies, it could be a lot worse. There just wouldn’t be enough jobs without them. They don’t do much good for the world or their employees (who often fail to get rich even in the off chance that their startups succeed) aside from one thing: they prop up salaries for talent in a country that might otherwise be (and I emphasize might, because no one really knows) in another Great Depression. If people stopped pumping capital into bad ideas, would they start funding people with good ones? That’s a possibility, but I wouldn’t bet the world on it. It’s equally feasible to believe that they’d just stop funding new businesses, and the economy would fall to pieces.

The “startup scene” is mostly one of denial. Corporate employment has become undesirable for a variety of cultural reasons, largely rooted in one issue: low-level employees (even some with impressive titles and seniority) don’t get room to experiment anymore. The “intrapreneur” is dead, and basic research funding has been in calamitous decline for quite some time. The “startup scene” is being filled by various classes of people for whom the corporate world no longer affords a place, and who therefore have nowhere else to go. Some of them are the good kind of unemployable people (eccentric geniuses) and some are the bad kind (arrogant, incapable product managers who become “founders”). Many people are in the Type-4 “startup scene” because they have no other options.

As corporate jobs disappear and the industrial edifice that built Middle Class America starts to break off and fall into the ocean, there are a lot of people left wondering what to do with themselves. This could be a really good thing for society. After all, the reason those middle-class jobs are disappearing is that technology has automated most of that work. However, there’s the money problem. If people don’t have income, their lives go to shit rapidly. That’s how it seems to have worked for most of history, and this time period isn’t an exception. What usually happens when people are scared (as they are now, thanks to their enormous psychological and industrial investment in a declining regime) is that those who can offer “protection” set themselves up as the new nobility. In the Type-4 startup world, those are the people who can raise funding (enough to pay salaries) for terrible ideas, on account of their connections. That’s where we are, and that’s what we’re facing.

The fundamental problem, however, is that the self-referential hokum generated by the Type-4 startups isn’t anything people really want. The only people it’s useful to are other VC-istan social climbers. Because of its self-referential nature, it’s not going to last. It doesn’t have much of a future.

It’s time to talk, instead, about what comes next.


Wrong places, wrong times, decline, prestige, and what it all might mean.


Here’s a deceptively simple question: why would a person be at the wrong place in the wrong time?

People use this sort of description about places and times to describe “luck” in the business world. Someone’s success is written off as, “he was just in the right place at the right time”. I’m starting to doubt that such success can really be ascribed to a lack of merit. Some people know where “right places” are, and some people don’t. It’s not pure luck. There is skill in it; it’s just a very difficult skill to measure or even detect.

I’ve had to contend with this myself. I’m almost 30, and I was one of those people about whom, when I was younger, everyone said that I’d either be successful or dead by 30. Well, I didn’t get either. Breakout success was the gold, noble death was the silver, and I’m stuck with the bronze. Well, that’s depressing. Something I realized recently when looking over my career choices is that I made a lot of decisions that would have seemed good from a timing-independent perspective, but that I often made the worst choice for the given time, almost as if it were a habit. So I have to ask myself, because I’m too old to pretend these things aren’t showing a pattern: why would I be in the wrong place at the wrong time?

I’m pretty sure this is a common issue for people. Timing is just very important. Living in Detroit in 1965 is dramatically different from living there in 1980. However, in society, we tend to evaluate people’s choices morally. People who are successful made good choices (and vice versa). The problem is that we also view morality in absolute terms, and the quality of choices (especially economic ones) is extremely time-dependent. That often leads us to draw inaccurate conclusions, not only about ourselves but also about the decisions we must make. Ignoring timing is an error that’s often catastrophic.

For example, choosing to work at Google is one of the biggest mistakes I’ve made; but it would have been a great choice in another time. I’ve wasted a lot of emotional energy being angry at my (truly awful) manager when I was at Google. However, bad managers are a fact of life, and people survive them. What really went wrong is that I joined Google in May 2011. If you join a company while it’s great and have a bad manager, there’s a way to move around that problem. You’re in a company that wants to succeed and will make a way for you to contribute something great. If you join a company in decline, however, you’re stuck. The firm’s demand for greatness is minimal; now it wants stability, and the game is about using social polish to compete for dwindling visibility and opportunity. In technology, closed allocation is the surest sign of a declining firm. In truth, I shouldn’t be angry at my ex-boss for being awful (some are) or at Google for declining (no company wishes to), but at myself for picking that company while it was in decline. That’s on me. No one forced me to do that.

What attracts people like me to decline? I think there are four explanatory causes for why people tend to put themselves in formerly-right places at wrong times.

1. Prevailing decline

This one’s not our generation’s fault. Most of American society is in decline. Perhaps one wouldn’t know it from the Silicon Valley buzz, which trumpets successes while hiding failures. Plenty of people say things like, “Why should I care about Flint, Michigan when software engineer salaries keep rising? There will always be jobs for us.” Doing what, pray tell? We’re much more interdependent than people like to believe, so this attitude infuriates me. Decay often ends up hurting everyone. Rural poverty in the 1920s turned into the 1930s Great Depression. Poverty isn’t wayward people getting bitter medicine; it’s a cancer that shuts a society down.

It’s easy to end up picking a string of declining companies when there’s so much decline to go around. That’s a big part of why it’s so hard for our generation to get established. That said, this is the least useful place to focus because no one reading this can do anything about the problem, at least not individually.

2. Nostalgia

People are more prone to nostalgia than they like to admit. This leads people to attempt to replicate former successes and sprints of progress that are no longer available. Businesses change. A person who goes into investment banking based on the movie Wall Street is going to have a rude awakening, because the Gordon Gekkos aren’t taking 24-year-old proteges under their wing, but trying to protect their own asses in a harsher regulatory climate. The same is true of Silicon Valley. Is there money to be made there? Of course there is, but the easy wins of the 1990s are gone. It’s no longer enough to be “in the scene”, and people who are just getting established will probably not find themselves eligible for the best opportunities until those are gone, leaving scraps.

What’s unusual in the case of suboptimal career moves is that it’s often oblique nostalgia. People aren’t trying to relive their own good times, but to get in on a previous generation’s golden age (when that generation, having long ago recognized the closed opportunity window, has mostly left). This can actually be one of the more effective ways to play, for reasons explained in the next item.

3. Risk aversion and prestige. 

Most things that are “prestigious” are actually in decline. For example, most of the smartest undergraduates attempt graduate school. It’s what you do to show that you’re not one of those pre-professional idiots. Yet academia has been in brutal decline for almost 30 years! Those tenured professorships are not coming back. Wall Street and VC-istan also carry a lot of prestige, yet are very scarce in opportunities for those who aren’t already established. Prestige, alas, matters. If you were an analyst at Goldman Sachs, every VC will go out of his way to fund you, even though analyst programs have very little value (except as proof that a person can survive punishing hours) at this point.

If you look at the opportunities for new entrants, Wall Street, Google, and VC-istan are all quite dry. There are plenty of people getting rich, but it takes years to position oneself and the opportunity will probably have moved elsewhere by the time one is able to take advantage. However, prestige offers a benefit. No one can predict where the real prize (true opportunity) will show up next, but prestigious employers help a person find some kind of position– a consolation prize– in whatever comes next by offering social proof. They provide the validation that a person was smart enough. If you got into Google or Goldman Sachs in 2007, you’re probably not rich, but you’ve proven that you’re good enough to be rich; i.e. you’re not a total loser. People will often tolerate these second-place finishes while they build up the credibility to be in the running when real opportunities come about. But is that a good strategy? I’m not so sure that it is, anymore. If you join a declining technology company, you’ll face closed-allocation and stack-ranking and bland projects, which will hurt your career. Prestige is important, but so is quality.

4. Saprophytism

Some people, but very few, have a knack for turning decay into opportunity. When they see decline, they turn it into profit. This is not a socially acceptable behavior, but it exists and for some people it works. Is it common? I have no idea. For a worker, it’s very hard to pull off. Since the worst fruits of organizational decay always fall onto the least established (“shit flows downhill”) it’s unclear how a low-level employee would be able to reliably turn decay into a win. I’m sure some people have that talent and motivation, but I’m not one of them.

How would a person profit from the decay of Corporate America? Some people answer, “A startup!”, but that’s a really bad response. VC-istan is Corporate America with better marketing, and non VC-funded startups are reliant on clients which means they’re still dependent on this decaying ecosystem. No one wins when there is so much loss to go around. I’m sure there are some financial plays (informed short-selling) that would work, but I can’t think of a great career play for a young person trying to get established. Perhaps it’s a great time to be in the so-called “tech press”; they seem to enjoy themselves when things go to hell.

Concluding thoughts

The above are small explanatory features of the problem. Why do so many intelligent people consistently put themselves in wrong-place, wrong-time situations that inhibit success? What’s the systematic problem? I think that risk aversion and prestige are major components, worthy of further study, except for the fact that we all cognitively know this already. We know that reputation is a lagging indicator, of low value in a dynamic world, but we cling to it because other people do and because we rarely have better ideas.

I don’t think that individual nostalgia is a major player, but the collective form of it is prestige, and that often creates bizarre inconsistencies. For example, the prestige of the Ivy League has nothing to do with the adderall-fueled teenagers applying to twenty colleges and test-prepping at the expense of a normal or healthy adolescence– decidedly unprestigious behavior by people who are (but very slowly) eroding the prestige of those institutions– but, rather, that prestige exists because of things that happened long before those kids were even born. The prestige of those places comes out of a time when the psychotic ratrace around admissions didn’t exist, but those colleges were accessible only to a well-connected elite, because what our society really values is legacy and wealth, not talent or striving.

Prevailing decline makes this whole game harder, of course. What I think is really at the heart of it is that it’s hard to impossible to predict the future. People go to places of past excellence for the association (prestige) in order to take advantage of the halo effect and gain social superiority in the beauty contests necessary to win at organizational life. It works, because of the nostalgia held by most people now in power, but not well enough to counteract the overall tenor of decay. Even the people who succeed find themselves bitterly unhappy, because they compare what they get out of their path to what those who traveled before them got; it turns out that a Harvard degree in 2013 is still powerful, but not the golden ticket that it was in 1970, and that joining Google now is not the same thing as joining it in 2001.

So where is the future? I don’t know. If I did, I’d be somewhere other than where I am right now. Alan Kay said, “The best way to predict the future is to invent it.” I like this sentiment, but I’m not sure how practical it is. None of us who need to know where the future is have the resources to do that. Those resources (wealth, connections, power) all live with citizens of the past. It’s this fact that keeps drawing generations, one after another like crashing waves, toward the false light of prestige: the hope of getting some scrap metal out of decaying edifices so as to have the right materials when the opportunity comes to make something new. But where (and when) is the time for building?


Gervais / MacLeod 24: Fundamental Theorem of Employment


In analyzing the economics and sociology of office-style Work, an inefficient set of institutional patterns that affects hundreds of millions of people, I’ve often had to ask the question, “Why are so many jobs so bad?” Plenty of positions are inaccurately or dishonestly advertised, many shouldn’t exist at all, and job openings that should exist often don’t. What’s going on with all this? And how should an individual person choose jobs, in light of the inefficient market? I’ve come to a conclusion that, despite the complexity of these issues, is refreshingly simple and, while failing to capture all cases, surprisingly powerful and appropriate to the vast majority of jobs. I might call it the Fundamental Theorem of Employment (FTOE).

A person is hired to do work that the hiring person (a) cannot do for himself, or (b) does not want to do.

Corollary: It is extremely important to know which of the two is the case.

These are, in general, two different cases. A person hires a maid to do undesirable work of which most people are capable, while he hires a doctor to do work that he can’t do for himself. It’s essential for each person to know which of the two cases applies to his or her job. Most jobs can be clearly delineated as one or the other. We’ll call the first category of jobs– a person is hired to bring expertise, skill, or capacity that the hiring manager does not have– “Type 1″; and the “boss doesn’t want to do” jobs, “Type 2″.

In a Type 1 job, you have leverage and you get respect because you’re delivering labor that the manager (a) does not have the ability to render himself, and (b) much more importantly, cannot accurately evaluate. Your boss is forced to trust you. Often, he will trust you just to reduce his own cognitive dissonance. In a Type 2 job, you rarely get any respect; you’re just there to do the worst of the work. You’re not trusted very far, and your manager thinks he can do your job just as well and twice as fast. In the career game, getting stuck in the Type 2 world is a losing proposition.

That seems simple enough, and the advice derived from it is fairly traditional. Build skills. Develop expertise. Become a “unicorn” (a person whose combination of skills makes her unusually rare and confers leverage). Get Type 1 jobs. The real world, of course, isn’t quite so simple; and it might be hard for an individual to tell which of the two possibilities applies to her job. I’m here to tackle some of the more complex cases that pop up in reality, and analyze which dynamic of behavior is more accurate to each.

Below are some cases that don’t necessarily fall into a clear Type 1 vs. Type 2 delineation, and require further analysis.

Excess capacity. Most large companies don’t hire for a specific role, so much as they increase (or decrease) their total headcount based on business needs, cash flow, and economic projections. Companies don’t hire specifically for Type 1 or Type 2 work; they’re concerned with the economics, not sociology. Most people, in truth, are hired into firms to serve as “excess capacity”; that is, hired into a general-purpose labor pool so there is some slack in the schedule and there are internal candidates for vacancies. Whether a person ends up in Type 1 or Type 2 work isn’t driven by some abstract “general will” of the firm but by the needs of specific managers where that person lands. Unfortunately, this often puts a person into Type 2 work by default.

Depending on the company, the manager of the new employee’s team might not have had any input into the hiring of that person. Sometimes, the company just says, “here are some guys”, and that tends to result in a lot of undesirable work being offloaded onto them. Or, that person may have been hired for a position that was shortly after filled internally, or made redundant, leading to a need to make work for the new hire. The point of all this is that if you can’t identify (and preferably quickly) some X for which (a) a manager needs X, (b) the manager knows he needs X, and (c) you’re very good at X, you just become a fresh hire looking for something to do.

Simply being “excess capacity” isn’t necessarily bad. If there’s honesty about the fact, then management can set an appropriate arrangement: “You can work on whatever you want most of the time, but when you’re needed, you’re expected to be available.” Then, a person has the time and allowance to seek Type 1 work where he or she will add more value. Some companies explicitly set aside time for self-directed work (e.g. 20% time) in acknowledgment of the need for slack in the schedule. Others do not, and fall into a Type-2-driven default pattern of rippling delegation.

In large companies, people are hired for macroeconomic reasons that don’t conform to the Type 1 vs. 2 delineation explicitly, leaving the question unanswered: does the employee become a respected advisor whose expertise confers a certain automatic credibility, or a grunt to which the worst work is delegated?

Automation

Especially relevant to technical work is the role of automation. If work is undesirable, someone will try to “kill” it by programming a computer to do it faster and more reliably than a human. For many business processes, this is easy. For some, it’s quite hard. For example, it took years of research into machine learning before computers could accurately read hand-written addresses. At any rate, computers turn out to be perfect repositories for the worst of the Type 2 work that no one wants to do. They do it without complaint, and much faster. They’re cheap, as well. This is winning for everyone.

Computer programming has its own weird interaction with the FTOE. Business problems were traditionally solved with lots of low-paid and ill-respected manpower, so corporate growth mostly came down to the delegation of Type-2 labor as the beast grew. However, the magic of software engineering is that a small bit of more challenging, more fun work (automating painful processes so that the task is complete forever before the novelty of the new job wears off) can replace a larger amount of bland, tedious work. Most of business growth is about Type-2 hiring: bringing in more people to do the work that the bosses don’t want to do. A competent software engineer can take on the Type-1 task of automating all that junk work– if management trusts her to do so.

Management doesn’t, in general, care how the mountain of traditionally undesirable work is done. If it’s done well by ten bored humans who occasionally quit or fail but are easy enough to replace, that’s the familiar “devil you know”. If someone else can come along and perform the much more enjoyable task of automating that work for good, that’s better because it saves a lot of money and pain. Sort-of. There’s a problem here, and it’s one that every software engineer and software manager must understand.

The relationship between software engineers and management is fraught with conflict. There are few industries with more tribal dislike between workers and management than software, and the problem isn’t the people so much as the interaction of incentives and risks. Software itself (like any industry) generates a lot of undesirable (Type 2) work; but in software, there’s almost always a way of automating the bland work away– a hard, Type 1, sort of job. The danger is that the automation of undesirable work might take more time than simply completing it, while the engineer’s impulse (which is almost irresistible) is to automate immediately and without regard to cost.

This provides two very different paths to completion: one that is low in variability but boring, the other more fruitful but riskier. What goes wrong? Without diverging into another subtopic, management participates more fully in an employee’s downside than upside risks– if the engineer does great work, it reflects on that engineer; but if the engineer fails expensively, it reflects on the management– so managers tend to favor low-risk strategies for that reason alone. It’s not that software engineers or managers are bad people; the risks are just improperly aligned.
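The asymmetry can be framed as a simple expected-value comparison. The sketch below is illustrative only– every number in it is an assumption I’ve invented– but it shows why the two parties disagree: on expectation the automation path is cheaper, yet it carries the tail risk (an expensive failure) that lands on the manager.

```python
# Illustrative comparison of the two paths. All figures are hypothetical.

def manual_cost(hours_per_run, runs):
    """The boring, low-variance path: grind the task out every time."""
    return hours_per_run * runs

def automation_cost(build_hours, failure_prob, rework_hours):
    """The risky path: expected cost is the build effort plus the
    chance the automation fails and must be reworked."""
    return build_hours + failure_prob * rework_hours

manual = manual_cost(hours_per_run=4, runs=50)        # 200 hours of drudgery
automated = automation_cost(build_hours=60,
                            failure_prob=0.3,
                            rework_hours=80)          # 84.0 expected hours
print(manual, automated)
```

The engineer sees 84 expected hours beating 200 and wants to automate; the manager sees a 30% chance of a visible, attributable failure and prefers the devil he knows. Both are responding rationally to how risk is assigned to them.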

Solving this problem– aligning incentives and structuring companies to take advantage of opportunities for automation, which almost always improve the firm’s success in the long term– would require another essay.

Defensive rejection

Above, I’ve proposed that people hire others to do work in one of two cases: undesirable work, and work that the person doing the hiring can’t perform. There isn’t always such a clean-cut distinction. Most people don’t have the humility to recognize their limitations, and so they tend to overestimate their ability to perform work that they know little about. The extreme case of this is defensive rejection, in which a person denigrates a class of work as being menial, unimportant, or trivial to compensate for a lack of knowledge about it.

Many software engineers are going to recognize that the attitude of “the business” toward their work is often a case of defensive rejection, and that’s right. But we, as a group, are far from innocent on that front. We tend to take the same attitude toward marketing and business people. The truth is that the good ones are highly capable in ways that most of us are not; most of us just lack the basic competence to separate the good ones from the bad. When one lacks visibility into a field of work, one tends to associate all people who do it with the average competence of the group, which usually leads to an unflattering stereotype for any high position (because most people in it are, in fact, unqualified to hold it). That leads to the incorrect conclusion (also seen with politicians, of whom the average performance is poor) that “none of them are any good”. 

When defensive rejection is in play, the underlying truth is that the manager is hiring in Type 1; the employee is brought on to do work that the manager can’t do for himself. Unfortunately, the manager’s insecurity and hubris generate a Type-2 context of “I could do that stuff if I wanted to”. The subtext becomes that the work is bland, detail-oriented dreck that the manager is too important to learn. This is the most frustrating type of job to be in: one where the boss thinks he can do your job but actually can’t. It means dealing with unreasonable expectations despite having low status and low perceived value, both to him and to the company as a whole. That’s horrible, but it’s also freakishly common, and it leaves the engineer feeling set up to fail– asked to do impossible things, then treated poorly when the inevitable failure occurs.

Apprentice systems

There’s one other scenario that doesn’t fit nicely into the Type 1 vs. 2 delineation: the apprentice (or protege) context. At first thought, apprenticeship might seem to be strictly Type 2, since most of the work that apprentices spend their time on is make-work that has ceased being interesting to superior craftsmen. However, apprentices bring a Type-1 function by being able to do one thing the master cannot: perpetuate the work (and, more importantly, the upkeep of a valued tradition or institution) through time. If you’re sixty years old, a twenty-year-old apprentice can continue the work forty years (on average) longer than you can.

Modern private-sector corporations don’t have much use for apprentice structures and guild cultures, because they no longer see that far into the future. No CEO gets job security by setting up a culture of mentorship that might yield excellence ten years down the road. In this next-quarter culture, apprentice systems have mostly been thrown overboard. Long-term vision is far out of style for most modern corporations.

That said, there’s value in understanding this old-style system. Why? Because even managers are uncomfortable with the naked parasitism of Type-2 employment (e.g. “I’m just hiring you to do the crap I don’t want to do, while I fill my time with the career-building and fun work”) and often attempt to recast the role as an apprenticeship opportunity. That is, at least, how every subordinate job is presented: an opportunity to learn the skills necessary to get to the next step. There are varying degrees of earnestness in this– some managers truly see their reports as proteges, while others see them as mere subordinates.

In negotiation theory, this is sometimes called a standard: a promise that is understood not to be fully delivered (most people realize that most bosses just see their reports as repositories for undesirable work, and that the apprentice metaphor is mostly rhetorical) but that may still be cited in policy to get an arrangement more in accord with that standard than one might otherwise get (“appealing to the standard”). Even people in power are uncomfortable explicitly departing (“breaking the standard”) from something previously promised.

If you want to move from Type-2 to Type-1 employment (and, believe me, you should) then the first thing you have to do is get qualified for that kind of work; the best way to make sure your boss gives you appropriate work (to gain that qualification and validation) is to continually appeal to the standard of the master/apprentice relationship– and hope that your manager doesn’t have the audacity to break the standard.

Why is FTOE important?

It’s important to understand the Fundamental Theorem of Employment (and, being trained as a mathematician, I know it’s not actually a theorem so much as an observation), above, because people tend to discuss conceptions of “the job market” as if they were forces of nature. They’re not. A job exists because someone needs or wants another person to perform work, and the expense of hiring generally means that one of two cases applies: the person doesn’t want to do that work, or the person can’t do it. Regardless of the work itself, the social contexts that arise from those two subcategories could not be more different. It’s very important to know which one applies.

The advice that comes out of this is to find a way to qualify oneself for Type 1 work. That’s harder than it looks. Becoming good at highly-skilled work is the first half of the battle, but there’s a social component that can’t be ignored. Software engineering is a prime example of that. The whole point of the bastardization of “object oriented programming” (which, by the way, has become the exact opposite of Alan Kay’s vision of it) that has grown up in the enterprise is to coerce software engineering into Type 2 commodity work. Having generated scads of low-quality, brittle code, it can only be called a failure. Yet that mentality persists in the world of corporate software engineering, and it will be a while before the business starts to recognize software as Type 1 work.

While one is progressing through the validation process– which is more drawn-out than building the skill set itself– I think there are two key strategic necessities. The first, again, is to appeal to the standard (as above) and re-cast any Type-2 social context in employment as a mentor/protege role. The second, and more important, is to always drive toward a Type-1 context. The question should be asked: “What am I here to deliver that no one else can?”
