
Techxit (Part 1 of 2)


(For Part 2, go here.)

Nazis are bad. This is going to be a plot point, much later in this essay, so if you weren’t aware of the fact, write it down: Nazis are bad.

Chapter 1: A Kind of Reckoning

Some stories start with mistakes. This one does. In the summer of 2008, I left a lucrative career in finance to join a technology startup.

At the time I did this, I believed strongly in technological capitalism. I figured we were 20–40 years away from a post-scarcity society in which to be “poor” meant sitting on a two-week waiting list to go to the Moon. We, the programmers who implement human progress, were the good guys.

Our record shows that we’re not. We created fake news. The companies we create–– and, because our purpose is to unemploy people, those are often profitable and draw attention–– have juvenile, toxic cultures. We’ve normalized witch hunts over trivialities, and people lose jobs over jokes about devices called dongles. We’ve built this so-called “new economy” in which recessions destroy workers’ finances and careers, but recoveries are jobless.

Our major contribution to the world, as private-sector programmers, is to push the balance of power between takers (capital) and makers (labor) in the wrong direction.

We have built an empire of garbage. It has not been pleasant for me, in my 30s, to come to the realization that I have unwittingly chosen a career path in opposition to the welfare of society.

What I plan to do with my life, that’s for another day. I’d like to have Farisa’s Crossing ready for publication in early 2021. The project’s been a lot of fun, a lot of work, and I can’t wait to have a finished product. I should be honest about its prospects, though. It’s a very high-potential book, but some of the best writers I know (people who will be remembered, I am sure of it, in 100 years) are still unable to subsist on book sales. So, I have kept my mathematical and computational skills sharp. I have no intention to abandon those. I enjoy programming quite a lot, and I’m still good at it, so long as I’m working on a real project rather than Jira tickets.

The software industry itself? I’ll be honest. I’d rather die of coronavirus than work in another company where “Agile” is taken seriously. It’s not that I imagine COVID-19 to be a lot of fun; but at least I’d only have to do it once, not every two weeks.

I have written about 250,000 lines of code in my career in at least 20 different programming languages, and in spite of all this, the sum contribution of my work to society comes out in the red. It doesn’t matter what technology can do. It matters what it does. We need to stop fantasizing about our 200-line open-source monad tutorials somehow advancing the state of human knowledge enough to cancel out the harm done by the WMUs (weapons of mass unemployment) we build at our day jobs.

Over the past 30 years, the balance of power in our society has shifted toward capital and away from labor, toward employers and away from workers. We can’t blame all of this on politics; someone taught the machines how to run the dystopia. This means: we’re the bad guys.

Chapter 2: Understanding Automation

We have been here before. Ill-managed prosperity caused the Great Depression, and it caused the rise of fascism.

In the first two decades of the 20th century, we became far better than we’d ever been at making food (nitrogen fixation). A boon, right? What could go wrong? Capitalism does not handle boons well. In 1920s North America, the pattern unfolded like this: prices for agricultural commodities declined, bankrupting farmers and communities that served them, leading to cascading rural poverty, which eventually reduced demand for the products of industry, and finally this became known as “the Great Depression” when it tanked the stock market (October 1929) and thereby hit rich people in the cities.

What happened to farming, to agricultural goods, in the 1920s… is happening again to all human labor.

The global rich prosper. Everyone else suffers under malevolent mismanagement and a concentration of power that would not be possible without the tools that we, as programmers, have built.

Not all markets have legible, objective moral states. I do not think it is of great ethical importance whether a tube of toothpaste costs $3.49 or $3.59–– it seems that supply and demand can be trusted to figure that out. If God exists, she likely has no opinion on what should be the price of palladium or platinum. We are not entitled by divine fiat to a $47 price on a barrel of combustible hydrocarbons that, in any case, we ought to be using less of. Markets determine exchange rates of various assets–– how much of one thing is worth how much of another–– and most of these exchange rates do not carry primary moral weight. But one does.

The exchange rate between capital and labor–– between property and talent, between past and future, between the whims of the dead and the needs of the living–– has a clear, objective, morally favored direction. That’s the one, if God exists and has a will aligned with the health of human culture, that matters.

As I’ve said, quite a number of new technologies push the balance of power to favor employers, not workers. This is objectively evil work.

When I look at how life has changed in the past 20 years, I don’t think the smartphone is more than an incremental improvement, and I’m not impressed by the eggplant emoji or the $1,500 embarrassment that was Google Glass. What most people have experienced is a growing sense of being squeezed, and it’s not just a feeling. The major contribution of private-sector technology to daily life has been a slew of surveillance tools sold to health insurers, authoritarian governments, and employers.

There is an open hypocrisy at play in the workplace. A worker in constant search for better options will be disliked–– he’s “not a team player”. That seems fair. No one likes someone who’s only out for himself. Yet, companies expend a considerable share of resources to figure out which workers can be replaced and how quickly. There are people in our society who collect a salary by finding ways to take salaries from others in the companies where they work. Doubly weird is the expectation, within the so-called corporate “family”, that we treat these people as teammates rather than adversaries.

A worker who changes jobs as soon as a better offer comes along is a “job hopper”. He’ll get bad references, and rumors will spread that he failed or was fired. Yet, our employers spend a significant fraction of their funds (wealth we generated) looking at us from every angle to see whether we can be replaced.

Social media has played a central role in this dystopia. We now live in a world where one needs a public reputation–– an asset that 99.9 percent of people should not want, because reputation is an asset easily destroyed by some of the world’s worst people–– to get a job. Gone are the days when anyone able to speak in complete sentences could call up a CEO and talk his way into a high-paying position. In today’s world, it’s impossible for workers to reinvent themselves–– every detail can be checked, and people who opt out (who don’t have “a Facebook” or “a LinkedIn”) are assumed to have something to hide.

Social media promises a path to influence, but for employers its purpose is to ratify the lack of influence that most people have. In the old world, a terminated employee got three months of severance and glowing references, because a boss never knew if he was letting go someone who had powerful friends and could bring the pain back. In the new world, an employer can look up a target’s Twitter feed, see a lack of blue-check followers, and confidently presume that person to be in the 99 percent of people who can safely be treated like garbage.

Chapter 3: Tech–– Not Even Once

I mentioned before that I left a lucrative career in finance, in 2008, to join “the tech industry”. This was, financially, a seven-figure mistake. Possibly eight. It was the stupidest decision I ever made, and I assure you there’s a lot of competition for that distinction.

Private sector technology (“tech”) is not a career. There is no stability in it. You are only as good as your last job; your job is only as good as your last “sprint”. Unless you become a successful founder, you will not be respected. You’re a thousand times more likely to end up like me–– 36 years old with no clear path to where I want to be–– than even to become a modest millionaire.

You might think, like I did, that you’re going to beat the odds because you’re smarter than the average hoser. Not so. Compared to the people in charge of this industry, I’m a black swan seven-wingèd eidolon of merit. It does not fucking matter, how smart you are.

Your IQ doesn’t matter because you’re not going to be using machine learning to cure cancer. You’re going to be working on Jira tickets to build a product that corporate executives will use to unemploy fellow proletarians. Any idiot can do that kind of work. Furthermore, at a salary higher than idiots can get elsewhere, many idiots will try. Unless you are 21 and have no obligations, quite a few of those idiots will be able to work longer hours than you.

Private-sector technology is not “meritocracy”. It’s a fart in a cave that has not ceased to echo.

I’ve had the whole spectrum of tech-industry experiences. I’ve worked at companies that have failed. I’ve also worked at companies that succeeded, whose founders went on to fail those who got ’em there. At a “Big 4”, I worked for a manager with an 8-year track record of using phony performance issues to tease out people’s personal health issues, which he would blab about to colleagues. (I was told that he was fired for this, but after a five-year absence, he returned to that company.) As a middle manager, I sat and listened as two executives threatened physical violence on someone who reported to me (someone who was, in truth, quite good at his job) because of unavoidable delays on a project. One of my favorite people (of note, a black female) was harassed out of a company–– her manager, a personal friend of the CEO, was not fired, and went on to be a VP in his next job. I’ve seen tech companies offer the same leadership role–– title and responsibilities the same–– to multiple new hires with the intention of their fighting for the job they were promised. In March 2012, I was fired for refusing to commit a felony that would have cost its victims hundreds of thousands of dollars. In the mid- and late 2010s, I got death threats related to this blog–– and (as a public leftist) my name became known to some scary far-right fuckers–– a topic I’ll cover at length in Part 2.

All of this, and for what? Nothing.

Yes, I know how to program. I have taste and I have the (rare, apparently) skill of knowing how to do it right. I can talk a great game about functional programming, artificial intelligence, and programming language design. I have a solid understanding of what the various abstraction layers (e.g., operating system) are doing. Here’s the problem. My peers, in the middling years of legitimate careers, are able to buy houses and start families. They’re in a position to move about the economy as, at the least, the upper middle class. Me? I’m stuck in a trade where even people with “senior” and “principal” in their title have to interview daily for their own jobs. What a fucking joke.

I left Wall Street, and joined this career, because I bought into the Paul Graham Lie: that if you join a startup and it fails, it won’t hurt you, because you’ll be respected for being “entrepreneurial”. You won’t get your IPO today, and you’ll “have to settle” for a $500,000-per-year VP position at a FaceGoog, or an EIR role at a venture fund, but you can use your time out to recover your finances and energies until you’re ready to play again.

There is no truth in the Paul Graham Lie. There are too many failed startups, and most of the people coming out of them do not become VPs at FaceGoogs. They get regular crappy jobs.

I found no meritocracy in the technology industry. I had a slew of intensely negative experiences. I must be honest on this, though: I got exactly what I deserved.

Whether I’m a good man, that’s not for me to say; it is true (and perhaps a weakness) that I lack the stomach for evil. Yes, I am a person of merit. Compared to the people running the tech industry, I am seven-S-Tier merit. However, I entered a line of work that, in the final analysis, has dedicated itself to the advancement of the power held by employers over my fellow human beings. Failure is what I deserved. Misery is what I deserved.

My youthful self-deception about the true nature of corporate capitalism is no excuse. When one who desires to be a good man, nonetheless, works for the baddies… what else can be expected?

Chapter 4: Artificial Stupidity

The last thing I intend here is to tell a pity-me story. Until 2018, none of my experiences with injustice stepped outside the range that is typical. I’ve seen people smarter and better than I am get screwed far worse than I ever have.

Do not pity me, because I don’t pity myself. Learn from my experiences and make better choices than I did. The takeaway from all this should be that, if a person of eminent merit can have a terrible time in the tech industry, it can happen to anyone. Most people get screwed; few have the private privileges I have that enable them to talk about it.

There cannot be “meritocracy” in private-sector technology, because we serve a purpose without merit. We can opt for self-deception and tell ourselves that our work is advancing the state of knowledge about database indexing, but if our work’s real purpose is to allow the rich to “disrupt” the poor out of their incomes, then a negative multiplier applies to our efforts, and diligence only means we drive fast in the wrong direction.

John Steinbeck made a brilliant comment on American false consciousness–– that socialism never took hold here because of self-deceptive workers who see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires. Having worked in technology, I understand the private-sector software programmer’s mind pretty well. We see ourselves as temporarily embarrassed research scientists, philanthropists, public intellectuals, and scholars. We assume there is an exponential growth curve to our production and therefore it is immaterial what we’re doing now, because in 20 years, when we’re calling the shots, we’ll make moral choices.

Employers indulge our wounded egos with the promise that, if we programmers put our heads down and plow through some ugly work–– just up to this next “milestone”, guys!–– we’ll eventually be restored to glory. That’s the promise used to pull some of the best minds of my generation (and, to be honest, quite a few not-best minds) into socially detrimental work–– performance surveillance employers use to squeeze workers, propaganda machines for capitalists and authoritarians, and weapons of mass unemployment.

I’d like to talk about artificial intelligence. I’ve been studying it since the early 2000s, when the field was considered a land of misfit toys, a bucket of ideas that didn’t work–– when neural networks were considered a bad joke ill-told. I don’t consider myself an actual expert in this very deep field, but I’ll note that quite a number of the “data science” consultants earning $350 per hour come to me for advice. (I left a doctoral program after one year, so I don’t have the paperwork to get such jobs.) There has been, in the 2010s, a plethora of startups raising venture capital on the claim that they do “artificial intelligence research”. In the vast majority of cases, they’re not.

I’ve been in more than one of these fake-news AI startups. Usually, the AI approach doesn’t work–– at least, it doesn’t scale up to real-world problems on a timeframe investors or clients will accept. The founder starts with an idea that’s usually an expansion of ideas from a college thesis (sometimes his own) and pulls a family connection to get seed funding, then hires a few rent-a-nerds to implement his “brilliant” idea. When the AI approach fails–– genuine AI research is demanding, expensive, and intermittent–– the company “pivots” away from the original project and moves into business process automation. The startup becomes a portable back office–– it failed to automate an ugly task, but by squeezing extra hours out of H-1Bs, it manages to make the work cheaper.

This switcheroo isn’t a surprise to investors. In fact, they’re usually the first ones to step in and tell the spring chicken founders that it’s time to put away childish things. Once founders realize their job is to delegate, rather than do, the work, they don’t really object to the notion of pivoting to something more mundane.

It is not immoral, of course, for a business to change its strategy. The issue here is in the continuing deception. These companies claim to be doing “next-generation machine learning” when they’re actually running on cheap manual labor. Clients buy into something that appears to have more long-term upside than it actually does–– they take the early adoption risk of something that’s unlikely to merit it.

The biggest losers in the fake-news AI con, though, are employees. It’s hard to get smart people to work at no-name companies for below-market salaries on the low-status, boring line-of-business problems encountered by a startup serving as a portable back office. The trick is to tell these programmers that if they bear down and endure 6–12 months of drudgery, they’ll graduate into the research positions they were originally promised. In reality, what lives at the end of that 6–12 months of drudgery is a middle manager saying “we just need” 6–12 months more.

I’ve worked on Wall Street. I’ve worked for venture-funded companies. I’ll say without reservation that the ethics on Wall Street are far better. Often, VC-funded founders are people with MBAs who failed out of Wall Street (if you can believe this) for being too toxic and unethical.

Say what you will about finance. There are plenty of things to dislike about its culture. I’m no fan of the noisy environments, or of the constant wagering on everything, or of the occasional encounter with the openly-asshole politics of someone who read Ayn Rand at too young an age to get the joke, or of the sense–– though, I assure you, financial workers are treated better than tech workers–– that the job is still paperwork for rich people. To suggest that Wall Street is some workplace utopia would murder my credibility. It isn’t. I only mean to say that the ethical and intellectual quality of people in finance is higher, on average, than in the private-sector technology world.

Why does Wall Street, then, have a worse reputation than Silicon Valley? Finance, unlike Jira tickets, is for adults. Ethical failures on Wall Street make news. When a bank collapses or a market fails, people learn about it. In my experience, traders are no more or less honest than the general population. The major difference is that traders are smart enough, at least when it comes to careers, to play the long game. The narrow-minded taskmasters who run daily operations in technology, by contrast, think in terms of two-week “sprints”.

The person who promises you the moon but, three weeks after you’ve moved across the country to join his operation, changes your job description and puts you on sprint work, that guy’s going to be a techie.

Chapter 5: Teabagged by an Agile Scrotum–– Or, Why Programming Is a Dead Career

The non-career of private-sector programming calls itself “software engineering” to give itself the aura of being a profession. It isn’t one.

A profession is a set of traditions and institutions setting forth (that is, professing) ethical obligations that supersede managerial authority and short-term expediency. That is only possible–– because professionals aren’t any better or worse than anyone else, and the need to survive will push anyone to extremes–– if those who work in the profession are protected from compromising positions.

For example, a doctor must obey the Hippocratic Oath, even if it requires him to defy superior orders. This is only tenable if the medical profession makes it so a doctor can survive losing his job–– he can get another one; he is still a doctor–– and that will only be the case if entry is limited, lifting all professionals above the daily (and ethically compromising) need to survive. The profession puts a floor on wages by limiting entry to the qualified, and it puts a floor on credibility by giving its workers institutional support.

If a profession collapses and any hungry loser can get in, the cheapest people drive out the skilled. Workers lose, clients lose, and society as a whole loses. The only winners are employers. They benefit from de-professionalization because a professional executive’s real trade is the buying and selling of others’ work, and a debased talent pool enables higher trading volume.

Software engineering has been thoroughly de-professionalized. Highly-compensated specialists have been driven out in favor of rent-a-coders who don’t understand computing or mathematics, but will accept two-week sprints and tolerate the daily “interview for your own job” meetings. I’ve referred to Agile as Beer Goggles Software Management–– the unemployable 2’s become marginally productive 4’s, but the 6+ see a drunken loser and want nothing to do with it–– but I’ve realized, over time, that the Agile Beer Goggles are here to stay. The software business has successfully refit itself to run on low-grade talent; this will not be reversed.

A boss’s incentive isn’t to hire the best people; it’s to stay in charge. Daily status meetings remind the plebeians that they’re not trusted professionals, and that they can’t invest in their own development “on the clock” but should think of themselves as day laborers who will be replaced–– there’s an army of hungry losers lined up outside the door–– as soon as their “story points” per week (or per “sprint”) drop below a certain threshold.

I tried to save this industry from this Agile madness, but I failed.

Chapter 6: This Story Peaks Early, Guys

I wrote a few posts in 2012–13 about the startup economy, although I was still figuring it out myself at the time. One concept in which I invested a lot of hope is open allocation–– the notion that workers are better judges of projects’ relative merits than an authoritarian, command-driven company allows them to be; that, therefore, trusting them to vote with their feet makes excellence more likely. I didn’t invent the concept, but I named and evangelized it. I still believe that open allocation fundamentally works, but I have no hope in its eventual adoption. The genuine malevolence within global corporate capitalism has shown itself, since 2015, in such force that issues greater than the inefficient allocation of talent now dominate my concern.

Still, I was thrilled to see my theories on open allocation get traction. Tom Preston-Werner quoted me at Oscon 2013 (go to 13:37). This blog, in 2013–15, began to get hundreds, then thousands, of unique views per day. On my best days, I broke 100,000; my Alexa ranking in the San Francisco metropolitan area was, for a long time, in the four digits.

There were stressful moments during this time. A mistake I made in 2011 got more publicity than it deserved, for reasons largely my fault. My left-leaning (and, increasingly, fully leftist) politics attracted death threats from various far-right elements–– a topic we’ll return to in Part 2. I’ve been doxxed so many times and in so many different ways, I assume I have no secrets–– but, then again, I have nothing to hide. Still, on the whole, the good outweighed the bad.

One place where I achieved prominence was Quora. Today, we know Quora as a buggy, name-walled Yahoo! Answers clone that generates privacy violations as reliably as summer humidity generates swamp ass. In 2015, however, Quora had (in spite of itself) an excellent community. It showed flashes of potential that, in the end, it would never really meet–– but, from 2013–15, there was a high quality of questions posted, and a high quality of people answering them.

I achieved the “Top Writer” distinction in 2013, 2014, and 2015. I was frequently consulted by the site’s moderators on policy and community management. I had about 8,500 followers. I don’t know what that number means now, but at the time it ranked me third or fourth among non-celebrities (depending on what we call a “celebrity”–– I should be forgiven for having fewer followers than Barack Obama) and first (breaking seven figures, some weeks) for answer views. A number of my responses, mini-essays in which I’d sometimes invest several hours, were published by partner sites such as Forbes, Time magazine, and the BBC’s online edition.

On June 15, 2015, I was an “It Programmer”, as much as one can exist. (There turns out to be a low ceiling on a non-founder’s status; by stepping above it, I got myself in trouble.) People all over the world reached out, sight unseen, and offered to fly me out to discuss positions at their companies. Often, I was called “the conscience of Silicon Valley,” even though I never lived there.

The next day (June 16) an event occurred that has nothing directly to do with me, and involves a man who probably does not know that I exist.

I lived in Chicago. Seven hundred and fifteen miles east-and-slightly-south, on a cloudy Manhattan morning, a deranged real estate baron descended an escalator, like Kefka in the last battle of Final Fantasy VI, and gave a circuitous, self-promoting, and racist speech in which he announced a presidential campaign that would ultimately be successful.

I’ll talk later on–– this story gets dark, my friends–– about fascism and whether I think Donald Trump constitutes or fits into a credible fascist threat to this country. Some people consider Trump a fascist; others view him as a mere opportunist. For now, observe that there were, at the least, coincidences in timing. Trump’s rise to power occurred as the far right, or “alt-right”–– a morass of tribalism, pseudo-academic racism, and might-makes-right attitudes toward topics ranging from international relations to corporate conduct–– evolved from an incel affectation into a full-fledged, mainstream political movement. The private attitudes of venture-funded tech founders were now finding public voice in a presidential candidate.

I did not expect Trump to become president. I remember well a conversation with some friends about him, in late 2015. Most people said he had no chance of becoming president. I gave him a 1-in-250 shot, but I would have given him a 4-in-5 shot, even then, of performing well enough in the primaries to speak at the convention in Cleveland.

It wasn’t hard to see what Trump was doing. His schlock about Mexican “rapists” was old-school miscegenation panic. The left blames societal failures caused by corporate capitalism on corporate capitalism; the right blames societal failures caused by corporate capitalism on women, minorities, and immigrants. Trump played the demagogue game disgustingly well. His victory, I did not expect, but I knew that Trumpism was going to be with us for a long time, even if he lost in November 2016. Having worked in the tech industry, I saw it coming.

Chapter 7: The Man Who Killed Paul Graham… Is Screwed

No, I didn’t murder Paul Graham. As far as I know, he’s very much alive. He’s only “dead” insofar as his relevance (like mine, though in my case by choice) has taken a precipitous dive.

I take credit in jest. Substantial evidence exists that his decision, in February 2014, to step down as president of Y Combinator, and thereby reduce his relevance in the tech industry, was driven in part by his dislike of the skepticism he faced from the public and media. Though I was a tipster and source for a Gawker story he disliked, I did not intend to “kill” Paul Graham. Most of this happened by accident. Still, I know, based on private conversations with people in shared circles, that my work contributed to his decision.

One of the worst things about fame or even semi-fame is the Carly Simon Problem. “You’re so vain,” she sang, “you probably think this song is about you.” In this case, there was a person intended as the target of the song, so he would be correct in believing the song to be about him. That’s not the issue here. The Carly Simon Problem exists because some people, as I’ve observed, think all songs are about them. People see themselves where they aren’t, they get butthurt, and then they fuck up your life.

When I publish Farisa’s Crossing, I am terrified about ex-girlfriends from the Bush Era coming out to say Farisa is based on them. Let me address that now: she isn’t. What Farisa represents, that’s a secret I’ll take to my grave.

I’ve been burned by the Carly Simon Problem more than once. I’ll give two examples here.

Number one: an ex-colleague managed a successful return to finance–– he got a job as the head quant at a hedge fund. I considered this guy a friend; we played board games together on multiple occasions, and I’ve been over to his house to have dinner with his wife and kids. For a position at a Chicago hedge fund, I used him as a reference.

Little did I know that he had read one of my blog posts and believed it to be about a place where we had worked. He found it “bad form” to write about our shared prior employer–– which, to be clear, I wasn’t doing. The post in question was about a 1990s corporate meltdown I studied in my research on open allocation.

I got shanked. He gave me a bad reference and I didn’t get a job.

I grew up in central Pennsylvania. Unlike these soft-faced preppies who dominate the upper echelons of the corporate world, I grew up understanding the notion of respect. You fight; or, you shut up and walk away. There is absolutely no shame in walking away from a fight. Almost always, walking away is what you want to do, because most serious fights don’t have any winners. One should, for similar reasons, avoid the conduct (such as throwing around bad references) that necessitates a physical fight. This being the case, I have zero patience for white-collar, lily-livered, passive-aggressive failmen who pretend to be a friend, but throw around bad references and sink people’s job prospects. Don’t like what I have to say? Confront me. I’ll stick to words as long as you do–– no one needs to know, either, what we argued about, or that we ever argued.

I can respect a wide variety of people, but I cannot respect a craven crud-ball who thinks that an acceptable response to an anodyne blog post is to give bad job references like a fucking dirtbag. If I ever get cancer, I will name it after this guy.

The Carly Simon Problem is one of the main reasons I nearly quit writing in 2016. I’m more than willing to go the distance in a fair fight, if that’s where we are. I cannot tolerate being stabbed in the back by cowards–– especially cowards who weren’t even in a conversation, but who took offense to it on the incorrect assumption of a song being about them. Sometimes, the song is about someone else. Sometimes, the song is about no one. Sometimes, the song is just a damn song.

The second major encounter I had with the Carly Simon Problem involves Paul Graham.

I know, I know: I promised Nazis, and here I am talking about Paul Graham. (I don’t think Paul Graham is a Nazi.) There’s some back story, some buildup. Unfortunately, this means I have to get into events that sound like petty drama, but that will in fact lead into something major and criminal.

Even now, I don’t harbor strong opinions about Paul Graham. I would be happy to mend fences with him, if he apologized for the conduct described below, almost all of which was committed not by him but by his subordinates at Y Combinator.

There is a lot to like and respect about the man. For a start, in his prime he wrote some excellent books on the programming language, Lisp. He got more right than he got wrong. Unlike me, he won the birth-year lottery and walked away from Viaweb (Yahoo! Store) with a boatload of Boomer Bucks. He’s an above-average writer and, although I haven’t always agreed with what he’s had to say, his contributions to technology discussions have, at times, been insightful.

A business model that thrived in the 1990s technology boom was the so-called “startup incubator”, which made small investments in tiny companies and thereby made a diversified wager on the startup economy as a whole. Incubators always had a seedy reputation–– they promised mentorship and introduction to venture capitalists, while rarely providing more than office space and coffee–– but the business model isn’t prima facie evil.

After the 2001 tech crash, internet startups developed the reputation of being a goofy 1990s fad that would never return–– the “new economy”, conventional wisdom said, was a short con that had failed. Incubators, as well, went out of fashion and became a symbol of 1990s clownery.

Paul Graham, having become rich enough to retire in the 1990s, continued to evangelize the startup economy while the rest of the world’s faith in it sat at a nadir. He cheer-led the notion of a small technology company when no one else would. In 2005, he opened up an incubator called Y Combinator–– named after a computer science construct discovered by a distant relative of mine–– or “YC”.

I dislike Y Combinator. I think it has done more harm than good to the world, because it has exacerbated the ageism and clubby classism of the technology industry, and because it has inadvertently given credence to “new economy” ideas that actually haven’t worked very well. This being said, I don’t think Y Combinator is the typical, seedy incubator. I’ve researched Paul Graham and his operation, and everything convinces me that he makes good-faith efforts to truly back the companies he picks–– and quite a number have gone on to be successful. We can debate another time whether Y Combinator’s strong track record proves its merit or its founders’ social connections, but his incubator became unique among the pack in developing a prestige that no other one has.

I met Mr. Graham in person once (March 2007). No one had any reason then to know who I was, so I doubt he remembers me. He seemed like a nice guy. I liked him then, and I continued to like him until 2015, even though we disagreed on many things.

So why, in late 2013, did he suddenly dislike me? Again, it’s the Carly Simon Problem, because of course it is.

Chapter 8: There Are Chickenhawks Among Us

A logic puzzle goes like so. One hundred people live on an island; ninety have brown eyes and ten have blue eyes. No mirrors exist and no one talks about eye color, because there’s a rule that, while it is not illegal to have blue eyes, anyone who knows he has blue eyes must, at dawn the next day, leave the island forever.

They live in peace, until one day, an outsider (“oracle”) known never to lie comes to the island, calls an assembly of all hundred inhabitants, and says, “At least one of you has blue eyes.”

What happens? You would think: Nothing. No new information is introduced, so you would imagine that the oracle has no effect.

The answer is: 10 days later, all 10 blue-eyed people leave the island. If there were only one blue-eyed islander, he would see no blue eyes, conclude from the oracle’s statement that he was the one, and leave on day 1; if there were two, each would wait a day, see the other still present, and leave on day 2; by induction, the 10 leave together on day 10. The oracle introduces something they already know (since everyone sees either 9 or 10 blue-eyed people) into common knowledge, and that changes everything. For a full explanation, click the link above.

In this way, saying something that everyone knows (introducing no new knowledge) can have a social effect.
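If you prefer code to prose, here is a minimal behavioral simulation of the rule each islander follows. To be clear, this is an illustrative sketch, not a full model of the common-knowledge logic (it hard-codes the inductive rule), but it reproduces the ten-day result.

```python
def simulate(num_blue, num_brown):
    """Behavioral simulation of the blue-eyed islanders puzzle.

    Rule each islander follows after the oracle speaks: "I can see k
    blue-eyed people. If nobody has left by the end of night k, those k
    people cannot be all of the blue-eyed islanders, so I must be one
    too; I leave at dawn on day k + 1."
    """
    eye_colors = ["blue"] * num_blue + ["brown"] * num_brown
    departed = set()
    for day in range(1, num_blue + num_brown + 2):
        leaving_today = []
        for i, color in enumerate(eye_colors):
            if i in departed:
                continue
            visible_blue = num_blue - (1 if color == "blue" else 0)
            # Leave only if no one has departed yet; an earlier departure
            # would have already resolved this islander's uncertainty.
            if day == visible_blue + 1 and not departed:
                leaving_today.append(i)
        if leaving_today:
            departed.update(leaving_today)
            print(f"Day {day}: {len(leaving_today)} islanders leave.")
    print(f"Total departures: {len(departed)}")


simulate(num_blue=10, num_brown=90)
# Day 10: 10 islanders leave.
# Total departures: 10
```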

In December 2013, I wrote a blog post about chickenhawking. A chickenhawk is a business executive who expresses his midlife crisis not by purchasing a sports car or having an affair, but by investing in the career of a younger man–– usually, for reasons that will be discussed, a certain type of younger man–– and living vicariously through him.

A chickenhawk gives his young protege (or “chicken”) rapid career advancement and a high income, in exchange for exciting stories. There is a revenge drive in play; the “hawk” punishes women who rejected him 20 years ago by inflating the economic virility of a sociopath who will–– as I then put it, capable even in barely-trigenarian literary infancy of the occasional limit break–– “tear through party girls like a late-April tornado”.

A fictional example occurs in The Office. The show has to stay PG-rated and humorous, so there’s a lot left unsaid, but Michael Scott harbors a vaguely homoerotic (and non-reciprocated) obsession with subordinate Ryan Howard, one that leads him to assist the latter’s career (and eventually be surpassed). He takes an interest in his protege’s personal life; he lives out his midlife crisis through a younger man with the social skills, courage, and resources (due to the hawk’s support of the chicken’s career) to do things that, in the hawk’s twenties, he couldn’t pull off.

Silicon Valley is ageist and sexist. VCs “pattern match” to a certain type of person–– a young, unattached, usually heterosexual, male sociopath–– and one cannot understand the venture-funded software industry without an understanding of why. Sand Hill Road ought to be renamed Chickenhawk Alley.

Of course, this isn’t unique to technology. The corporate system’s raison d’être is to funnel sexual access to unattractive, rapacious men who have nothing to offer women outside of the social status induced by their control of resources. Without this motivation in play, the corporate system would have likely collapsed, leading to socialism, several decades ago. The rich do not hold on to the corporate system because they enjoy TPS reports; they do it because it gives them an advantage over other men (especially younger men) and thins out the competition. Chickenhawks tend to be too timid to abuse their control of resources in the way a more typical corporate executive would; they do it vicariously through someone else.

Paul Graham took offense to my December 2013 post about chickenhawking–– but what does chickenhawking have to do with Paul Graham? I don’t know. I still don’t know. I don’t think he is a chickenhawk. I do not accuse him of being one. I never have. That song was never, ever about him.

No evidence exists of Paul Graham being a chickenhawk. Nor is there evidence of him being pro-chickenhawk.

Except what follows.

Chapter 9: The Vultures Chickenhawks Attack!

I make this analysis in good faith. In discussing Paul Graham’s personality, I find common ground. What could be called faults are traits I share.

I’ve been told on good authority that, at least at one time, he spent 6 hours per day on Hacker News, a news aggregator and community created around Y Combinator. Obsessive? I am not one to talk here–– I have also suffered unhealthy addictions to internet communities that consumed similar quantities of my free time. It takes a sort of obsessive mind to excel at detail-oriented crafts like programming and writing.

Creative people have another flaw: we tend to take criticism and skepticism around our ideas personally. It would not surprise me to learn that others’ skepticism of him was a primary reason for (a) his actions in 2013–15, to be discussed, and (b) his decision to step down as president of Y Combinator in early 2014.

My writing got to him. As I said before, Paul Graham is an above-average writer who won the birth-year lottery and whose optimism about the startup economy played a major role in restoring public faith in it. Some time later, I showed up on the scene. I’m also an above-average writer, but I did not win the birth-year lottery and I did not make millions for showing up at a place. My experiences in 2008–15 (detailed above) led me to conclude that the “new economy” was an ersatz replica of the old one. As my skepticism grew, I did not hesitate to express it.

My comments frequently rose to the top on Hacker News. Whether this means I was right, or merely wrote well, I shan’t say. I’ll only observe that often I achieved top comment.

And then, because I had the nerve to say something everyone already knew–– that there are chickenhawks in Silicon Valley–– I suffered the dreaded Hacker News “rankban”.

What the fuck’s a Lommy rankban? In a less stupid world, you wouldn’t have to care about this sort of thing. In today’s world, though, where opaque algorithms determine the placement and implied social proof of user-created content, and in which these reputation measurements make the difference between “influence” and unemployable obscurity, this kind of thing matters.

As I said, Hacker News (or “HN”) is a news aggregator and discussion hub for private-sector programmers. Even to be in the running for serious programming jobs–– not low-end rent-a-coder sprint work where you’re competing with sweatshop workers–– you need a pre-existing reputation. Hacker News is where a lot of people go to build one.

Y Combinator, a startup incubator, owns it. The conflict of interest should be obvious. It is a news aggregator owned by a baby-league venture capitalist. It is a PR organ that papers the reputations of YC-backed companies. It punishes those who express skepticism of these startups, or of the (defective) ecosystem in which they exist.

Someone banned from Hacker News is not notified of his offense (and there is no appeal). He does not even know he is banned, in most cases. He’s “hellbanned”, which means that his comments and posts are visible to him but no one else. This is contraindicated by the psychiatric community–– it’s a form of gaslighting. Less drastic is the “slowban”, by which a site is made to perform poorly for the targeted user. You see this a lot in the venture-funded world–– in real estate and personal finance, there are a number of venture-funded companies using slowbanning to redline. Rankban, most insidious, exists when a site’s opaque content ranking algorithms systematically degrade one’s posts and comments–– if the content is successful, it is still represented as unsuccessful, and suffers reduced readership.

An anonymous tipster, in January 2014, informed me that I had been put on slowban and rankban by Paul Graham. I did not believe it at first–– I thought better of the man, and failed to see why he would have a strong opinion of me–– but these were relatively easy to test. Slowban, I verified by comparing response times on HTTP requests when logged in versus logged out. Rankban was harder to prove–– this I tested by digging up old high-performing posts and verifying that (years later) they had fallen to the bottom, where they would go unread.
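For the curious, the slowban test is easy to reproduce. Here is a sketch of the kind of timing comparison I mean. The URL, the cookie name, and the trial count are placeholders for illustration, not the exact script I used; the point is to compare medians over many requests, since any single request is mostly network noise.

```python
import statistics
import time

import requests

URL = "https://news.ycombinator.com/news"        # placeholder target page
SESSION_COOKIE = {"user": "YOUR_SESSION_TOKEN"}  # placeholder cookie name/value


def median_response_time(cookies=None, trials=20):
    """Median wall-clock time for repeated GETs, with or without a session."""
    samples = []
    for _ in range(trials):
        start = time.monotonic()
        requests.get(URL, cookies=cookies, timeout=30)
        samples.append(time.monotonic() - start)
        time.sleep(1)  # be polite; also avoids measuring rate-limiting instead
    return statistics.median(samples)


logged_out = median_response_time()
logged_in = median_response_time(cookies=SESSION_COOKIE)
print(f"logged out: {logged_out:.2f}s    logged in: {logged_in:.2f}s")
# A logged-in median that is consistently, substantially slower than the
# logged-out one is what a slowban would look like.
```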

I’ll confess that this is minor shit–– I only bring it up to prove that Paul Graham held an animus toward me as early as 2013 because of my anti-chickenhawk stance.

Rather than bog you down, dear reader, in more petty drama, let’s catch up to 2015 and the rise of Trump–– of note is that his increasing success (long before he won the presidency) validated a certain might-makes-right attitude toward publicity and business; long before November 2016, corporate executives were taking note.

In August 2015, I suggested, based on things Travis Kalanick said about his own motivations for starting Uber, that the company likely had a toxic culture. (Two years later….) This got me banned–– actually banned–– from Hacker News.

Banned from Hacker News! By this, I was truly, deeply… sorry, it is still too much….

Nah. It didn’t bother me. I was 32 at the time; I had outgrown the Hacker News community and the mentality it serves. Being banned from that place was no big deal–– a liberation of time, to be honest about it. The only issue was that Dan Gackle misrepresented the reason for banning me, taking an entirely different comment out of context in a way that any court in the U.S. would classify as defamatory.

Perhaps a week later, Paul Buchheit, a man who jokes about gun violence as a means of handling business negotiations, attacked me on Quora.

Worth noting is that Y Combinator bought a piece of Quora in May 2014 at a fire-sale price. It seemed an odd deal at the time, and still does, but I think both parties saw themselves as getting the better end of that one. Quora got to claim it was “YC” at the peak of the incubator’s prestige. Y Combinator, at the same time, gained the ability to “moderate” Quora’s community and content so as to favor YC-backed companies.

After this nonsense–– the “rankban”; Dan Gackle libeling me on Hacker News after banning me; the bizarre personal attacks from Paul Buchheit; and various other factors I shan’t get into–– I could tell there was a pattern. If nothing else, Paul Graham was doing a poor job of controlling his puppies.

I challenged Paul Graham to (wait for it) a rap duel. I’m not a stellar rapper; I did some freestyle in college and I’m half-decent for a white guy, nothing to write home about, but I felt confident that I could beat Paul Graham. I was, on one hand, extending an olive branch. Not having anything against Paul Graham himself–– he was negligent in failing to call off his puppies, but that could be fixed–– I felt that a public rap battle would be an opportunity to show that, despite our differences, we could respect each other well enough to put on a mutually beneficial (and entertaining) show. At the same time, I needed to make it clear that, if Paul Graham couldn’t control his puppies and the embarrassment they were causing, I would continue to demonstrate this incapacity.

On September 4, in retaliation for the rap-duel challenge, YC-backed Quora banned me–– again, in a common pattern, on false pretenses. My account, which had more than 8,500 followers, had been turned into a defamation page with a bright red text block saying, “This user has been banned.”

Mucho internet drama. I won’t blame you if your eyes glazed over. You’d think such things wouldn’t matter in the real world. You’d think. Don’t worry–– the stakes are about to go up, and the Nazis aren’t far behind.

Chapter 10: When Nonsense Decides To Matter

I interviewed for a job in January 2016 where it came up–– not as a stupid thing to laugh about, but as a serious concern–– that I’d been banned from Hacker News. A Chicago-based hedge fund decided not to hire me for a quant role because–– as I have it from a headhunter who was decent enough to give me the real reason–– an MD observed that, “apparently this Paul Graham fellow doesn’t like him.”

This is an objective moral fact: internet drama like that should never affect someone’s ability to earn an income.

Unfortunately, the world has a surfeit of immature, deficient men who, on the basis of something as minuscule as a website ban, will close doors–– even, if not especially, doors that are not theirs to close.

I have seen all sides of this Black Mirror–level idiocy. I’ve been a manager. I’ve been involved in hiring decisions. I’ve made calls; I’ve defended people; I’ve also failed at defending people.

More than once, I’ve seen irrelevant internet activity–– as minor as rumors on sites like the blessedly-defunct Juicy Campus–– come up as cause to deny candidates jobs, reduce their offers on the assumption of lesser leverage, or to fire otherwise excellent employees.

Also, though I never cared about a job candidate’s politics, this is not a difficult matter for employers to discern. It’s something they care about for “cultural fit” reasons, but not in the ways one might expect. I’ve never seen anyone hosed for being a Republican or Democrat, or for supporting a mainstream presidential candidate–– it’s possible that it happens; I just haven’t seen it–– but I have frequently seen people denied opportunities for “being political”, and it is almost always the left that is penalized.

Overt racism will get someone dinged, true, but if the candidate’s a white guy who retweets Breitbart articles, an executive will always step in and say, “We don’t know that he supports those views.” On the other hand, someone who’s anti-racist–– say she’s active in Black Lives Matter–– will get similarly dinged, not for her politics per se, but for the fear of hiring a “troublemaker”. Once I overheard a conversation in which an executive described a colleague as “terminal” (not promotable into management) because “you can never trust a male feminist”.

Corporates don’t show their far-right colors often, but anti-leftism is the payload of their aversion to “the political”. They’ll fire a racist because it’s good for publicity, but their real fear is of the left–– of truth and justice.

Chapter 11: Morality

Does God exist?

That’s the easiest question there is. Yes. God–– the God of the Torah, the Bible, the Quran–– exists. Zeus also exists. Osiris exists. Iago, in Shakespeare’s Othello, exists. Farisa will exist, once I finish the damn story. They exist as much as the number 2 or the color “magenta”. They may exist only in our minds, but they exist as concepts.

The harder question is: are there supernatural humanoids who interfere with the observed laws of physics? On that one, I’ve seen absolutely no evidence, so I’m going to profess non-belief. More interesting is: is there an afterlife? I’m on the happier side of 50–50, on that one. My reasoning would require another essay, but I find accessible reasons to believe there is one. I might be wrong; if I am, I won’t have to bear the disappointment, since I won’t exist.

Does absolute morality exist? I think so. Most ethical mandates are situational and relative, but their underlying reasons for existence seem less flexible. I am unable to articulate precisely the moral principles of existence, but I believe they exist.

I’m not a nihilist, and I go further. I don’t believe nihilists exist. At least, I don’t think a person can stay nihilistic for very long. Meaning vacuums get filled.

Let’s say someone who considers himself a nihilist, but who is a good person, is offered $5,000 to torture a kitten. He’ll refuse, because some actions he accepts and others he finds repulsive. Meaning is a weird term. Perhaps “purpose” or “value” is better. I would not torture the kitten, not because I expect the kitten to “mean” anything, but because I value the creature’s existence and welfare.

Nihilism is dangerous because it’s unstable. The meaning void will fill itself with something, but not always something good. Ultra-nihilistic villains like the Joker (Batman franchise) or Kefka (Final Fantasy VI) fill it with hatred and blood lust. Fascism, an outgrowth of might-makes-right nihilism, sells itself to the masses by presenting itself as aggressively anti-nihilistic–– thereby disavowing the decadence of which it is a culmination.

A person doesn’t stay nihilistic for long; but systems can be nihilistic. Corporate capitalism is a belligerent nihilism machine. It does not hate its victims; it simply does not value their subjective experience. A tree will be cut down unless it can pay not to be cut down.

Chapter 12: The Two-Stroke Nihilism Engine

Global corporate capitalism was not designed, technically speaking, but I cannot think of a better way to design an economic system to destroy things humans value–– a self-replicating monument to nihilism, a belligerent anti-meaning device.

The first thing to understand about global corporate capitalism is that it’s totalitarian. If the people in one nation are unfree, others must compete on wages and working conditions and will be unfree. It’s important to discuss economic totalitarianism, because while leftism has had a bad run for the past 35 years, almost all of the negativity directed at “communism” is more accurately blamed on left-wing economic totalitarianism (old-style tankie socialism). Right-wing economic totalitarianism is no better.

We’ve been pushed, over previous decades, to accept corporate rule on account of disingenuous claims that “communism killed 100 million people”. Did it? Not really. Mao Zedong’s incompetence killed some, Stalinist repression killed some, and anticommunist reaction (including fascism and World War II) killed a lot of people–– deaths that have been blamed on “communism”, even though none of those societies were communist.

A difference at issue is that capitalism has no memory and takes no responsibility; socialism, to the ill-health of its image, has far too much memory and responsibility. Americans who were unable to secure health insurance, and Pakistanis who were “freedomed” by drones, are not considered to be killed “by capitalism”. There’s a whole lot of dishonest accounting that goes on; the truth is that capitalism’s record is just as bad, if not worse.

In either case, the true enemy isn’t an economic system’s baseline principles, but totalitarian application. Global corporate capitalism is totalitarian because the employer is not happy to make a modest profit. It must make the highest profit, at any moral cost. It must have the worker’s indivisible loyalty. It takes everything it can get.

Global corporate capitalism wants all things humans value to be “converted into dollars”. Who gets to live by the lake? The highest bidder. A “view” created by God or by Nature becomes just another form of money. Who gets the bulk of people’s time and attention? The people and organizations (often, authoritarian organizations) who specialize in the buying and selling of others–– employers. People’s friends and families get the leftovers.

Cultural influence, educational experiences, and personal relationships become nothing but “capital” in new forms. Everything gets converted into money, and if it resists such conversion, it’s marginalized to the point of nonexistence. Rebellions get bought. Sexual and cultural expressions of marginalized people are exoticized and appropriated by the rich. Social media, for a concrete example, has become a mechanism through which corporate marketing departments can buy the perception of grassroots authenticity.

Corporate capitalism’s first move is to convert all things humans value–– sexuality, social connectedness, leisure, culture, opportunity–– into an abstract quantity called money, measured in units called dollars or euros or yen. That’s the nihilism engine’s first stroke.

The second stroke is: to find the place of least utility for the dollars (euros, yen) and put there as many as possible. The rich get richer; the poor get poorer. The well-resourced have full-time staff to manufacture their reputations and appearances, so they present themselves as cosmopolitan ubermenschen (when they are, in fact, as provincial as the yokels they despise) while the poor become socially and culturally isolated.

If all things humans value are “converted into dollars”, all things humans value will go to those who have the dollars.

What is a dollar’s value? Of course, it’s not a constant. One dollar represents 8 minutes of a minimum wage worker’s time, but only half a second of a CEO’s time. If a dollar’s parked in the garage of someone who already has a billion, it’s being put where it isn’t needed. Its value is being minimized.
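For concreteness, here is the rough arithmetic behind those two figures. The minimum wage is the U.S. federal one; the CEO pay figure is an assumption chosen for illustration, not a number from any particular company.

```python
# Rough arithmetic behind the claim above.
minimum_wage_per_hour = 7.25    # USD: U.S. federal minimum wage
ceo_pay_per_hour = 7_200.00     # USD: assumed, roughly $14.4M/year over 2,000 hours

minutes_per_dollar_worker = 60 / minimum_wage_per_hour    # about 8.3 minutes
seconds_per_dollar_ceo = 3600 / ceo_pay_per_hour          # 0.5 seconds

print(f"Minimum-wage worker: {minutes_per_dollar_worker:.1f} minutes of work per dollar")
print(f"CEO (assumed pay):   {seconds_per_dollar_ceo:.2f} seconds of work per dollar")
```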

This shows that corporate capitalism seeks to turn all things humans value into a tradable form (money) and then to put every dollar of the money into the coffers of a person or corporation who does not need it. Since they have an excess of it, they use it to buy not things they need, but a future excess of money. This is a belligerent, nearly unstoppable utility minimizer–– an ever-advancing nothingness and pointlessness.

In 2011, Marc Andreessen said that “software is eating the world”. Having worked in the software industry during that time, I can refine this observation: corporate capitalism continues to be what’s eating the world. Software is merely what it shits out.

Technological growth of a kind that would benefit everyone has disappeared. We don’t have flying cars or robot maids. We have time-tracking software. We have Jira. The major innovations of our time have been surveillance technologies (weapons) for the benefit of health insurers, despotic governments, and authoritarian employers. That’s who’s buying this stuff.

Employers used to fear their workers, at least a little, but these days they share information (contrary to law) about suspected unionists. Workers in the trades–– in the “blue-collar” jobs displaced office workers are told to consider–– often suffer belligerent performance tracking enabled by devices running code written by people like me. Retail workers often have less than 24 hours notice of when they will work, because their shifts are determined algorithmically. The working world has gotten worse, has gotten more fascistic, and it’s our fault as private-sector programmers.

I mentioned the “Agile” garbage that makes a typical programmer’s life hell. It’s not only that we implement the weapon designs of psychopaths who profit by immiserating workers. We are also the first subjects of many such experiments, the first to taste the poisons (and stupid/earnest enough to refine them) before they are rolled out into the broader economy. “Scrum” is the same malevolent performance management inflicted on truck drivers and factory workers; it just goes by that name when applied to low-status programmers. Nowhere is it written in the Cannibal Bible that a cannibal cannot be consumed by other cannibals.

Part 2 is here.

End of Part 1–– What’s to Come in Part 2

So far, we’ve covered the technology industry during 2008–15 and my experiences within it. We know of the emergence of might-makes-right politics (Trump) and we can see that it is a natural extension of global corporate capitalism.

In the first half of this exploration, I told a story with political, moral, and personal threads, all of which have diverged. In the second, we’ll arrive at the convergence. We’ll discuss the acceleration of capitalistic disease under Trump. We’ll cover purposes of the technology industry (and the Silicon Valley business model) of which most people are unaware. We’ll deepen our understanding of fascism–– what it is, why it emerges, and my own experiences in the fight against it. At the end, I’ll present why I believe the probability of a violent conflict, with fascist elements that exist within our society right now, is high.

There is much that has happened in the past five years that must be revealed. I will establish (with verifying details) something heinous about an organization of middling profile but high importance. In so doing, I may put my life in danger, but public service demands it. Names will be named; events will be explained.

