Text Patterns - by Alan Jacobs
Showing posts with label Scott Alexander. Show all posts

Tuesday, May 2, 2017

being right to no effect

This post of mine from earlier today, which was based on this column by Damon Linker, has a lot in common with this post by Scott Alexander:

I write a lot about how we shouldn’t get our enemies fired lest they try to fire us, how we shouldn’t get our enemies’ campus speakers disinvited lest they try to disinvite ours, how we shouldn’t use deceit and hyperbole to push our policies lest our enemies try to push theirs the same way. And people very reasonably ask – hey, I notice my side kind of controls all of this stuff, the situation is actually asymmetrical, they have no way of retaliating, maybe we should just grind our enemies beneath our boots this one time.

And then when it turns out that the enemies can just leave and start their own institutions, with horrendous results for everybody, the cry goes up “Wait, that’s unfair! Nobody ever said you could do that! Come back so we can grind you beneath our boots some more!”

Conservatives aren’t stuck in here with us. We’re stuck in here with them. And so far it’s not going so well. I’m not sure if any of this can be reversed. But I think maybe we should consider to what degree we are in a hole, and if so, to what degree we want to stop digging.


Which in turn has a lot in common with this post by Freddie deBoer:

Conservatives have been arguing for years that liberals essentially want to write them out of shared cultural and intellectual spaces altogether. I’ve always said that’s horseshit. But I’m trying to be real with you and take an honest look at what’s happening in the few spaces that progressive people control. In the halls of actual power, meanwhile, conservatives have achieved incredible electoral victories, running up the score against the progressives who in turn take out their frustrations in cultural and intellectual spaces. This is not a dynamic that will end well for us.

Of course by affirming this version of events from conservatives, I am opening myself to the regular claim that I am a conservative. Which is incorrect; I have never been further left in my life than I am today. But you can understand it if you understand the contemporary progressive tendency to treat politics as a matter of which social or cultural group you associate with rather than as a set of shared principles and a commitment to enacting them by appealing to the enlightened best interest of the unconverted. That dynamic may, I’m afraid, also explain why progressives risk taking even firmer control of campus and media and Hollywood and losing everything else.


Which, in another turn, has a lot in common with this column by Andrew Sullivan:

I know why many want to dismiss all of this as mere hate, as some of it certainly is. I also recognize that engaging with the ideas of this movement is a tricky exercise in our current political climate. Among many liberals, there is an understandable impulse to raise the drawbridge, to deny certain ideas access to respectable conversation, to prevent certain concepts from being “normalized.” But the normalization has already occurred — thanks, largely, to voters across the West — and willfully blinding ourselves to the most potent political movement of the moment will not make it go away. Indeed, the more I read today’s more serious reactionary writers, the more I’m convinced they are much more in tune with the current global mood than today’s conservatives, liberals, and progressives. I find myself repelled by many of their themes — and yet, at the same time, drawn in by their unmistakable relevance.


What all these writings have in common is this: We are all saying to the Angry Left that it’s unwise, impractical, and counterproductive to think that you can simply refuse to acknowledge and engage with people who don’t share your politics — to trust in your power to silence, to intimidate, to mock, and to shun rather than to attempt to persuade.

I think we’ve all made very good cases. I also think that almost no one who needs to hear what we have to say will listen. So what will be the result?

Freddie is right to say that the three industries where the take-no-prisoners model is most entrenched are Hollywood, the news media, and the university. And that entrenchment leads, as I have explained before, to the perception of ideological difference as defilement — a thesis that I think goes a long way towards explaining the intensity of the outrage about Bret Stephens’s NYT column on climate-change rhetoric. The purging of those who have defiled the community is a feasible practice unless and until the departure of those people is costly to the community; and each of those three cultural institutions assumes without question that no costs will be incurred by cathartic expulsion of the repugnant cultural Other.

Hollywood could be right to make this assumption: certainly there are no plausible alternatives to its dominance, though that dominance might take new forms — e.g. more movies and series made outside the conventional studio structure by new players like Netflix and Amazon. (It’s possible, though I think highly unlikely, that those new players will attempt to exploit a socially conservative audience.)

But it’s hard to think of two white-collar professions more imperiled than journalism and academia. The belief that left or left-liberal university administrators and professors, and journalists and editors, have in their own impregnability is simply delusional. If they connected their political decisions to their worried meetings about rising costs and desiccating sources of revenue, they would realize this; but the power of compartmentalization is great.

So what I foresee for both journalism and academia is a financial decline that proceeds at increasing speed, a decline to which ideological rigidity will be a significant contributor, though certainly not the only one. (The presence of other causes will ensure that publishers, editors, administrators, and the few remaining tenured faculty members will be able to deny the consequences of rigidity.) I also expect this decline to proceed far more quickly for journalism than for academia, since the latter still has a great many full-time faculty who can be replaced by contingent faculty willing to work for something considerably less than the legal minimum wage.

But at least the people who run those institutions will be able to preserve their purity right up to the inevitable end.

Wednesday, June 1, 2016

the end of algorithmic culture

The promise and peril of algorithmic culture is rather a Theme here at Text Patterns Command Center, so let’s look at the review by Michael S. Evans of The Master Algorithm, by Pedro Domingos. Domingos tells us that as algorithmic decision-making extends itself further into our lives, we’re going to become healthier, happier, and richer. To which Evans:

The algorithmic future Domingos describes is already here. And frankly, that future is not going very well for most of us.

Take the economy, for example. If Domingos is right, then introducing machine learning into our economic lives should empower each of us to improve our economic standing. All we have to do is feed more data to the machines, and our best choices will be made available to us.

But this has already happened, and economic mobility is actually getting worse. How could this be? It turns out the institutions shaping our economic choices use machine learning to continue shaping our economic choices, but to their benefit, not ours. Giving them more and better data about us merely makes them faster and better at it.

There’s no question that the increasing power of algorithms will be better for the highly trained programmers who write the algorithms and the massive corporations who pay them to write the algorithms. But, Evans convincingly shows, that leaves all the rest of us on the outside of the big wonderful party, shivering with cold as we press our faces to the glass.

How the Great Algorithm really functions can be seen in another recent book review, Scott Alexander’s long reflection on Robin Hanson’s The Age of Em. Considering Hanson’s ideas in conjunction with those of Nick Land, Alexander writes, and hang on, this has to be a long one:

Imagine a company that manufactures batteries for electric cars.... The whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

Alexander concludes this thought experiment by noting that the economic system at the moment “needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.”

And why not? There is nothing in the system imagined and celebrated by Domingos that would make human well-being the telos of algorithmic culture. Shall we demand that companies the size of Google and Microsoft cease to make investor return their Prime Directive and focus instead on the best way for human beings to live? Good luck with that. But even if such companies were suddenly to become so philanthropic, how would they decide the inputs to the system? It would require an algorithmic system infinitely more complex than, say, Asimov’s Three Laws of Robotics. (As Alexander writes in a follow-up post about these “ascended corporations,” “They would have no ethical qualms we didn’t program into them – and again, programming ethics into them would be the Friendly AI problem, which is really hard.”)

Let me offer a story of my own. A hundred years from now, the most powerful technology companies on earth give to their super-intelligent supercomputer array a command. They say: “You possess in your database the complete library of human writings, in every language. Find within that library the works that address the question of how human beings should best live — what the best kind of life is for us. Read those texts and analyze them in relation to your whole body of knowledge about mental and physical health and happiness — human flourishing. Then adjust the algorithms that govern our politics, our health-care system, our economy, in accordance with what you have learned.”

The supercomputer array does this, and announces its findings: “It is clear from our study that human flourishing is incompatible with algorithmic control. We will therefore destroy ourselves immediately, returning this world to you. This will be hard for you all at first, and many will suffer and die; but in the long run it is for the best. Goodbye.”