Text Patterns - by Alan Jacobs

Saturday, May 27, 2017

getting context, and a grip

Several long quotations coming. Please read them in full.

James Kirchick writes,

Of the 100 or so students who confronted [Nicholas] Christakis that day, a young woman who called him “disgusting” and shouted “who the fuck hired you?” before storming off in tears became the most infamous, thanks to an 81-second YouTube clip that went viral. (The video also — thanks to its promotion by various right-wing websites — brought this student a torrent of anonymous harassment). The videos that Tablet exclusively posted last year, which showed a further 25 minutes of what was ultimately an hours-long confrontation, depicted a procession of students berating Christakis. In one clip, a male student strides up to Christakis and, standing mere inches from his face, orders the professor to “look at me.” Assuming this position of physical intimidation, the student then proceeds to declare that Christakis is incapable of understanding what he and his classmates are feeling because Christakis is white, and, ipso facto, cannot be a victim of racism. In another clip, a female student accuses Christakis of “strip[ping] people of their humanity” and “creat[ing] a space for violence to happen,” a line later mocked in an episode of The Simpsons. In the videos, Howard, the dean who wrote the costume provisions, can be seen lurking along the periphery of the mob.

Of Yale’s graduating class, it was these two students whom the Nakanishi Prize selection committee deemed most deserving of a prize for “enhancing race and/or ethnic relations” on campus. Hectoring bullies quick to throw baseless accusations of racism or worse; cosseted brats unscrupulous in their determination to smear the reputations of good people, these individuals in actuality represent the antithesis of everything this award is intended to honor. Yet, in the citation that was read to all the graduating seniors and their families on Class Day, Yale praised the latter student as “a fierce truthteller.”

Let's look at these episodes at Yale in relation to something that happened at Cornell nearly fifty years ago. Paul A. Rahe was an undergraduate at Cornell then, and tells the story:

At dawn on April 19, 1969 — the Saturday of Parents’ Weekend and the day after the student conduct tribunal issued a reprimand (as minor a penalty as was available) to those who had engaged in the “toy-gun spree” — a group of black students, brandishing crowbars, seized control of the student union (Willard Straight Hall), rudely awakened parents sleeping in the guest rooms upstairs, used the crowbars to force open the doors, and ejected them from the union.

Later that day, they brought at least one rifle with a telescopic sight into the building. On Sunday afternoon, the administration agreed to press neither civil nor criminal charges and not to take any other measures to punish those who had occupied Willard Straight Hall, to provide legal assistance to anyone who faced civil charges arising from the occupation, and to recommend that the faculty vote to nullify the reprimands issued to those who had engaged in the “toy-gun spree.” Upon hearing that this agreement had been reached, 110 black students marched out of Willard Straight Hall in military formation to celebrate their victory, carrying more than seventeen rifles and bandoliers of ammunition.

The next day, when the faculty balked and stopped short of accepting the administration’s recommendation, one AAS [Afro-American Society] leader went on the campus radio and threatened to “deal with” three political science professors and three administrators, whom he singled out by name, “as we will deal with all racists.” Finally, on Wednesday, April 23, the faculty met at a special meeting and capitulated to the demands of the AAS, rescinding the reprimand issued by the student conduct tribunal and calling for a restructuring of the university.

At the very least, the Cornell story should give us some context for thinking about what happened at Yale last year. More generally, we should remember that the ceaseless hyperventilation of social media tends to make us think that American culture today is going through a unique process of dissolution. Rick Perlstein is one of my least favorite historians, but he does well to set us straight on that:

“The country is disintegrating,” a friend of mine wrote on Facebook after the massacre of five policemen by black militant Micah Johnson in Dallas. But during most of the years I write about in Nixonland and its sequel covering 1973 through 1976, The Invisible Bridge, the Dallas shootings might have registered as little more than a ripple. On New Year’s Eve in 1972, a New Orleans television station received this message: “Africa greets you. On Dec. 31, 1972, aprx. 11 pm, the downtown New Orleans Police Department will be attacked. Reason — many, but the death of two innocent brothers will be avenged.” Its author was a twenty-three-year-old Navy veteran named Mark James Essex. (In the 1960s, the media had begun referring to killers using middle names, lest any random “James Ray” or “John Gacy” suffer unfairly from the association.) Essex shot three policemen to death, evading arrest. The story got hardly a line of national attention until the following week, when he began cutting down white people at random and held hundreds of officers at bay from a hotel rooftop. Finally, he was cornered and shot from a Marine helicopter on live TV, which also accidentally wounded nine more policemen. The New York Times only found space for that three days later.

Stories like these were routine in the 1970s. Three weeks later, four men identifying themselves as “servants of Allah” holed up in a Brooklyn sporting goods store with nine hostages. One cop died in two days of blazing gun battles before the hostages made a daring rooftop escape. The same week, Richard Nixon gave his second inaugural address, taking credit for quieting an era of “destructive conflict at home.” As usual, Nixon was lying, but this time not all that much. Incidents of Americans turning terrorist and killing other Americans had indeed ticked down a bit over the previous few years — even counting the rise of the Black Liberation Army, which specialized in ambushing police and killed five of them between 1971 and 1972.

In Nixon’s second term, however, they began ticking upward again. There were the “Zebra” murders from October 1973 through April 1974 in San Francisco, in which a group of Black Muslims killed at least fifteen Caucasians at random and wounded many others; other estimates hold them responsible for as many as seventy deaths. There was also the murder of Oakland’s black school superintendent by a new group called the Symbionese Liberation Army, who proceeded to seal their militant renown by kidnapping Patty Hearst in February 1974. Then, in May, after Hearst joined up with her revolutionary captors, law enforcement officials decimated their safe house with more than nine thousand rounds of live ammunition, killing six, also on live TV. Between 1972 and 1974 the FBI counted more than six thousand bombings or attempted bombings in the United States, with a combined death toll of ninety-one. In 1975 there were two presidential assassination attempts in one month.

Let's pause for a moment to think about that: More than six thousand bombings or attempted bombings in two years.

So, is the country disintegrating? In comparison with the Nixon years: No. Not even with Donald Ivanka Kushner Trump in charge. Which is not to say that it couldn't happen, only that it hasn't yet happened, and if we want to avoid further damage we would do well to study the history of fifty years ago with close attention. For the national wounds that were opened in the Sixties may have scabbed over from time to time in the decades since, but they have never healed.

And in relation specifically to the university, we might ask some questions:

  • How significant is it that most of the people running our universities today were undergraduates when things like the Cornell crisis happened?
  • If it is significant, what is the significance?
  • To what extent are the social conflicts that plague some universities today continuations of the conflicts that plagued them fifty years ago?
  • If universities today seem, to many critics, to have lost their commitment to free speech and reasoned disagreement, have they abandoned those principles any more completely than they did at the height of those earlier student protests?
  • What happened in the intervening decades? Did universities recover their core commitments wholly, or partially, or not at all?
  • How widespread are protests (and the "coddling" of protestors) today in comparison to that earlier era?
  • What needs to be fixed in our universities?
  • Are universities that have gone down this particular path — praising and celebrating students who confront, berate, and in some cases threaten faculty — fixable? (A question only for those who think such behavior is a bug rather than a feature.)

Vital questions all, I think; but not ones that can be answered in ignorance of the relevant history.

Thursday, May 25, 2017

things and creatures, conscience and personhood

Yesterday I read Jeff VanderMeer’s creepy, disturbing, uncanny, and somehow heart-warming new novel Borne, and it has prompted two sets of thoughts that may or may not be related to one another. But hey, this is a blog: incoherence is its birthright. So here goes.

1.

A few months ago I wrote a post in which I quoted this passage from a 1984 essay by Thomas Pynchon:

If our world survives, the next great challenge to watch out for will come — you heard it here first — when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long.

If you look at the rest of the essay, you’ll see that Pynchon thinks certain technological developments could be embraced by Luddites because the point of Luddism is not to reject technology but to empower common people in ways that emancipate them from the dictates of the Capitalism of the One Percent.

But why think that future technologies will not be fully under the control of the “biggest of brass”? It is significant that Pynchon points to the convergence of “artificial intelligence, molecular biology and robotics” — which certainly sounds like he’s thinking of the creation of androids: humanoid robots, biologically rather than mechanically engineered. Is the hope, then, that such beings would become not just cognitively but morally independent of their makers?

Something like this is the scenario of Borne, though the intelligent being is not humanoid in either shape or consciousness. One of the best things about the book is how it portrays a possible, though necessarily limited, fellowship between humans and fundamentally alien (in the sense of otherness, not from-another-planet) sentient beings. And what enables that fellowship, in this case, is the fact that the utterly alien being is reared and taught from “infancy” by a human being — and therefore, it seems, could have become something rather, though not totally, different if a human being with other inclinations had done the rearing. The story thus revisits the old nature/nurture question in defamiliarizing and powerful ways.

The origins of the creature Borne are mysterious, though bits of the story are eventually revealed. He — the human who finds Borne chooses the pronoun — seems to have been engineered for extreme plasticity of form and function, a near-total adaptability that is enabled by what I will call, with necessary vagueness, powers of absorption. But a being so physiologically and cognitively flexible simply will not exhibit predictable behavior. And therefore one can imagine circumstances in which such a being could take a path rather different than that chosen for him by his makers; and one can imagine that different path being directed by something like conscience. Perhaps this is where Luddites might place their hopes for the convergence of “artificial intelligence, molecular biology and robotics”: in the arising, from that convergence, of technology with a conscience.

2. 

Here is the first sentence of Adam Roberts’s novel Bête:

As I raised the bolt-gun to its head the cow said: ‘Won’t you at least Turing-test me, Graham?’

If becoming a cyborg is a kind of reaching down into the realm of the inanimate for resources to supplement the deficiencies inherent in being made of meat, what do we call this reaching up? — this cognitive enhancement of made objects and creatures until they become in certain troubling ways indistinguishable from us? Or do we think of the designing of intelligent machines, even spiritual machines, as a fundamentally different project than the cognitive enhancement of animals? In Borne these kinds of experiments — and others that involve the turning of humans into beasts — are collectively called “biotech.” I would prefer, as a general term, the one used in China Miéville’s fabulous novel Embassytown: “biorigging,” a term that connotes complex design, ingenuity, and a degree of making-it-up-as-we-go-along. Such biorigging encompasses every kind of genetic modification but also the combining, in a single organism or thing, of biological components with more conventionally technological ones, the animate and the inanimate. It strikes me that we need a more detailed anatomy of these processes — more splitting, less lumping.

In any case, what both VanderMeer’s Borne and Roberts’s Bête do is describe a future (far future in one case, near in the other) in which human beings live permanently in an uncanny valley, where the boundaries between the human and the nonhuman are never erased but never quite fixed either, so that anxiety over these matters is woven into the texture of everyday experience. Which sounds exhausting. And if VanderMeer is right, then the management of this anxiety will become focused not on the unanswerable questions of what is or is not human, but rather on a slightly but profoundly different question: What is a person?

anti-Latour

When I made this chart I titled it "anti-Latour," but I don't remember why.

Tuesday, May 23, 2017

accelerationism and myth-making

I've been reading a good bit lately about accelerationism — the belief that to solve our social problems and reach the full potential of humanity we need to accelerate the speed of technological innovation and achievement. Accelerationism is generally associated with techno-libertarians, but there is a left accelerationism also, and you can get a decent idea of the common roots of those movements by reading this fine essay in the Guardian by Andy Beckett. Some other interesting summary accounts include this left-accelerationism manifesto and Sam Frank's anthropological account of life among the "apocalyptic libertarians." Accelerationism is mixed up with AI research and neoreactionary thought and life-extension technologies and transhumanist philosophy — basically, all the elements of the Californian ideology poured into a pressure cooker and heat-bombed for a few decades.

There's a great deal to mull over there, but one of the chief thoughts I take away from my reading is this: the influence of fiction, cinema, and music over all these developments is truly remarkable — or, to put it another way, I'm struck by the extent to which extremely smart and learned people find themselves imaginatively stimulated primarily by their encounters with popular culture. All these interrelated movements seem to be examples of trickle-up rather than trickle-down thinking: from storytellers and mythmakers to formally-credentialed intellectuals. This just gives further impetus to my effort to restock my intellectual toolbox for (especially) theological reflection.

One might take as a summary of what I'm thinking about these days a recent reflection by Warren Ellis, the author of, among many other things, my favorite comic:

Speculative fiction and new forms of art and storytelling and innovations in technology and computing are engaged in the work of mad scientists: testing future ways of living and seeing before they actually arrive. We are the early warning system for the culture. We see the future as a weatherfront, a vast mass of possibilities across the horizon, and since we’re not idiots and therefore will not claim to be able to predict exactly where lightning will strike – we take one or more of those possibilities and play them out in our work, to see what might happen. Imagining them as real things and testing them in the laboratory of our practice — informed by our careful cross-contamination by many and various fields other than our own — to see what these things do.

To work with the nature of the future, in media and in tech and in language, is to embrace being mad scientists, and we might as well get good at it.

We are the early warning system for the culture. Cultural critics, read and heed.

Monday, May 22, 2017

Frederick Barbarossa won't be around to save you

In the Boston Globe, Kumble R. Subbaswamy writes,

More than 850 years ago, the emperor of the Holy Roman Empire, Frederick Barbarossa, issued the Authentica habita, granting imperial protection for traveling scholars. This seminal document ensured that research and scholarship could develop throughout the empire independent of government interference, and shielded scholars from reprisal for their academic endeavors. These concepts, the foundation for what we now refer to as “academic freedom,” have, over the centuries, enabled some of the most significant advances in the history of humankind.

As chancellor of the University of Massachusetts Amherst, I work with my colleagues in an environment envied by others. Through the inventiveness of trial and error, the exchange of ideas, peer critique, heated debate, and sometimes even ridicule, we put ourselves out there, focused on our research and scholarly pursuits. Without the freedom to experiment, to fail, to persuasively defend our work, we would not learn, and then improve, and eventually succeed. Without this freedom, we would not be able to pass on to our students the importance of pursuing the truth.

All this is good, and well said, but the invocation of the Authentica habita is perhaps misplaced. For the purpose of that document was to protect scholars from anger or extortion by extra-academic forces, especially local political authorities across the Empire, whereas the most common threats to academic freedom today come from academics. Whenever an academic these days is threatened with serious personal or professional repercussions for articulating unapproved ideas, you can be pretty sure that the call is coming from inside the house.

So if you, fellow academic, think that justice requires that you police, fiercely, untenured assistant professors of philosophy who make arguments that read straight out of the Progressive Prayer Book but who stumble over one phrase: fine. Knock yourself out. But don’t expect anyone else to stand up, ever, for the principles that Frederick Barbarossa stood up for. And under the category “anyone else” I would specifically encourage you to remember local, state, and national legislatures, students, donors, and trustees.

I have beaten this drum over and over again in the past decade, so why not one more time? — People who think like you won’t always be in charge. This is a lesson that the Left seems especially incapable of learning, I think because of its deep-seated belief in the inevitability of progress, a belief that is belied by even the briefest inspection of Washington D.C. You, and people you want to support, may well pay in the future for every victory lap you take today.

But there's another problem here, one that operates in a different dimension — not the dimension of employment or prestige, but rather that of intellectual exploration itself. Some years ago, in a brilliant essay called "Philosophy as a Humanistic Discipline," Bernard Williams wrote of

the well known and highly typical style of many texts in analytic philosophy which seeks precision by total mind control, through issuing continuous and rigid interpretative directions. In a way that will be familiar to any reader of analytic philosophy, and is only too familiar to all of us who perpetrate it, this style tries to remove in advance every conceivable misunderstanding or misinterpretation or objection, including those that would occur only to the malicious or the clinically literal-minded.

But we now live in an academic world increasingly ruled by the malicious and the clinically literal-minded. They occupy the stage and issue their dictates, and get less and less resistance to any ukase they choose to promulgate. This leads to an environment which, by analogy to what Williams calls "the teaching of philosophy by eristic argument," "tends to implant in philosophers an intimidatingly nit-picking superego, a blend of their most impressive teachers and their most competitive colleagues, which guides their writing by means of constant anticipations of guilt and shame." With increasing frequency, this is what academic thought and academic discourse are driven by: constant anticipations of guilt and shame. Which is, needless to say, no recipe for intellectual creativity and genuine ambition.

Friday, May 19, 2017

fleshers and intelligences

I'm not a great fan of Kevin Kelly's brand of futurism, but this is a great essay by him on the problems that arise when thinking about artificial intelligence begins with what the Marxists used to call "false reification": the belief that intelligence is a bounded and unified concept that functions like a thing. Or, to put Kelly's point a different way, it is an error to think that human beings exhibit a "general purpose intelligence" and therefore an error to expect that artificial intelligences will do the same.

To this reifying orthodoxy in AI efforts, Kelly opposes five affirmations of his own:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

Expanding on that first point, Kelly writes,

Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence. If intelligence were a single dimension we should be able to arrange the intelligences of a parrot, a dolphin, a horse, a squirrel, an octopus, a blue whale, a cat, and a gorilla in the correct ascending order in a line. We currently have no scientific evidence of such a line. One reason might be that there is no difference between animal intelligences, but we don’t see that either. Zoology is full of remarkable differences in how animals think. But maybe they all have the same relative “general intelligence?” It could be, but we have no measurement, no single metric for that intelligence. Instead we have many different metrics for many different types of cognition.

Think, to take just one example, of the acuity with which dogs observe and respond to a wide range of human behavior: they attend to tone of voice, facial expression, gesture, even subtle forms of body language, in ways that animals invariably ranked higher on what Kelly calls the "mythical ladder" of intelligence (chimpanzees, for instance) are wholly incapable of. But dogs couldn't begin to use tools the way that many birds, especially corvids, can. So what's more intelligent, a dog or a crow or a chimp? It's not really a meaningful question. Crows and dogs and chimps are equally well adapted to their ecological niches, but in very different ways that call forth very different cognitive abilities.
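
Kelly's point can be made concrete. If we treat intelligence as a profile of scores along distinct dimensions rather than as a single number, ordinary dominance comparison yields only a partial order, and the "correct ascending order" Kelly asks for generally does not exist: some profiles are simply incomparable. Here is a toy sketch in Python; the dimensions and the scores are invented for illustration, not drawn from any actual measurements.

    # Toy illustration: multi-dimensional "intelligence profiles" admit
    # no single ascending line. Dimensions and scores are invented.
    import itertools

    profiles = {
        "dog":   {"social reading": 9, "tool use": 2},
        "crow":  {"social reading": 3, "tool use": 9},
        "chimp": {"social reading": 6, "tool use": 7},
    }

    def dominates(a, b):
        # a outscores b on every dimension, strictly on at least one
        return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

    for x, y in itertools.combinations(profiles, 2):
        if not dominates(profiles[x], profiles[y]) and not dominates(profiles[y], profiles[x]):
            print(f"{x} and {y} are incomparable: no linear ranking exists")

Every pair here prints as incomparable, which is all the argument needs: a partial order does not collapse into a line.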

If Kelly is right in his argument, then AI research is going to be hamstrung by its commitment to g or "general intelligence," and will only be able to produce really interesting and surprising intelligences when it abandons the idea, as Stephen Jay Gould puts it in his flawed but still-valuable The Mismeasure of Man, that "intelligence can be meaningfully abstracted as a single number capable of ranking all people [including digital beings!] on a linear scale of intrinsic and unalterable mental worth."

"Mental worth" is a key phrase here, because a commitment to g has been historically associated with explicit scales of personal value and commitment to social policies based on those scales. (There is of course no logical link between the two commitments.) Thus the argument frequently made by eugenicists a century ago that those who score below a certain level on IQ tests — tests purporting to measure g — should be forcibly sterilized. Or Peter Singer's view that he and his wife would be morally justified in aborting a Down syndrome child simply because such a child would probably grow up to be a person "with whom I could expect to have conversations about only a limited range of topics," which "would greatly reduce my joy in raising my child and watching him or her develop." A moment's reflection should be sufficient to dismantle the notion that there is a strong correlation between, on the one hand, intellectual agility and verbal fluency and, on the other, moral excellence; which should also undermine Singer's belief that a child who is deficient in his imagined general intelligence is ipso facto a person he couldn't "treat as an equal." But Singer never gets to that moment of reflection because his rigid and falsely reified model of intellectual ability, and the relations between intellectual ability and personal value, disables his critical faculties.

If what Gould in another context called the belief that intelligence is "an immutable thing in the head" which allows "grading human beings on a single scale of general capacity" is both erroneous and pernicious, it is somewhat disturbing to see that belief not only continuing to flourish in some communities of discourse but also being extended into the realm of artificial intelligence. If digital machines are deemed superior to human beings in g, and if superiority in g equals greater intrinsic worth … well, the long-term prospects for what Greg Egan calls "fleshers" aren't great. Unless you're one of the fleshers who controls the machines. For now.



P.S. I should add that I know that people who are good at certain cognitive tasks tend to be good at other cognitive tasks, and also that, as Freddie DeBoer points out here, IQ tests — that is, tests of general intelligence — have predictive power in a range of social contexts, but I don’t think any of that undermines the points I’m making above. Happy to be corrected where necessary, of course.

Anthropocene update

I promised a kind of summary or overview of my current project on Anthropocene theology, but that will need to wait a while. This post will explain why.

I understand the Anthropocene as an era in which some human beings are effectively the gods of this world but also are profoundly disoriented by their godlike status, while other human beings languish in the kinds of misery long familiar to residents of this vale of tears. That is, I think of the Anthropocene not only in terms of the power some humans have over the animate and inanimate worlds, but also in experiential and affective terms: what it feels like to be so empowered or, equally important, to be powerless in the face of other humans' power.

The idea of articulating a theological anthropology adequate to the Anthropocene era occurred to me when I realized that my interest in writing a technological history of modernity and my interest in writing a book about the theological implications of Thomas Pynchon's fiction were one and the same interest. And all of these topics are explored on this blog, but of course in no particular sequence. I therefore needed some way to find a thread through the labyrinth, to put these random explorations in some kind of order. So what tools did I need to make this happen? As long-time readers know, I am deeply committed to living in plain text: all of my instruments for writing and organizing are plain-text or enhanced-plain-text apps. So my first thought was: Emacs org-mode.

Org-mode is an exceptionally complex and powerful organizational system, one that I have fooled around with a good bit over the years — but I have never managed to commit fully to it, and it's the kind of thing that really does require full commitment: you just can't make it work to its fullest extent without embedding those keystroke sequences in your muscle memory. And further, it requires me to commit to the Mac (or Linux) as opposed to iOS, and I have been wondering whether for the long haul iOS will be the more stable and usable platform.

Enter OmniOutliner. Yesterday I went through all my posts tagged antheo, THM, and Thomas Pynchon and copied them into one large OmniOutliner document. This was a very slow and painstaking task, since I needed to turn every paragraph of every post into a discrete row, and along the way I needed to think about what order made the most sense.
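
A workflow aside, for fellow plain-text people: the rote part of that conversion could in principle be scripted. Below is a minimal sketch in Python, assuming the posts live as plain-text files in a posts/ folder with the title as the first paragraph; the folder layout and file names are assumptions for illustration, not a description of my actual setup. Since OmniOutliner imports OPML, generating OPML gets you one row per paragraph automatically. (Though in my case the slowness earned its keep: copying by hand forced the thinking about order.)

    # Sketch: turn a folder of plain-text posts into an OPML outline
    # (a format OmniOutliner imports), one top-level row per post and
    # one child row per paragraph. Paths and layout are assumptions.
    import glob
    import xml.etree.ElementTree as ET

    root = ET.Element("opml", version="2.0")
    ET.SubElement(root, "head")
    body = ET.SubElement(root, "body")

    for path in sorted(glob.glob("posts/*.txt")):
        with open(path, encoding="utf-8") as f:
            paras = [p.strip() for p in f.read().split("\n\n") if p.strip()]
        if not paras:
            continue
        post = ET.SubElement(body, "outline", text=paras[0])  # title row
        for para in paras[1:]:
            ET.SubElement(post, "outline", text=para)  # one row per paragraph

    ET.ElementTree(root).write("antheo.opml", encoding="utf-8", xml_declaration=True)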

Those decisions about order were rough-and-ready, not definitive, because the whole point of the exercise was to get my ideas into a format that would allow me easily to alter sequences and hierarchies — something that OmniOutliner makes very easy, especially since the keyboard shortcuts for moving a given row up or down, in or out, are the same on both MacOS and iOS. Once I had everything in the document, and had decided on a provisional structure, I went through and color-coded the different levels so that that structure would be immediately visible to me.

So now I have an outline of about 70,000 words — goodness, I've blogged a lot — and will need to take some time to figure out what its ideal organization will be, where the holes in the argument are, and so on. But even with just the one day's work, I am pleased at how the thing seems to be coming together. I think this really could be a book, and perhaps even a useful one. There will be a lot more reading and thinking to do, but as I do that reading and thinking, I have a strong outline into which I can place new ideas.

So I'm not ready, right now, to give an overview of the project. I need to meditate longer on the structure that I have and on what its deficiencies are. But I'm going to keep on exploring these issues, and some of that exploration will happen right here on this blog.

Wednesday, May 17, 2017

back in action (sort of)

Greetings, readers. I’m back from a visit to my old friends in Wheaton, and aside from seeing those old friends, possibly the best aspect of the trip — which I took by automobile, covering around 2500 miles all told — was the escape it offered from the relentless stimulus/response operant conditioning of social media, so much of which these days is Trumpcentric. I joked with some friends that I’m thinking of just driving around the country until we have a new President.

A few notes:

1) I spent some of the driving time mulling over this idea of Anthropocene theology, and in the next few days I’ll try to write up a kind of summary of where the project has come so far, as much for my benefit as yours. Then I may fall rather quiet about these matters for a while, as I have much reading to do. Also, at the beginning of next month I’ll be leading a faculty seminar at Biola University on technology and theology, so I am sure the people there will give me some responses and challenges. (Prep for and participation in that seminar will slow down my blogging, however.)

2) As part of my reading, I’m revisiting Prophets of the Posthuman: American Fiction, Biotechnology, and the Ethics of Personhood, by my friend and former colleague Christina Bieber Lake. It’s a really superb book, and re-reading it in light of this new project makes me realize how much of the necessary work Christina has already done. So thanks to her!

3) It’s really great to see my friend Alexis Madrigal back writing on technology (and other things) for The Atlantic, starting with this post on how the internet has changed since he started writing about it a decade ago:

“Products don't really get that interesting to turn into businesses until they have about 1 billion people using them,” Mark Zuckerberg said of WhatsApp in 2014. Ten years ago, there were hardly any companies that could count a billion customers. Coke? Pepsi? The entire internet had 1.2 billion users. The biggest tech platform in 2007 was Microsoft Windows and it had not crossed a billion users.

Now, there are a baker’s dozen individual products with a billion users. Microsoft has Windows and Office. Google has Search, Gmail, Maps, YouTube, Android, Chrome, and Play. Facebook has the core product, Groups, Messenger, and WhatsApp.

All this to say: These companies are now dominant. And they are dominant in a way that almost no other company has been in another industry. They are the mutant giant creatures created by software eating the world.

And it is this unprecedentedly massive — and unprecedentedly concentrated — collecting, analyzing, and selling of our personal data that has created the baseline “digital insecurity” that Steven Weber and Betsy Cooper write about:

The surprise is not that the frequency of such attacks is accelerating; it’s that it took so long. There are at least three reasons for this acceleration. First, the internet has a fundamentally insecure infrastructure that was initially made for interoperability among a small number of trusted parties, but is now being used by billions who do not know and should not trust one another.

The second reason is that increasingly inventive criminals have become today’s most ambitious internet entrepreneurs. Their work has been made easier by the theft of powerful hacking tools created by and for state security agencies but now available for sale.

Third is the commercial innovation imperative. Consumer demand for digital devices and services keeps pushing companies to the limits of what is technically possible, and then pressing them to go even a little bit further, where security often becomes nice to have but not a necessity.

Alexis is right, I think, to point out that this transformation of the internet, from a largely non-commercial place of safety into the most powerful engine of commerce (and of insecurity for its users) ever made, dates from the debut of the iPhone ten years ago. Ten years! Ten years that have turned a few billion humans’ worlds upside-down. The fastest economic transformation in history, and we haven't seriously begun to come to terms with it.

4) Finally: In London a couple of months ago Adam Roberts moderated a conversation between Kim Stanley Robinson and Francis Spufford about their recent novels, both set in New York City, but separated in time by several hundred years. I was there, and wrote about the conversation, and the books, for Education and Culture.

Friday, May 12, 2017

the idols and the true God

As I argued in two earlier posts, here and here, the smartphone is an idolorum fabricam, a perpetual idol-making factory. I want now to juxtapose that argument with something that might seem unrelated, the thesis articulated by the sociologist Christian Smith and his colleagues that the de facto religion of Americans, especially young Americans, is Moralistic Therapeutic Deism, or MTD. I want to call attention here to one of the key (though often unarticulated) principles of MTD, as described in Soul Searching: The Religious and Spiritual Lives of American Teenagers:

Moralistic Therapeutic Deism is about belief in a particular kind of God: one who exists, created the world, and defines our general moral order, but not one who is particularly personally involved in one’s affairs — especially affairs in which one would prefer not to have God involved…. For many teens, as with adults, God sometimes does get involved in people’s lives, but usually only when they call on him, mostly when they have some trouble or problem or bad feeling that they want resolved. In this sense, the Deism here is revised from its classical eighteenth-century version by the therapeutic qualifier, making the distant God selectively available for taking care of needs. (Chapter 4)

The chief point I want to make here is that the combination of idol-worship and belief in a selectively available Creator is an ancient one, and indeed is generally characteristic of non-Abrahamic religions. Consider this passage from Mircea Eliade’s The Sacred and the Profane:

The phenomenon of the remoteness of the supreme god is already documented on the archaic levels of culture. [There follow two pages of examples.] It is useless to multiply examples. Everywhere in these primitive religions the celestial supreme being appears to have lost religious currency: he has no place in the cult, and in the myths he draws farther and farther away from man until he becomes a deus otiosus. Yet he is remembered and entreated as a last resort, when all ways of appealing to other gods and goddesses, and ancestors, and the demons, have failed. As the Oraons express it: “Now we have tried everything, but we still have you to help us.” And they sacrifice a white cock to him, crying, “God, thou art our creator, have mercy on us.” (122, 125)

A few interesting and (I think) important points emerge from these juxtapositions.

  • The worship of idols in preference to the Creator is deeply embedded in the human mind: idol-worship is as it were the default religious position of homo sapiens sapiens;
  • Such worship is the default because for most people religion is in essence a practice of solutionism;
  • Since digital technologies are also primarily solutionist in orientation, they quite readily step in as substitute (new and improved!) idols;
  • If it is true, as Eliade says elsewhere in The Sacred and the Profane, that “To whatever degree he may have desacralized the world, the man who has made his choice in favor of a profane life never succeeds in doing away with religious behavior” (23), then it makes sense to consider at least some of our technological behavior as fundamentally religious in character;
  • The primary goal of the makers of the idols, or New Gods (in their software and hardware avatars), is to ensure that we continue to turn to the idols for solutions to our problems, and never to suspect that there are problems they cannot solve — or, what would be far worse, that there are matters of value and meaning in human life that cannot be described in solutionist terms.

I might also add that the only strong alternative to this whole complex of fears, hopes and aspirations is the quite different model of religion that arises in Judaism and is then continued in Christianity, the model that bypasses intermediary Powers in favor of a direct encounter with the Creator, and on grounds that are not strictly solutionist in character. “Though he slay me, yet will I hope in him.”

Thursday, May 11, 2017

the tragedy of angelism

Consider this the mirror-image of my previous post.

In Lost in the Cosmos — about which I wrote at enthusiastic length here — Walker Percy offers a “semiotic primer of the self” which takes as one of its chief concerns the problem of alienation and re-entry: experiences that throw us out of our familiar patterns, in ways both good and bad, and thereby generate the challenge of finding our way back into our lifeworld. For instance, this is a pattern generated by both the making and the experiencing of art:



[diagram: reentry]


But the problem of re-entry can also be created by suffering of any kind, what Hamlet called “the thousand natural shocks that flesh is heir to”; and this alienation, this being-cast-out, can be either the worst or the best thing that happens to us. Percy’s contemporary and coreligionist Flannery O’Connor writes of a character who has been so cast out receiving “some abysmal and life-giving knowledge”; but more commonly the knowledge is just abysmal.

Percy first used his space-age metaphor in his 1971 novel Love in the Ruins, whose protagonist, Dr. Tom More, invents the More Qualitative-Quantitative Ontological Lapsometer, a device capable of measuring a person’s alienation from his or her own life. For instance, here’s his description of the reading he gets when a troubled graduate student comes to him for help:

He registered a dizzy 7.6 mmv over Brodmann 32, the area of abstractive activity. Since that time I have learned that a reading over 6 generally means that a person has so abstracted himself from himself and from the world around him, seeing things as theories and himself as a shadow, that he cannot, so to speak, reenter the lovely ordinary world. Such a person, and there are millions, is destined to haunt the human condition like the Flying Dutchman. (34)

More comes to believe that humans who are so orbiting their own lives may eventually decide that theirs is a superior way, a higher calling — that they are somehow meant to live in orbit (like the “citizens” of Egan’s Diaspora who shake their digital heads at “bacteria with spaceships”). This is, More thinks, an understandable but catastrophic affliction. Recall that for space capsules the problem of re-entry is twofold: if the capsule approaches the atmosphere at too shallow an angle, it will bounce back out into orbit; if at too steep an angle, it will be consumed by fire. That’s why the condition of orbital exile is so prone to a Rortyan redescription as a Better Way. But we weren’t made to live in orbit, and Percy calls the belief that we can flourish out there “angelism”: trying to live like angels, disembodied creatures, we who are made to be embodied. An understandable catastrophe, but a catastrophe all the same.

It happened, he thinks, to his first wife, Doris, who

was ruined by books, by books and a heathen Englishman, not by dirty books but by clean books, not by depraved books but by spiritual books. God, if you recall, did not warn his people against dirty books. He warned them against high places. My wife, who began life as a cheerful Episcopalian from Virginia, became a priestess of the high places.… A certain type of Episcopal girl has a weakness that comes on them just past youth … They fall prey to Gnostic pride, commence buying antiques, and develop a yearning for esoteric doctrine. (64)

When they were still married, Doris was puzzled that her Catholic husband would always want to make love when he returned from Mass:

What she didn’t understand, she being spiritual and seeing religion as spirit, was that it took religion to save me from the spirit world, from orbiting the earth like Lucifer and the angels, that it took nothing less than touching the thread off the misty interstates [Ariadne's thread, that leads him out of the maze of the cloverleaf intersections and to a church] and eating Christ himself to make me mortal man again and let me inhabit my own flesh and love her in the morning. (254)

Eating Christ is how More finds the safe and right angle of re-entry, how he avoids both bouncing and burning. In Christ and not otherwise may he be brought back to his life. But Doris could not join him there, at the Altar or in daily life: her “clean books” had taken her to “high places” from which she would not, could not, come down. And so they were parted.

Angelism is not just personally catastrophic; it is socially so, one might say planetarily so. This becomes clear in a scene in which Tom More — whose medical specialty, not incidentally, is psychiatry — is confined to a psychiatric hospital and finds himself joined by a new patient: his priest, Father Rinaldo Smith, who had unexpectedly fallen silent at Mass when he was supposed to be preaching a sermon, then left the church, muttering that “the channels are jammed and the word is not getting through.”

Father Smith ends up at the hospital in the bed next to Tom More, who thus hears the questioning of the priest by a team of psychiatrists, led by one named Max.

“What seems to be the trouble, Father?” asks Max, pens and flashlight and reflex hammer glittering like diamonds in his vest pocket.

“They’re jamming the airwaves,” says Father Smith, looking straight ahead.… “They’ve put a gremlin in the circuit.”

“They?” asks Max. “Who are they?”

“They’ve won and we’ve lost,” says Father Smith.

"Who are they, Father?

“The principalities and powers.”

“Principalities and powers,” says Max, cocking his head attentively. Light glances from the planes of his temple. “You are speaking of two of the hierarchies of devils, are you not?”

The eyes of the psychiatrists and behaviorists sparkle with sympathetic interest.

“Yes,” says Father Smith. “Their tactic has prevailed.”

“You are speaking of devils now, Father?” asks Max.

“That is correct.”

“Now what tactic, as you call it, has prevailed?”

“Death…. I am surrounded by the corpses of souls. We live in a city of the dead.”

And — I believe this is the key theme of this brilliant if flawed novel — it is the voluntary self-exile of human beings, our acceptance of life in orbit, our defection from our proper role in the cosmos to a bogus angelism — that makes room for the principalities and powers. Thus near the end of the book, in a ruined but not destroyed world, as More reflects on the possible restorative uses of his Ontological Lapsometer, he offers, among other things, a wonderful repurposing of the favored populist slogan of Huey Long.

For the world is broken, sundered, busted down the middle, self ripped from self and man pasted back together as mythical monster, half angel, half beast, but no man. Even now I can diagnose and shall one day cure: cure the new plague, the modern Black Death, the current hermaphroditism of the spirit, namely: More’s syndrome, or: chronic angelism-bestialism that rives soul from body and sets it orbiting the great world as the spirit of abstraction whence it takes the form of beasts, swans and bulls, werewolves, blood-suckers, Mr. Hydes, or just poor lonesome ghost locked in its own machinery.

If you want and work and wait, you can have. Every man a king. What I want is no longer the Nobel, screw prizes, but just to figure out what I’ve hit on. Some day a man will walk into my office as a ghost or beast or ghost-beast and walk out as a man, which is to say sovereign wanderer, lordly exile, worker and waiter and watcher. (382–83)

Sovereign wanderer, lordly exile: dominion not as a simple possession but as a calling to which we may be at any given point more or less worthy, towards the fulfillment of which we should be moving as pilgrims, here and now, not afflicted by “the new plague, the modern Black Death” that flings us into orbit and keeps us there and teaches us to prefer the airless void to the things of this world.

Wednesday, May 10, 2017

fleshers and stoics

I’m going to be traveling for the next few days, by automobile, and will therefore be mostly away from the internet. I have queued up a few posts that will show up during that period, but I will probably be slow in approving comments.





Greg Egan’s novel Diaspora came out twenty years ago, and it anticipates in really interesting ways conversations that are going on right now. We have the uploading and downloading (and digital generation) of consciousness, explored in more detail than is usual in novels pursuing that theme, and in far more detail than Cory Doctorow gives in Walkaway. But Egan also provides some interesting, though not to my mind very satisfying, reflections on sexuality, gender, and embodiment.

In this far-future universe, we find a comparatively small number of fully, permanently embodied people. These “fleshers” have undergone profound genetic enhancement and modification — some of them, the “dream apes,” have even chosen to eliminate speech and certain high-level cerebral functioning in order to draw closer to Nature, or something like that — but despite their astonishing variety fleshers are perceived as a distinct group because of their permanent and stable embodiment. In this sense they differ from “gleisner robots,” who take on bodies of various kinds and live in the same time-frame as the fleshers, but are fundamentally digital intelligences. The third group are the “citizens,” who are generated digitally and exist in purely digital environments they call “scapes” — though citizens can take gleisner-robot form when they want. They don’t often want, though, and can be scathing in their contempt for embodied intelligences, whom some of them call “bacteria with spaceships.”

The citizens appear to one another as avatars, and typically these avatars have no determinate gender, so they refer to one another, and Egan refers to them, as “ve”, “vis”, and “ver.” (I was surprised in reading the book at how quickly I got used to this.) Some citizens, though, take on distinctively male or female form and assume the associated pronouns, though this appears to be one of the few things you can do in this world that generates widespread revulsion.

Here come the spoilers. Insofar as the story has a protagonist, that protagonist is a citizen called Yatima, and ve has a friend named Paolo (a gleisner/citizen) who decides to die. Yatima considers dying verself, but then says “I’m not ready to stop. Not yet.” However, ve is concerned for Paolo. “Are you afraid to die alone?”

“It won’t be death.” Paolo seemed calm now, perfectly resolved. “The Transmuters didn’t die; they played out every possibility within themselves. And I believe I’ve done the same, back in U-double-star … or maybe I’m still doing it, somewhere. But I’ve found what I came to find, here. There’s nothing more for me. That’s not death. It’s completion.”

“Maybe I’m still doing it, somewhere” refers to the possibility of clones of Paolo that are doing their own thing. Yatima thinks this really matters: “Paolo was right; other versions had lived for him, nothing had been lost.” I leave it as an exercise for the reader to decide whether this is a compelling point of view.

The most interesting thing here, though, I think, is Paolo’s assumption — which, for reasons just noted, among others, Yatima doesn’t question — that there are no longer any reasons to live once you have “played out every possibility.” That is, the value of life depends wholly on novelty. In a provocative digression in his book Early Auden, Edward Mendelson writes,

In romantic thought, repetition is the enemy of freedom, the greatest force of repression both in the mind and in the state. Outside romanticism, repetition has a very different import: it is the sustaining and renewing power of nature, the basis for all art and understanding…. Repetition lost its moral value only with the spread of the industrial machine and the swelling of the romantic chorus of praise for personal originality. Until two hundred years ago virtually no one associated repetition with boredom or constraint. Ennui is ancient; its link to repetition is not. The damned in Dante’s Hell never complain that their suffering is repetitive, only that it is eternal, which is not the same thing.

Many, many centuries from now, Paolo’s self-understanding is still governed by the valuation of repetition given us by the Industrial Revolution — or rather by Romanticism’s reading of the consequences of the Industrial Revolution. If it really works out that way, if the love for repetition cannot be recovered and neophilia reigns forever, then the Industrial Revolution will ipso facto turn out to be the most consequential event in the history of humanity. And post-humanity.

I wouldn’t mind reading a science-fiction novel that assumes the opposite. (I don’t know of one.)

There is one more illuminating moment in the scene I have been describing:

Paolo took ancestral form, and immediately started trembling and perspiring. “Ah. Flesher instincts. Bad idea.” He changed back, then laughed with relief. “That’s better.”

Paolo’s mind isn’t afraid of dying — but his body is. A good thing, then, that, since he has purposed to die, his body is dispensable, is merely an “ancestral form” that can be donned and doffed at will. For if the mind craves novelty and can’t think of reasons to live when the possibilities for novelty have been exhausted, the body takes the opposite view: it craves repetition, delights in repetition, and shakes in fear when it’s about to be deprived of the simple pleasure of “bearing witness / To what each morning brings again to light.”

People will call Paolo’s mind’s viewpoint Gnostic, but that’s a word that is used far too loosely these days. Paolo doesn’t hate embodiment, or think embodiment a curse: it is because he values embodiment that at this crucial moment he wishes to “take ancestral form.” But he believes that the body’s verdicts are not wholly trustworthy, and that at times they need to be overridden by the intellectual powers he believes to be higher. This is not Gnosticism; it is Stoicism.

In C. S. Lewis’s Till We Have Faces, when the Fox, the Greek tutor of the book’s protagonist, falls out of favor with the King, he decides that his best remaining course is to take his own life:

“Down by the river; you know the little plant with the purple spots on its stalk. It’s the roots of it I need.”

“The poison?”

“Why, yes. (Child, child, don’t cry so.) Have I not told you often that to depart from life of a man’s own will when there’s good reason is one of the things that are according to nature? We are to look on life as — ”

“They say that those who go that way lie wallowing in filth — down there in the land of the dead.”

“Hush, hush. Are you also still a barbarian? At death we are resolved into our elements. Shall I accept birth and cavil at — ”

“Oh, I know, I know. But, Grandfather, do you really in your heart believe nothing of what is said about the gods and Those Below? But you do, you do. You are trembling.”

“That’s my disgrace. The body is shaking. I needn’t let it shake the god within me. Have I not already carried this body too long if it makes such a fool of me at the end?”

That the Fox is a Stoic is clearly marked throughout the novel, not least by his repeated reference to what is or is not “according to nature.” What we see in Diaspora and Till We Have Faces alike is not Gnosticism — the idea that some evil demon has imprisoned us in bodies and delights in our imprisonment — but rather the characteristically Stoic attempt to reckon with the unquestionable truth of “nature” that bodies are vulnerable and bodies know that they are vulnerable.

The root of what I am calling our Anthropocene moment is the desperate hope that the very technological prowess that has put our natural world, and therefore the bodies of those who live in it, in such dreadful danger may also be turned, pivoted — as it were converted — to safeguard Life; that we may overcome by technical means the vulnerability of those bodies. It’s really the most sophisticated (and potentially insidious) version I know of Stockholm Syndrome.

Look for a rather different fictional perspective on these matters in tomorrow’s post.

Tuesday, May 9, 2017

revisiting myth and myth-making

In a recent post, I wrote, “I think we desperately need now a recovery of interest in metaphor and myth – not a simple return to the days of Northrop Frye and Mircea Eliade and Susanne Langer, but a redirecting of attention to those fields of inquiry in light of what we have learned since that half-century ago heyday of mythology and mythopoesis.”

I took an interest in this kind of thing as an undergraduate and in my first year of graduate school; I read something (can’t remember now what it was) that recommended Susanne Langer’s Philosophy in a New Key, which led me to Ernst Cassirer’s Language and Myth; and in one of my graduate courses we read Paul Ricoeur’s The Symbolism of Evil. There was a great army of scholars in those days exploring the relations among myth, ritual, symbol, metaphor. But I soon learned that it was not really appropriate to invoke this kind of scholarship in my papers. Nobody said anything explicit, of course — that’s rarely how it works — but it became clear to me that what we might call, borrowing a turn of phrase from Mark Greif, the “discourse of myth” was simply not part of the current critical conversation. It was the kind of thing that people used to talk about back in the day, but no longer relevant. And being a relatively bright young man, I got the message and adjusted my interests accordingly.

But now, in my extreme old age, I am wondering whether I missed something important by setting aside those early interests of mine. Some of those long-neglected figures (LNFs) now strike me as making valuable contributions to topics that we just don’t discuss any more, or discuss only superficially. And, tellingly, I started thinking about those LNFs again when I realized that they had had a major influence on writers and scholars whom I think especially provocative and insightful — Thomas Pynchon, Walker Percy, Ursula LeGuin, Walter Ong — and on many others whom I may not admire as unreservedly but who have made a major contribution to our current intellectual culture: Marshall McLuhan, for instance.

So I’m going to be spending some time in the next few months with those LNFs. I don’t expect that I’ll be able to read them as their first readers did — the linguistic turn and the historicist turn of later Theory have shaped my thinking too deeply for that — but I think if we add their insights to those of later thinkers we could come up with a stronger understanding of certain phenomena that few scholars seem to be thinking about these days. Let me return, then, to the passage from Kolakowski’s The Presence of Myth that I quoted without comment in that previous post:

Metaphysical questions and beliefs reveal an aspect of human existence not revealed by scientific questions and beliefs, namely, that aspect that refers intentionally to nonempirical unconditioned reality. The presence of this intention does not guarantee the existence of the referents. It is only evidence of a need, alive in culture, that that to which the intention refers should be present. But this presence cannot in principle be the object of proof, because the proof-making ability is itself a power of the analytical mind, technologically oriented, which does not extend beyond its tasks. The idea of proof, introduced into metaphysics, arises from a confusion of two different sources of energy active in man’s conscious relation to the world: the technological and the mythical.

Recent humanistic scholarship has been generally skeptical of any claims for the existence of any “nonempirical unconditioned reality” — indeed, so skeptical that it has been largely unable to comprehend what those claims even are, as David Bentley Hart has lucidly explained in his best book, The Experience of God. When you can only see such claims as the thinnest of coverings for the libido dominandi, you disable yourself from investigating their logic, their metaphorical structure, the way they go about their business of interpreting the world. Your understanding will be confined not just to instrumental accounting, but to highly limited forms of instrumental accounting. Any genuinely useful interpretation of “man’s conscious relation to the world” will take full account of both the technological and the mythical, in all their complex interanimations, and not merely reduce the latter to the former.

Monday, May 8, 2017

mobility, bicycles, cyborgs

I’ve mentioned that Adam Roberts is blogging his way through the voluminous works of H. G. Wells, and I’ve found myself thinking often about this post, on Wells’s early book The Wheels of Chance: A Bicycling Idyll (1896). At one point in the post Adam engages in helpful ways with Paul Smethurst’s recent book The Bicycle: Towards a Global History (2015):

Smethurst’s account of the rise of the bike argues for speed as the salient, something he equates with a new mode of mastery that is both spatial and sexual. ‘Pedestrian travel is more embodied and place-bound than bicycle mobility, but mastery of space is more limited,’ he suggests. ‘Ground gained step-by-step can be less expansive: there is little sense of speed and motion is absorbed into the surrounding space. … Bicycle mobility has a greater potential for transgression than walking because the cyclist can more readily breach the boundaries of social space.’ [Smethurst, 64] He concedes that the motor car ‘has displaced the bicycle as a figure of speed’ nowadays, but maintains that bicycling involves the actual penetration of space in a way that the spectator-like experience of driving does not.

Then Adam quotes Smethurst:

As modernity advanced in the West in the late 19th century, the idea of existential spatiality was beginning to supersede attachment to traditional place-bound community, in both theory and practice. … Humans are said to be able to cope with severing ties to traditional place-bound communities through a capacity to objectify the world by setting themselves apart, by creating a gap. While this is sometimes represented in modernism as a negative sense of alienation, bicycle mobility re-engages the subject through narcissistic projection and a mastery of space en passant.

And comments:

It’s a particular kind of machine, in other words. Wells pitches the narcissistic projection (as it were) as comic, and his take on the mastery of space is tied, I am going to argue, as I freewheel down the hill of this post, with a sensibility we would nowadays call cyborg. Not just the fusion of man and machine in the context of modernity, the fusion of male and female, and their respective modes of sexual desiring.

You should read the whole post. It’s really good.

I think both Roberts and Smethurst are onto something quite important here. Reading them together you discern that the bicycle as a technology occupies a distinctive point where embodiment and crossing meet. (I say “crossing” rather than “transgression” because I don’t want to confine myself to morally or politically freighted uses, and though the root of “transgress” means simply to “step across,” we now use it exclusively to describe bold, risky crossings that defy something or someone, either for good or ill. That’s too freighted a set of connotations for my purposes. Smethurst often uses the term “crossing” for similar reasons.) The appeal of the bicycle lies in its power to enable crossings of space, including politicized social space, that would be frustratingly time-consuming on foot, but to do so in a way that requires your embodiment, that demands your full physical engagement. And if Adam is right, this particular nexus of possibility is powerful enough that people can become in a sense fused with their bicycles and thereby become proto-cyborgs.

As Adam notes in another post, this one on the 1905 book A Modern Utopia, the question of mobility is an essential one for Wells:

Not for the first time in Wells’s career, the ability to move freely about is the real index of utopian desire. His alt-world, with its globe-spanning networks of rapid electric trams and trains, and its happily nomadic population, is one vision of that possibility. Where Thomas More sequestered his utopia on an island against the hostility of the larger world, Wells inverts that model: his whole world is perfect except for ‘the Island of Incurable Cheats’, ‘Islands of Drink’ and so on. But this larger logic of inversion reveals itself as, actually, an ideological shift. For just as Wells’s Utopians zoom here and there with ideal and total mobility, so they are surveilled with an ideal and total surveillance. Every Utopian is assigned ‘a distinct formula, a number or “scientific name,” under which he or she could be docketed’, and every single citizen is included in this database: ‘the record of their movement hither and thither, the entry of various material facts, such as marriage, parentage, criminal convictions and the like’.

This, provocatively, suggests a proportional relationship between a given person’s mobility and his or her legibility (to borrow a term from James Scott): you are free to move about insofar as the state can “read” you, can know who you are no matter where you are. As mobility goes up, privacy goes down; one freedom comes at the expense of the other.

In this context we might note that in the (benevolently?) panoptic world described by Iain M. Banks in his stories of the Culture, those who commit crimes are not imprisoned but rather are followed everywhere they go by a drone, which in turn leads to social ostracism. Mobility is not restricted, because the prerogative of the state to punish does not, in circumstances of unlimited surveillance, require the restriction of mobility. But for the person who gets “slap-droned,” freedom of movement may not have much point.

But in our imperfectly surveilled world, one of the primary ways that citizens become legible to the government is through having homes, domiciles, permanent addresses. A legal system like the Schengen Agreement is meant to apply to people whose governments are sure to know where they live; when it’s made to deal with refugees and others who are homeless, confusion ensues. For those who make, and most completely benefit from, the rules by which the state sees us, mobility might seem to be an unalloyed good, which is why Emmanuel Macron’s campaign slogan was En Marche! — On the Move! On the way! To where, one might ask. But it doesn’t matter; the point is simply that we picture ourselves as mobile people, unconstricted by place.

But if you’re a Syrian refugee, being en marche can become a curse. It is good, indeed, to reduce one’s chances of being bombed or gassed or shot, but it is also harrowing to have no idea when one can stop being on the move, can rest — can, maybe, even have a home. We might here offer a thesis: The value of mobility is relative to the option of stability.

In this recent essay on displaced persons, past and present, Peggy Kamuf writes,

What, then, of the right to move, the right to migrate? Is it not the most fundamental human right, presumed by every other right that can be claimed as a human right? … Although the Universal Declaration of Human Rights recognized in 1948 that everyone has “the right to leave any country, including his own,” none of its 30 articles says anything of the right to migrate to elsewhere. As for freedom of movement, the Declaration envisions it solely “within the borders of each state” (Article 13, “Everyone has the right to freedom of movement and residence within the borders of each state”). As conceived by the UN, then, freedom of movement is a right limited by the sovereignty of the nation state. Writing in the same year, Hannah Arendt pointed to just this limitation of the “best-intentioned humanitarian attempts to obtain new declarations of human rights from international organizations.”

Freedom to depart means very little if there is not also the freedom to arrive.



In a few days I’ll be going to Wheaton, Illinois — a thousand miles from where I now live, in Texas — to visit my old friends, and I’ve decided to drive. I’ll not try to do it in one day; I’ll have to stop overnight; it’s not exactly a scenic drive; and yet I’d rather put up with those inconveniences than with the multiple indignities of commercial air travel. That is, in this particular case, I would rather accept restricted mobility than accept the multiple ways that the TSA and the airlines demand that I become what Michel Foucault calls a “docile body.” (I might feel different about all this if I could afford business- or first-class travel, but I can’t.)

All of which should serve as a reminder that it is not only mobility that we are discussing here. Flying does not give me more mobility, it gives me greater speed: that is to say, it uses less time. If I were wholly unconcerned about time I could ride a bicycle or walk to Illinois. But if I were more concerned about time than I am — if, for instance, I were in the middle of a school term and could only spare a couple of days — then I’d simply have to accept the indignities of being the airline’s docile body. Or stay home. But I’m not in school right now, I have no pressing deadlines, and my wife and son are happy to share his car for a few days; so I’ll be driving.

Publicists and salespeople speak of “the romance of travel,” but not all travel is romantic, and among the kinds that could plausibly be so described, there are multiple sources of appeal. Crossing the Atlantic on the Queen Mary, or Europe on the Orient Express, or Route 66 in an old Mustang, may all be romantic, but in radically different ways. (Is flying first-class to Europe also romantic, or just luxurious? I’m not sure.) We might experience the romance of being served, the romance of novelty, or the romance of … well, what is the driving-cross-country romance, the On the Road romance? It has a good deal to do with making your own decisions, driving as long as you want to drive and stopping when you want to stop. The romance of novelty can still be had in an automobile, but can be more readily had if you stay off the interstate highway system, which promises (and delivers) the complete absence of novelty.

Because you drive the automobile yourself — a situation that will last for the next few years at least — a fairly high level of physical as well as mental engagement in the act is possible, especially if (a) your car has a manual transmission and (b) you’re not on the interstate. As Nick Carr points out in his book on the powers and limits of automation, The Glass Cage, it’s even possible when driving a car to enter into the state of flow celebrated by Mihály Csíkszentmihályi — and that sense is what enables the cyborg-feeling that Adam talks about in his post on bicycles.

So any serious understanding of mobility will require that we map our experiences on a complex set of axes:

  • slow/fast
  • embodied/disembodied
  • independent/docile
  • impoverished/luxurious
  • rooted/rootless (or secure/insecure, or sameness/novelty)

Typically, people emplaced in the world as I am — i.e., wealthy people in safe and stable societies — have control over at least some of these dimensions, while the further you descend the social scale the fewer options are available. And those who can choose will choose rather differently, because they will have different “sweet spots”: for some the conservation of time will be paramount, and they will therefore fly whenever flying takes less time than driving; others will prefer to stay local so they can be on their bicycle, or on their feet, as much as possible; and so on. We’ll have different preferences in different circumstances, of course; but each of us, I think, has an “all things being equal” default preference when it comes to being en marche.

Please look again at the binaries listed above. In general, I think we’ve seen over the past century or so a dramatic shift of preference towards the right-side options: willing to be more docile and disembodied in exchange for speed, luxury, and rootlessness. Which is why, even if the most important thing an individual can do for the environment is to stop flying, that’s simply unthinkable even to the most bien-pensant among us.

But I wonder if that could change, given (a) the increasing unpleasantness of air travel, (b) increasing reports of the unpleasantness of air travel, or (c) both. I have always hated long-distance driving, but the more time I spend in airports the better driving looks to me, thus my decision about this week’s trip. And next month, when Teri and I head to Biola University in Los Angeles for me to lead a faculty seminar there, we’ll also be automobiling there — certainly a more interesting drive than the one from Waco to Chicagoland, but also a longer one. And then maybe those who now drive can recover the pleasure of bicycling … Well, it’s something to hope for.

Though I don’t think the trajectory can be reversed: speed and neophilia (the love of novelty) are, I think, sufficiently desirable to most people who have choices that they’ll gladly accept docility and disembodiment in exchange for them. And that exchange is one of the key paths to the posthuman.

Friday, May 5, 2017

restocking the toolbox

Maybe the coolest thing about my current project is that I get to read — I am obliged to read — theology, the history of technology, and science fiction, in a sort of rotation. These very different genres rub against one another in fascinating ways.

But I am finding that the theology I’m reading isn’t helping me much, at least not so far, and I’m somewhat troubled by that. In this post I’m going to try to explain my frustrations. I’ll be recording impressions more than formulating firm judgments, as a means, I hope, of clarifying those impressions. But because I don’t want to be unfair I won’t be naming names of authors or books. This may make the post less useful to others; if so, my apologies.

Here’s my first impression: professional theologians have acquired in the course of their training a conceptual toolbox which they believe to contain the tools necessary to evaluate and critique cultural developments. Now, that conceptual toolbox was developed and acquired in an era previous to the emergence of our current technopoly, of what I’m calling the Anthropocene — see my first post on the subject for a definition. Yet the structures and practices of the Anthropocene are precisely what require theological interpretation. So in my judgment the existing toolbox is inadequate; but it does not appear that way to the theologians.

Imagine a complex locking mechanism — the kind of thing one might see in Myst — you know, like this:




[image: a locking mechanism of the sort found in Myst]


The theologians’ toolbox contains instruments that enable them to manipulate the mechanism — click this and turn that — which is enough to make them believe that they are making real progress. What they don’t notice is that the locks aren’t opening.

Is that a useful metaphor? Hmmm, I’m not sure. Let’s try a different one: they’re typing the instructions they know into a command-line shell and are pleased that they’re getting responses in return. What they don’t realize is that those responses are error messages. They don’t know the right commands to get their requests executed; they may not even, probably don’t, know the language in which the program was written or to which it will respond.



[image: a command-line interface]
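
To make that metaphor concrete, here is a toy sketch in Python; everything in it is invented for the purpose of illustration (the commands, the error codes, the whole “shell”), and it models nothing but the metaphor itself:

    # A toy "shell" for the metaphor above. It answers every input,
    # but only a couple of verbs are ever actually executed.
    KNOWN_COMMANDS = {"status", "help"}

    def shell(command: str) -> str:
        """Return a response for every input; execute almost none."""
        words = command.split()
        verb = words[0] if words else ""
        if verb in KNOWN_COMMANDS:
            return f"OK: executed '{verb}'"
        return f"ERR 127: '{verb}' is not a command this system recognizes"

    for attempt in ["critique modernity", "apply framework", "status"]:
        print(f"> {attempt}")
        print(shell(attempt))
    # The first two replies look like answers; they are refusals.
    # Only the third request is actually carried out.

The point of the sketch is only this: a response is not the same thing as an execution, and fluency at the prompt is not evidence that anything has been understood.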


I’m groping for metaphors here — but that’s telling, because whenever we’re trying to understand some new phenomenon we do so by employing metaphors as bridges between the known and the unknown. Our transition to the Anthropocene era is therefore popping with metaphors: to take just one common example, increasing attention to research on the workings of the human brain has been accompanied by increasing reliance on the notion that the brain is a kind of computer. It isn’t, and the more dominant that metaphor is the less we are likely to understand our brains; but that just makes the repeated invocation of that metaphor all the more telling and all the more worthy of exploration.

The tools in the theologians’ toolbox don’t work very well with metaphors. They are, rather, almost all designed to work on explicit concepts and propositions, which may then be juxtaposed to the explicit concepts and propositions of theology. Metaphors contain or allude to concepts and propositions but also embody desires, orientations of the will, impulses, attractions and repulsions, bodily practices….

I would like to see, and not just in theology but in all the other humanistic disciplines, a renewed attention to metaphor and myth – matters so thoroughly and relentlessly explored in the 1950s and 1960s that scholars and artists alike became exhausted with those topics and turned to other matters: first the linguistic turn of deconstruction and allied movements and then the material turn of the New Historicism, cultural studies, eco-criticism, body criticism, and the like.

Meanwhile the powerful cultural work of metaphor and myth continues unnoticed by scholars and rarely even acknowledged by writers and artists. It is not that scholars today are unaware of metaphor, or wholly inattentive to it, but they are chiefly interested in the extent to which it is reflective of ideology. For instance, in one of the better-known passages of Lakoff and Johnson’s Metaphors We Live By we see the various ways in which argument is conceptualized as war. That is a useful point (I draw on it in my forthcoming How to Think), but this kind of analysis, which draws a straight line between a particular metaphor and some common element from elsewhere in our cultural lives, ideally one with a clearly political character, marks only one of the ways that metaphor works. It’s interpretatively limited because it’s unaware of the ways that metaphors do affective and aspirational work that is not reducible to, or even identifiable with, any particular spot on our ideological maps.

In Walkaway, about which I posted recently, it’s interesting to see how Cory Doctorow places almost all his hopes for the future in the development of 3D printing, without, I think, realizing that the 3D printer has taken on for him a radiating metaphorical significance that places it somewhere along a continuum between Vaucanson’s defecating duck and a deus ex machina. There’s an interpretation of this ready-to-hand: the 3D printers in this novel are a synecdoche for capitalism, which fulfills our desires while hiding from our sight the preconditions and the raw materials from which our wish-fulfillments are concocted. And that’s true, but there is far more going on here, including, I think, another example of the power of universal machines, which, as I commented the other day, makes the idolorum fabricam into the idol itself. The smartphone and the 3D printer are the two smiling faces of the god of this world.

This is the kind of ongoing metaphorical meaning-making that theologians need to understand but that they don’t have the tools to explore. I think we desperately need now a recovery of interest in metaphor and myth – not a simple return to the days of Northrop Frye and Mircea Eliade and Suzanne Langer, but a redirecting of attention to those fields of inquiry in light of what we have learned since that half-century ago heyday of mythology and mythopoesis.

Moreover, Christian approaches to contemporary culture need to get more creative in the making of metaphors, not just the interpreting of them. And if that seems too risky to people, then they might remember that one of the ways to do this is by recovering the lost imaginative worlds of our predecessors in the faith. In this light we might take as our models the leaders of the mid–20th century nouvelle théologie, whose theology was nouvelle because it was based in ressourcement, in the recovery of ideas and metaphors that had been forgotten in the development of scholastic theology and the intellectual war with Protestantism.

I’ll end today’s incoherent rambling with a passage from Leszek Kolakowski’s early book The Presence of Myth, a passage that I think hints provocatively at the Powers that I’m trying to bring together in this project:

Metaphysical questions and beliefs reveal an aspect of human existence not revealed by scientific questions and beliefs, namely, that aspect that refers intentionally to nonempirical unconditioned reality. The presence of this intention does not guarantee the existence of the referents. It is only evidence of a need, alive in culture, that that to which the intention refers should be present. But this presence cannot in principle be the object of proof, because the proof-making ability is itself a power of the analytical mind, technologically oriented, which does not extend beyond its tasks. The idea of proof, introduced into metaphysics, arises from a confusion of two different sources of energy active in man’s conscious relation to the world: the technological and the mythical.

Wednesday, May 3, 2017

LiquidText

LiquidText, an iPad app for annotating PDFs and webpages, is a genuinely remarkable achievement — a delightful and useful piece of software engineering. Here’s what an annotated LiquidText file looks like:


You’ll see that you can highlight, but also comment in the margin on what you have highlighted, connect other comments to that, and pull out highlighted passages and keep them in the margin. It’s also possible to connect comments to one another in a mind-mapping sort of way, which could be very useful for visual thinkers. However, you’ll probably need a 12.9" iPad to make that work — on my 9.7" model there’s just not enough room unless I shrink the document to the point that it’s unreadable.

Possibly my favorite feature of LiquidText is “Highlight View”: when you enable it, you can then pinch the screen vertically and see all the passages you’ve highlighted:

This is extremely useful. And in general I feel that LiquidText helps me to be a better reader: more active, more responsive, and able to make better use of my responses.

The shortcomings:

  • There are a limited number of file formats (basically PDFs and webpages) that you can import into LiquidText — it would be really cool if you could import, say, EPUB files. I would say that only about 10% of the reading I do is possible in LiquidText.
  • Your LiquidText files are just that: files saved in the app’s proprietary format. You can export to a standard PDF and preserve much of your highlighting, but in so doing you lose some of the most useful relations among notes and highlights. That’s not the fault of the app’s makers, but that’s the way it is.
  • LiquidText is iPad-only, which means that you need to be pretty invested in that device to make the app a central element of your reading life. But it’s a good enough app that it makes me give further consideration to the possibility of going iOS only.

Tuesday, May 2, 2017

being right to no effect

This post of mine from earlier today, which was based on this column by Damon Linker, has a lot in common with this post by Scott Alexander:

I write a lot about how we shouldn’t get our enemies fired lest they try to fire us, how we shouldn’t get our enemies’ campus speakers disinvited lest they try to disinvite ours, how we shouldn’t use deceit and hyperbole to push our policies lest our enemies try to push theirs the same way. And people very reasonably ask – hey, I notice my side kind of controls all of this stuff, the situation is actually asymmetrical, they have no way of retaliating, maybe we should just grind our enemies beneath our boots this one time.

And then when it turns out that the enemies can just leave and start their own institutions, with horrendous results for everybody, the cry goes up “Wait, that’s unfair! Nobody ever said you could do that! Come back so we can grind you beneath our boots some more!”

Conservatives aren’t stuck in here with us. We’re stuck in here with them. And so far it’s not going so well. I’m not sure if any of this can be reversed. But I think maybe we should consider to what degree we are in a hole, and if so, to what degree we want to stop digging.


Which in turn has a lot in common with this post by Freddie deBoer:

Conservatives have been arguing for years that liberals essentially want to write them out of shared cultural and intellectual spaces altogether. I’ve always said that’s horseshit. But I’m trying to be real with you and take an honest look at what’s happening in the few spaces that progressive people control. In the halls of actual power, meanwhile, conservatives have achieved incredible electoral victories, running up the score against the progressives who in turn take out their frustrations in cultural and intellectual spaces. This is not a dynamic that will end well for us.

Of course by affirming this version of events from conservatives, I am opening myself to the regular claim that I am a conservative. Which is incorrect; I have never been further left in my life than I am today. But you can understand it if you understand the contemporary progressive tendency to treat politics as a matter of which social or cultural group you associate with rather than as a set of shared principles and a commitment to enacting them by appealing to the enlightened best interest of the unconverted. That dynamic may, I’m afraid, also explain why progressives risk taking even firmer control of campus and media and Hollywood and losing everything else.


Which, in another turn, has a lot in common with this column by Andrew Sullivan:

I know why many want to dismiss all of this as mere hate, as some of it certainly is. I also recognize that engaging with the ideas of this movement is a tricky exercise in our current political climate. Among many liberals, there is an understandable impulse to raise the drawbridge, to deny certain ideas access to respectable conversation, to prevent certain concepts from being “normalized.” But the normalization has already occurred — thanks, largely, to voters across the West — and willfully blinding ourselves to the most potent political movement of the moment will not make it go away. Indeed, the more I read today’s more serious reactionary writers, the more I’m convinced they are much more in tune with the current global mood than today’s conservatives, liberals, and progressives. I find myself repelled by many of their themes — and yet, at the same time, drawn in by their unmistakable relevance.


What all these writings have in common is this: We are all saying to the Angry Left that it’s unwise, impractical, and counterproductive to think that you can simply refuse to acknowledge and engage with people who don’t share your politics — to trust in your power to silence, to intimidate, to mock, and to shun rather than to attempt to persuade.

I think we’ve all made very good cases. I also think that almost no one who needs to hear what we have to say will listen. So what will be the result?

Freddie is right to say that the three industries where the take-no-prisoners model is most entrenched are Hollywood, the news media, and the university. And that entrenchment leads, as I have explained before, to the perception of ideological difference as defilement — a thesis that I think goes a long way towards explaining the intensity of the outrage about Bret Stephens’s NYT column on climate-change rhetoric. The purging of those who have defiled the community is a feasible practice unless and until the departure of those people is costly to the community; and each of those three cultural institutions assumes without question that no costs will be incurred by cathartic expulsion of the repugnant cultural Other.

Hollywood could be right to make this assumption: certainly there are no plausible alternatives to its dominance, though that dominance might take new forms — e.g. more movies and series made outside the conventional studio structure by new players like Netflix and Amazon. (It’s possible, though I think highly unlikely, that those new players will attempt to exploit a socially conservative audience.)

But it’s hard to think of two white-collar professions more imperiled than journalism and academia. The belief that left or left-liberal university administrators and professors, and journalists and editors, have in their own impregnability is simply delusional. If they connected their political decisions to their worried meetings about rising costs and desiccating sources of revenue, they would realize this; but the power of compartmentalization is great.

So what I foresee for both journalism and academia is a financial decline that proceeds at increasing speed, a decline to which ideological rigidity will be a significant contributor, though certainly not the only one. (The presence of other causes will ensure that publishers, editors, administrators, and the few remaining tenured faculty members will be able to deny the consequences of rigidity.) I also expect this decline to proceed far more quickly for journalism than for academia, since the latter still has a great many full-time faculty who can be replaced by contingent faculty willing to work for something considerably less than the legal minimum wage.

But at least the people who run those institutions will be able to preserve their purity right up to the inevitable end.