Text Patterns - by Alan Jacobs

Tuesday, June 27, 2017

a great silence cometh

We’re going to have radio silence here at Text Patterns for a while — I’m coming to the end of a year of research leave and have a number of projects, small and large, that I need to wrap up before school resumes in August.

One of those projects is related to my previous post: Our social media put us in an odd situation in relation to words, our own and others’. People say things they don't mean, or that they declare afterwards that they don't mean; they retweet or report or link to statements that they later (when challenged) say they don't endorse. Every day we see people positioning themselves in oblique relation to language. Mikhail Bakhtin — early in his career, which is to say, early in the history of the Soviet Union, when people’s words were already landing them in prison — wrote of the moral imperative of being answerable for one’s language; Wendell Berry, in one of his most important essays, wrote of the inestimable value of “standing by words,” standing by what we say. It’s hard to imagine concepts more foreign to the way language is used on social media today. So I’m writing an essay about the restoration of answerability.

I’m also in the early stages of writing about the changing fortunes of the concept of nature, something that I discussed briefly in a few recent posts.

As usual, I’ll be posting images, links, and very brief reflections (often on Christian matters) at my personal blog.

I’m continuing to think about Anthropocene theology, but that’s going to be a very long-term project.

The next book I write, by the way, will be called Auden and Science.

And then I have various responsibilities concerning the books I’ve already written. How to Think: A Guide for the Perplexed — that’s the subtitle for the U.K. edition, which I like better than the one for the American edition — will be out this fall, and I’ll be involved in the publicity for that. Also, I’ll be blogging about some of the ideas of the book on that site.

The Year of Our Lord 1943: Christian Intellectuals and Total War will be out next year from Oxford UP, and I have final revisions to do for that, plus all the questionnaire-answering and form-filling that accompany publication.

AND: I’ll need to start prepping for my fall classes! By the time I return I’ll have been fifteen months out of the classroom, and that’s a long time. I am itching to reconnect with students, believe it or not.

But the research leave was great. I got a lot of work done and have been refreshed by the opportunity to read and think and reflect. This past year also marked the end of a decade of illness in my family, which means that for the past few months I have had the strange and wonderful experience of not having to spend a lot of time caring for a sick loved one. (That’s taken me a while to get used to, oddly enough. I still wake up in the morning casting my mind around for the first thing I need to do for ... oh wait, everyone’s fine!) So I’ll be headed back into teaching with a lot of energy. Lord, may it please you that the next few years are like this one — for me personally, I mean. I’d prefer not to have any more years like this one in American politics.

Ciao for now!

so many questions

I have many questions — real, deep, sincere questions — about this.

  • Does Katherine Dettwyler really believe that a person deserves torture and death for stealing a poster?
  • Or does she, rather, believe that a person deserves torture and death for being a clueless privileged culturally-imperialist white male?
  • Or does she, perhaps, believe that a person deserves not torture and death but maybe arrest for being a clueless privileged culturally-imperialist white male, and just wrote carelessly?
  • Is she right that a significant number of her white male students "think nothing of raping drunk girls at frat parties and snorting cocaine, cheating on exams, and threatening professors with physical violence"?
  • How do any or all of these beliefs affect her ability to do her job as a teacher?
  • How many college teachers share these beliefs?
  • Is this a situation in which no "beliefs" as such are involved, but Dettwyler's Facebook post was rather an unfocused spur-of-the-moment venting arising from frustration with a lousy job or lousy working conditions?
  • If your answer to the previous question is "Yes" or "Probably yes," how do you account for the fact that Dettwyler seems to have made very similar comments on blogs?
  • How might a tendency to go off on unfocused spur-of-the-moment venting arising from frustration with a lousy job or lousy working conditions affect a person's ability to do her or his job as a teacher?
  • Did the University of Delaware ask Dettwyler for an explanation of her post and/or comments?
  • Did the university ask her to apologize for them?
  • Suppose that she did apologize — would that be sufficient for her to keep her job?
  • What would the University of Delaware have done if a tenured faculty member had made precisely the same comments?
  • As those outside the academy go apoplectic over these matters, those inside the academy shrug. Is shrugging enough?
  • What does it mean that so many people these days wish death on strangers whom they dislike or disagree with?
  • Should we feel better when we're told that people don't really mean it when they, for instance, respond to a tweet expressing a view about English grammar by wishing an entire generation of Americans dead?
  • Like, if you don't really in your heart of hearts want those people you disagree with to die in a fire or be raped and tortured, then we don't have a problem? Is that the argument?
  • Presumably all of the above, and worse, has been said to Katherine Dettwyler since her Facebook post went public — does that help?
  • Does vigilante vengeance have limits?
  • Even if it's just verbal vengeance?
  • Is forgiveness a social good?

Monday, June 26, 2017

historical knowledge and world citizenship

Few writers have meant as much to me, as consistently, over many years as Loren Eiseley — I say a bit about my teenage discovery of him in this essay. I am now writing a piece on the new Library of America edition of his essays for Education and Culture, and, man, is it going to be hard for me to keep it below book-length. I keep coming across little gems of provocation and insight. In lieu of buttonholing my family and making them listen to me read passages aloud — though I’m not saying I’ll never do that — I may post some choice quotations here from time to time.

Here’s a wonderful passage from the early pages of The Firmament of Time, Eiseley’s lapidary and meditative history of geology, or rather of what the rise of modern geology did to the human experience of time. I like it because it illuminates certain blind spots of today’s academics — and people more generally — and because it reminds us just how essential the study of history is.

Like other members of the human race, scientists are capable of prejudice. They have occasionally persecuted other scientists, and they have not always been able to see that an old theory, given a hairsbreadth twist, might open an entirely new vista to the human reason. I say this not to defame the profession of learning but to urge the extension of education in scientific history. The study leads both to a better understanding of the process of discovery and to that kind of humbling and contrite wisdom which comes from a long knowledge of human folly in a field supposedly devoid of it. The man who learns how difficult it is to step outside the intellectual climate of his or any age has taken the first step on the road to emancipation, to world citizenship of a high order.

He has learned something of the forces which play upon the supposedly dispassionate mind of the scientist; he has learned how difficult it is to see differently from other men, even when that difference may be incalculably important. It is a study which should bring into the laboratory and the classroom not only greater tolerance for the ideas of others but a clearer realization that even the scientific atmosphere evolves and changes with the society of which it is a part. When the student has become consciously aware of this, he is in a better position to see farther and more dispassionately in the guidance of his own research. A not unimportant by-product of such an awareness may be an extension of his own horizon as a human being.

Topsy-turvy, Tono-Bungay

In his blog-through of the works of H. G. Wells, Adam Roberts has reached Tono-Bungay, and there’s much food for thought in the post. Real food, not patent medicine like Tono-Bungay itself. Much of the novel, in Adam’s account, considers just that relationship: between the real and the unreal, the health-giving and the destructive, the truly valuable and mere waste — all the themes that Robertson Davies explores in The Rebel Angels and that are also, therefore, the chief concern of my recent essay on Davies, “Filth Therapy”.

Here I might quote Adam quoting some people who quote some other person:

Patrick Brantlinger and Richard Higgins quote William Cohen’s Introducing Filth: Dirt, Disgust, and Modern Life to the effect that ‘polluting or filthy objects’ can ‘become conceivably productive, the discarded sources in which riches may lie’, adding that ‘“Riches” have often been construed as “waste”’ and noting that ‘the reversibility of the poles — wealth and waste, waste and wealth — became especially apparent with the advent of a so-called consumer society during the latter half of the nineteenth century’ [‘Waste and Value: Thorstein Veblen and H. G. Wells’, Criticism, 48:4 (2006), 453].

This prompts me to want to write a sequel to “Filth Therapy,” though I clearly need to read Introducing Filth first.

It occurs to me that these are matters of longstanding interest to Adam, whose early novel Swiftly I have described as “excresacramental” — it was the first novel by Adam that I read, and given how completely disgusting it is, I’m rather surprised that I kept reading him. But he’s that good, even when he’s dirty-minded, as it were.

These themes make their way into fiction, I think, because of an ongoing suspicion, endemic now in Western culture if not elsewhere, that we have it all wrong, that we have valued what we should not have valued and vice versa, that we have built our house only having first rejected the stone that should be the chief cornerstone. As the old General Confession has it, “We have left undone those thinges whiche we ought to have done, and we have done those thinges which we ought not to have done, and there is no health in us.” This suspicion, which is often muted but never quite goes away, is perhaps the most lasting inheritance of Christianity in a post-Christian world: the feeling that we have not just missed the mark but are utterly topsy-turvy.

Christianity is always therefore suggesting to us the possibility of a “revaluation of all values,” a phrase that Nietzsche in The Antichrist used against Christianity:

I call Christianity the one great curse, the one great intrinsic depravity, the one great instinct for revenge for which no expedient is sufficiently poisonous, secret, subterranean, petty — I call it the one immortal blemish of mankind… And one calculates time from the dies nefastus on which this fatality arose — from the first day of Christianity! Why not rather from its last? From today? Revaluation of all values!

But Nietzsche issues this call because he thinks that Christianity itself has not set us right-side-up, but rather turned us upside-down. It was Christianity that first revalued all values, saying that the first shall be last and the last first, and he who seeks his life will lose it while he who loses his life shall find it, and blessed are the meek, and blessed are the poor in spirit…. Nietzsche’s call is therefore a call for restoration of the values that Christianity flipped: rule by the strong, contempt for the weak. It is, when considered in the long historical term, a profoundly conservative call.

Whether or not Nietzsche’s demand for a new paganism is right, surely it is scarcely necessary: for rule by the strong and contempt for the weak is the Way of the World, always has been and always will be; Christianity even at its most powerful can scarcely distract us from that path, much less set us marching in the opposite direction. Because that Way is so intrinsic to our neural and moral orientation, because we run so smoothly along its well-paved road, it is always useful to us to read books that don’t suggest merely minor adjustments in our customs but rather point to the possibility of something radically other. Such books are at the very least a kind of tonic, and a far better one than the nerve-wracking stimulation of Tono-Bungay.

Saturday, June 24, 2017

the big impediment to going iOS-only

At Macdrifter, Gabe Weatherhead makes a vital point:

But Apple has a blind spot that I think might mean the iPad never has parity with the Mac. The App Store just doesn’t encourage big powerful app development.

The price point on the iOS App Store is too low for many indie developers to succeed. I look at the most powerful apps I use and they come from a handful of companies dedicated to craft but supported by Mac revenue. OmniGraffle, OmniFocus, and OmniOutliner are great. But there are few competitors in this space that can reach the same level of quality and still make a profit. I suspect that in an iOS-only world these apps would end.

Transmit and Coda by Panic are top-tier software, but even they seem to be struggling to justify their existence.

This doesn’t mean that great apps don’t still get released on iOS. They just don’t keep getting supported. My favorite text editors, the best personal databases, the variety of bookmark managers. They might keep rolling out, but the majority aren’t actively developed.

This is exactly right. Even the pro-quality apps that remain in development tend to be updated inconsistently and (in comparison to Mac apps) rarely. Every time I think about going iOS-only, I realize that too many of the apps I rely on are apps … I can’t rely on.

Wednesday, June 21, 2017

Darwin's mail

I’ve just read Janet Browne’s two-volume biography of Charles Darwin, and it’s a magnificent achievement — one of the finest biographies I’ve ever read. I especially admire Browne’s judgment in knowing when to stick with the events of the life and when to pull back her camera to reveal the larger social contexts in which Darwin worked.

One of the interesting subthemes of the book concerns the way Darwin gravitated towards technologies that would allow him to pursue whatever aroused his curiosity — and whenever his curiosity was aroused he tended to become obsessive until he satisfied it. For instance, though he hated having his photograph taken, he made extensive use of photographs in writing his peculiar book on The Expression of the Emotions in Man and Animals.

Also: if you think of the various scientific institutions and journals of the Victorian era as a kind of communications network, he and his chief supporters (Thomas Henry Huxley above all) skillfully exploited that network to spread the news of natural selection far more quickly than it could have been expected to spread otherwise.

But as someone with a long-standing interest in the postal service, I find these passages from the early pages of the second volume especially provocative:

Systematically, he turned his house into the hub of an ever-expanding web of scientific correspondence. Tucked away in his study, day after day, month after month, Darwin wrote letters to a remarkable number and variety of individuals. He relied on these letters for every aspect of his evolutionary endeavour, using them not only to pursue his investigations across the globe but also to give his arguments the international spread and universal application that he and his colleagues regarded as essential footings for any new scientific concept. They were his primary research tool. Furthermore, after the Origin of Species was published, he deliberately used his correspondence to propel his ideas into the public domain—the primary means by which he ensured his book was being read and reviewed. His study inside Down House became an intellectual factory, a centre of administration and calculation, in which he churned out requests for information and processed answers, kept himself at the leading edge of contemporary science, and ultimately orchestrated a transformation in Victorian thought.

Maybe it wasn't the telegraph that was the Victorian internet but rather the penny post — even if it was slower.

And Darwin was utterly unashamed to use letters to get other people (friends, family, and often strangers) to do research for him:

He also hunted down anyone who could help him on specific issues, from civil servants, army officers, diplomats, fur-trappers, horse-breeders, society ladies, Welsh hill-farmers, zookeepers, pigeon-fanciers, gardeners, asylum owners, and kennel hands, through to his own elderly aunts or energetic nieces and nephews. Many of his letters went to residents of far-flung regions — India, Jamaica, New Zealand, Canada, Australia, China, Borneo, the Hawaiian Islands — reflecting the increasing European domination of the globe and rapidly improving channels of communication.

It’s a good thing that Darwin was a wealthy man: “In 1851 he spent £20 on ‘stationery, stamps & newspapers’ (nearly £1,000 in modern terms) ... By 1877 Darwin’s expenditure on postage and stationery had doubled to £53 14s. 7d, a sum roughly equal to his butler’s annual salary.”

And here’s Browne’s incisive summary of this method:

If there was any single factor that characterised the heart of Darwin’s scientific undertaking it was this systematic use of correspondence. Darwin made the most of his position as a gentleman and scientific author to obtain what he needed. He was a skilful strategist. The flow of information that he initiated was almost always one-way. Like countless other well-established figures of the period, Darwin regarded his correspondence primarily as a supply system, designed to answer his own wants. “If it would not cause you too much trouble,” he would write. “Pray add to your kindness,” “I feel that you will think you have fallen on a most troublesome petitioner,” “I trust to your kindness to excuse my troubling you.” ...

Alone at his desk, captain of his ship, safely anchored in his country estate on the edge of a tiny village in Kent, he was in turn manager, chief executive, broker, and strategist for a world-wide enterprise. Once, in a passing compulsion, he attached a mirror to the inside of his study window, angled so that he could catch the first glimpse of the postman turning up the drive. It stayed there for the rest of his life.

Tuesday, June 20, 2017

Nature

Bruno Latour shares with Timothy Morton a determination to overthrow the concept of “nature” because he believes that that concept “makes it possible to recapitulate the hierarchy of beings in a single ordered series.” Therefore any genuine (non-anthropocentric) “political ecology” — of the sort I briefly described in a previous post — requires “the destruction of the idea of nature” (Politics of Nature, p. 25).

As for Morton, he thinks the idea of nature does one basic thing: since nature is, fundamentally and always, “a surrounding medium that sustains our being,” it follows that “Putting something called Nature on a pedestal and admiring it from afar does for the environment what patriarchy does for the figure of Woman. It is a paradoxical act of sadistic admiration” (Ecology without Nature, pp. 4, 5).

We might notice that these two reasons for rejecting the concept of Nature are incompatible. The problem for Latour is that Nature places humans within “a single ordered series,” but at the top of it; for Morton, human beings are outside the order of Nature, which “surrounds” us. I think Morton is closer to being correct than Latour is: the way that Latour describes Nature is actually more appropriate to the concept that preceded it within the discourses of Western thought, Creation. It is from the Jewish and Christian account of Creation — over which human beings have been given “dominion” — that the “single ordered series,” at the top of which humans stand, emerges. But Morton’s critique applies better to the term that has recently succeeded Nature within our talk about such matters, “the environment.”

It’s therefore tempting to say that Nature is the term that bridges the historical gap between Creation and “the environment.” But that would oversimplify the story. And the deficiency that Latour and Morton share is an ahistorical oversimplification of what Raymond Williams calls “perhaps the most complex word in the language” — and even Williams’s account seems condensed in comparison with the one that C. S. Lewis gives in his long essay in Studies in Words. But Williams gets at the really key point here when he issues this caution: “since nature is a word which carries, over a very long period, many of the major variations of human thought — often, in any particular use, only implicitly yet with powerful effect on the character of the argument — it is necessary to be especially aware of its difficulty.” This is just what Morton and Latour fail to do.

And it would do no good to claim to be interested in only one of the many meanings of the word “nature,” because, as Williams points out, they tend to bleed into one another. Nature in the sense of “that which surrounds humans and with which we interact” is always in complex, confusing relation with Nature as the whole show, all that there is, as when Pope writes: “All are but parts of one stupendous whole / Whose body Nature is and God the soul.” Here human beings are clearly part of that “body” (thus Nature is here still a synonym for Creation). But of course we can lose our awareness of our place in that “one stupendous whole,” through pride or simply through participation in technological modernity, and can then come to see our lives as somehow “unnatural”; which can then lead us in turn to seek ways to become “one with Nature.” At the very least we seek what Morton and Donna Haraway both refer to as kinship, and the words “kin” and “kind” are Anglo-Saxon equivalents of Nature: the old name for Mother Nature is Dame Kind.

But kinship is not identity, and this is precisely why “Nature,” “nature,” “natural,” “unnatural,” all drift in and out of focus and shift their meanings like holograms. To understand this you need only read King Lear, where all these notions are ceaselessly deployed and redeployed — where Lear wonders what his nature is while cursing his unnatural daughters, and Edmund reckons with the consequences of being Gloucester’s natural (i.e. bastard) son. We do not know what we are kin to or how close the kinship is; we don’t know what we are like, which means we do not understand our true nature. And so the narrow and historically insensitive definitions offered by Latour and Morton simply won’t do; they are manifestly inadequate to the situation on the ground.

But this we know: kinship is not identity. We have very good reasons to doubt that our creaturely cousins, or siblings — St. Francis’s Brother Sun and Sister Moon, among others — ask these questions. Which means that from them we can learn much but perhaps not everything we want and need to know. As Auden wrote,

But trees are trees, an elm or oak
Already both outside and in,
And cannot, therefore, counsel folk
Who have their unity to win.

Monday, June 19, 2017

Q&A

Q: Why are you reading all this stuff, anyway? It seems pretty obvious that you don’t have much sympathy for it.

A: That’s a good question. I could give several answers. For one thing, I’m not as lacking in sympathy as you think. I have respect for any good-faith attempts to reckon with the immensely vexed question of what it means to be human, and the corollary questions about how we are most healthily related to the nonhuman, and I think Morton and Haraway are really trying to figure these things out. There’s a moral urgency to their writing that I admire.

Q: Is that so? Sure doesn’t sound like it.

A: Well, yeah, I guess that last post was kind of negative. As I was reading Morton I realized that some pretty important intellectual decisions had been made before he even began his argument, and I wanted to register that protest.

Q: But if you feel that a particular philosophical project has gone astray from the start, why not just move along to thinkers and lines of thought you find more fruitful, more resonant with potential?

A: Remember that this is a work in progress: as I said in that post, I’m currently reading Morton, I’m not done. (And in a sense I’m still reading Haraway, even though I put her book aside months ago.) When you’re blogging your way through a reading project, any one post is sure to give an incomplete picture of your response and likely to give a misleading one.

Q: Fair enough, I guess, but there does seem to be a pattern to your writing about a lot of recent work. You read it, think about it, and then declare that there are resources in the history of Christian thought that address these questions — whatever the questions are — better than the stuff you’ve been reading does. So why not just read and think about those Christian figures who always seem to do it better?

A: Because those non-Christian (or non-religious, or anti-Christian, or anti-religion) thinkers often raise important questions that Christians tend to neglect, and I have to see whether there are in fact adequate resources within Christianity to address the questions raised by others. So far I have found that my tradition is indeed up for those challenges, but its resources are augmented and strengthened by having to address what it never would have asked on its own. I truly believe that Christianity will emerge stronger from a genuinely dialogical encounter with rival traditions, in part because it will (as it has so often in the past) adopt and adapt what is best in those traditions for its own purposes. It doesn’t always work out that way; Hank Hill was right when he said to the praise band leader “You’re not making Christianity better, you’re just making rock-and-roll worse!” But most of the time the genuinely dialogical encounter more than pays for itself.

Q: Maybe. But you often seem out of your depth with the kind of thing — the kind of stuff you’re reading these days — and often in the mood to kick over the traces. Wouldn’t you be better off sticking with the stuff you actually have a professional level of knowledge of? Auden? Other twentieth-century religiously-inclined literary figures?

A: Honestly, you may be right. I often wonder about that very point. And that’s one of the reasons — that’s the main reason, I guess — why I talk about writing books on the technological history of modernity and the Anthropocene condition but end up writing them about the stuff I have spent most of my career teaching.

Q: So why are you even here, man? Why not drop this blog and get back to work in your own field?

A: Because this is a place where I can exercise my habitual curiosity about things I don’t know much about. Because this is a kind of Pensieve for me, a way to clear away thoughts that otherwise would clog up my brain. Because every once in a while something of value coalesces out of all this randomness. I have very few readers and still fewer commenters, so I’m not getting the thrill of regular feedback, but hitting the “Publish” button offers an acceptable simulacrum of accomplishment. Those are probably not very good reasons, but they’re the reasons I have.

But I’m not gonna lie: spending so much time reading stuff with which at a deep level I’m at odds is wearing, it really is. Especially since I know that the people I’m reading — and working so hard to read fairly — are highly unlikely to treat serious Christian thinkers with comparable respect. With any respect. They don’t know that Christian theology that’s deeply and resourcefully engaged with the modern world exists, and if they knew they wouldn’t care. What I’m doing when I read thinkers like Morton and Haraway is an engagement on my part, but it’s not a conversation. That’s just what it’s like if you want to bring Christian thought to bear on modern academic discourse. You only do it if you believe you’re called to do it.

Saturday, June 17, 2017

in responsibilities begin dreams

Lately I've been reading the philosopher Timothy Morton, who has a lot to say about living in the Anthropocene, and I see that he has a forthcoming book called Humankind: Solidarity with Non-Human People. On his website the book is described thus:

What is it that makes humans human? As science and technology challenge the boundaries between life and non-life, between organic and inorganic, this ancient question is more timely than ever. Acclaimed Object-Oriented philosopher Timothy Morton invites us to consider this philosophical issue as eminently political. It is in our relationship with non-humans that we decided the fate of our humanity. Becoming human, claims Morton, actually means creating a network of kindness and solidarity with non-human beings, in the name of a broader understanding of reality that both includes and overcomes the notion of species. Negotiating the politics of humanity is the first and crucial step to reclaim the upper scales of ecological coexistence, not to let Monsanto and cryogenically suspended billionaires to define them and own them.

The book isn't out yet, but I find this description worrying. The idea that "becoming human ... actually means creating a network of kindness and solidarity with non-human beings" sounds wonderful, in the most abstractly theoretical terms, but I doubt we can solve our and the world's problems simply by "negotiating the politics of humanity" — at least if Morton means, as I suspect he does based on what I have read so far, redefining the sphere of the political to include the whole range of nonhuman creatures, including the vast and ontologically complex phenomena he calls hyperobjects. Because we don't have a great track record of treating one another well, do we? I'm all for "kindness and solidarity with non-human beings," but first things first, you know? A good many people out there can't even manage kindness and solidarity with parents whose children were murdered in school shootings.

I'm reminded here of a comment made by Maciej Ceglowski in a recent talk. Responding to the claims of Silicon Valley futurists that we're just a few decades away from ending the reign of Death and achieving immortality for at least some, Ceglowski said, "I’m not convinced that a civilization that is struggling to cure male-pattern baldness is ready to take on the Grim Reaper." Similarly, I'd encourage those who plan to achieve kinship with all living things to call me back once they can have rational and peaceable conversations with people who live on their block.

I have the same concern with Morton's project as I do with Donna Haraway's theme of "making kin," which I wrote about here. I suspect that much of the appeal of seeking communion with pigeons, plutonium, and black holes (to use examples taken from Haraway and Morton) is that pigeons, plutonium, and black holes don't talk, tweet, or vote. If projects like Haraway's and Morton's don't reckon seriously with this problem, then they are likely to be in equal parts frivolous and evasive.

Such projects raise, for me, a further question, which is whether the language of kinship and solidarity is the right language to accomplish what its users want. Because you can achieve a feeling of kinship or solidarity without taking on any particular responsibility for the well-being of another creature. Here the old Christian language of "stewardship" seems to me to have greater force, and a force that is especially applicable to the Anthropocene moment: We do not own this world, but it has been entrusted to our care, and only if we seriously strive to live up to the terms of that trust will we have a chance of achieving true kinship and solidarity with all that we care for. It seems to me that Yeats had it backwards: it is not the case that "in dreams begin responsibilities," but in responsibilities begin dreams.



After posting this I realized that I'm not done. Morton, Haraway, Graham Harman, and others working along similar lines are keen to bridge the gaps between humans and nonhumans — or, perhaps it would be better to say, deny the validity of the gaps that human beings perceive to exist between themselves and the rest of the world. They thus conclude that we require a new philosophical orientation to the nonhuman world, though one that employs quite familiar concepts (kinship, solidarity, intention, purpose, desire) — those concepts are just deployed in relation to beings/objects which formerly were thought to be outside the scope of such terms. We're not used to thinking that hammers have desires and black holes have consciousness.

This strategy of employing familiar language in unfamiliar contexts gives the appearance of being radical but may not be quite that. It strikes me as being largely a reversal of Skinnerian behaviorism: the behaviorists said that human beings are nothing special because they're just like animals and plants, responding to stimuli in law-governed ways; now the object-oriented ontologists say that human beings are nothing special because animals and plants (and hammers and black holes) all possess the traits of consciousness and desire that we have traditionally believed to be distinctive to us. The goal of the philosophical redescription seems to be the same: to dethrone humanity, to get us to stop thinking of ourselves as sitting at the pinnacle of the Great Chain of Being.

And underlying this goal is the assumption (often stated explicitly by all these figures, I think) that our belief in our unique and superior status among the rest of the beings/objects in the world has led us to abuse those beings/objects for our own enrichment or amusement.

I think this whole project is unlikely to bear the fruit it wants to bear, and I have several reasons for thinking so, which I will just gesture at here and develop in later posts.

(1) I doubt the power of philosophical redescription. Changes in our practices will lead to changes in description, not the other way around. The failure to recognize the direction that the causal arrow points is the signal failure of people who, being symbol manipulators by profession, think that the manipulation of symbols is the key to All Good Things. (I have written about this often, for instance, here.)

(2) I don't think we have taken the role of Apex Species on the Great Chain of Being too seriously; I think we have failed to take it seriously enough.

(3) I believe that all of these difficulties can best be addressed by living into certain ancient ways of thinking — which, in our neophilic age, is a hard sell, I know.

People will say, "Go back to Christianity? We tried that and it got us into this situation." To which the obvious rejoinder is the Chestertonian one that Christianity hasn't been tried and found wanting, it has been found difficult and left untried. But perhaps more to the point: everything has failed. Every day I hear lefties say that capitalism has been tried and didn't work, and righties say that socialism has been tried and didn't work — to which each side retorts that its preferred system hasn't really been tried, hasn't been implemented properly and thoroughly.

And all of these people are correct. Every imaginable system has been put into play with partial success at best, and the problems result from incomplete or half-hearted implementation of that system and from flaws inherent to it — which flaws are precisely what make people half-hearted or incomplete in their implementation of it. Everything has been tried and found wanting, and found difficult and left untried. This is the human condition. Attempts to remedy social and personal ills always run aground on both the sheer complexity of our experience and our mixed and conflicting desires (mixed and conflicted both within ourselves and in relation to one another).

New vocabularies, or even the deployment of old vocabularies in supposedly radical new ways, won't fix that. Which is not to say that improvements in conditions are impossible.

Much more on all this later.

Friday, June 16, 2017

digital culture through file types

This is a fabulous idea by Mark Sample: studying digital culture through file types. He mentions MP3, GIF, HTML, and JSON, but of course there are many others worthy of attention. Let me mention just two:

XML: XML is remarkably pervasive, providing the underlying document structure for things ranging from RSS and Atom feeds to office productivity software like Microsoft Office and iWork — but secretly so. That is, you could make daily and expert use of a hundred different applications without ever knowing that XML is at work under the hood.
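To make that "under the hood" point concrete, here is a minimal sketch (the feed contents and URL are invented for illustration) showing that an RSS feed is nothing more than XML with an agreed-upon vocabulary, which a generic XML parser can walk without knowing anything about feeds:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up RSS 2.0 feed: RSS is simply XML using a particular vocabulary.
feed = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Text Patterns</title>
    <item>
      <title>digital culture through file types</title>
      <link>http://example.com/file-types</link>
    </item>
  </channel>
</rss>"""

# Encode to bytes so the parser accepts the XML declaration, then walk the tree
# with no feed-specific knowledge at all.
root = ET.fromstring(feed.encode("utf-8"))
for item in root.iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

Run as-is, this prints the item's title and link; the same few lines, pointed at the document.xml inside an unzipped .docx file, would work just as well, which is the sense in which XML quietly structures documents that most users never think of as XML.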

Text: There's a great story to be told about how plain text files went from being the most basic and boring of all file types to a kind of lifestyle choice — a lifestyle choice I myself have made.

If you have other suggestions, please share them here or with Mark.

men ignoring (as well as interrupting) women

The New York Times is wrong about a great many things these days, but it's certainly right about this: men really do interrupt women All. The. Time. (And the NYT has covered this story before.) I have seen the phenomenon myself in many faculty meetings over the years, and it's especially painful when a woman sits in silence through 45 minutes of a meeting, finally decides to say something — and is instantly cut off.

I have often talked too much in meetings, but I don't think I do this — women who have worked with me, please let me know if I'm wrong. Please. (Could you do it in an email instead of in the comments, below, though? That would be a kindness.) But interrupting is just one of many ways confident and articulate men — or confident men who just think they're articulate — can sideline their female colleagues.

Once, some years ago now, a younger colleague asked me to join her for lunch. She wanted to talk to me about something: the fact that I had not expressed interest in or support of her scholarship, even though it overlapped with my own in some areas. My first thought was that I really did admire her work and thought; but that was immediately followed by the realization that I had never told her so. I had completely failed to offer the support and encouragement that would have meant a lot to her as someone making her way in our department and our institution. So I apologized, and asked if she would forgive me, which of course she did.

In the aftermath of that lunch meeting, I thought a lot about why I had so manifestly failed my colleague, and I've continued to think about it since. I don't fully understand the complexities of the situation, and I may be looking for self-exculpation here, but I do think I've identified one element of the problem, and it involves sexually-segregated socializing.

A number of my younger male colleagues had expressed gratitude for my support of them, and when I thought about how I had expressed that support — the advice I had given, the responses to their work — I realized that that had rarely happened on campus, in our offices or hallways, but rather in coffee shops and pubs. When we met on free mornings for coffee to chat as we got through some grading or diminished the size of our inboxes, or met in the evenings after work for a pint or two — that's when I got the chance to say some supportive things.

But while we often asked our female colleagues to join us for such outings, they rarely did. I am honestly not sure whether they just weren't interested, or had conflicting obligations, or simply hadn't heard enough from us to be sure that their presence was really wanted and that we had no desire to create a Boys' Club. But I do know that I should have been aware of these dynamics and found other ways to let the women in my circles know that I valued their work. Once that single colleague had the boldness to call my attention to my shortcomings in this area, I made an effort to compensate — though I don't know that I ever did enough.

I especially want to ask my fellow academics: What do you think about the account I've given? Does it sound plausible? What am I missing, either about myself or about the general social dynamics?

frequency of citation does not equal quality of research

Google Scholar has just added a set of what it calls Classic papers: "Classic papers are highly-cited papers in their area of research that have stood the test of time. For each area, we list the ten most-cited articles that were published ten years earlier." The problem here is the equating of frequent citation with "standing the test of time." As it happens, many scholarly papers retracted by the journals where they were published continue to be widely cited anyway. Frequency of citation is not a good proxy for "classic" status.

Thursday, June 15, 2017

the oven bird imagines the future

This provocative post by Alec Ryrie asks an important question: Why is our culture’s dystopian imagination so absolute? Ryrie draws on a recent history thesis by Olive Hornby that describes outbreaks of plague in early-modern England during which between a third and half of the people in some communities died. Not all-but-a-handful, not 99%, but a little less than half, maybe. Enough to inflict profound damage on the emotional, spiritual, and economic life of a place — but not enough to destroy it altogether. Ryrie:

Most disasters are not absolute. They are real, devastating, and consequential, but they do not wipe the slate clean. Human beings are resilient and are also creatures of habit. You can panic, but you can’t keep panicking, and once you’ve finished, you tend to carry on, because what else is there? The real catastrophes of the West in the past century (world wars, the Spanish flu) have been of this kind: even as the principal imagined one (nuclear war) is of the absolute variety.

We need to learn to be better at imagining serious but non-terminal disasters, the kind which are actually going to hit us. (For a recent cinematic example, the excellent and chilling Contagion.) That way, when we confront such things, we will be less tempted simply to say ‘Game over!’ and to attempt to reboot reality, and will instead try to work out how to deal with real, permanent but not unlimited damage.

In such a case you can’t say “Game over” — he’s quoting Aliens there — because the game isn’t over. The game goes on, in however damaged a form, leaving us all forced to confront the truth taught us by the oven bird:

There is a singer everyone has heard,
Loud, a mid-summer and a mid-wood bird,
Who makes the solid tree trunks sound again.
He says that leaves are old and that for flowers
Mid-summer is to spring as one to ten.
He says the early petal-fall is past
When pear and cherry bloom went down in showers
On sunny days a moment overcast;
And comes that other fall we name the fall.
He says the highway dust is over all.
The bird would cease and be as other birds
But that he knows in singing not to sing.
The question that he frames in all but words
Is what to make of a diminished thing.

Wednesday, June 14, 2017

literary fiction and climate change, revisited

Here we have Siddhartha Deb making precisely the same inexplicable error that Amitav Ghosh, whom he quotes, made last year — a mistake on which I commented at the time. The thought sequence goes like this:

1) Declare yourself interested only in “literary” fiction;

2) Define literary fiction as a genre concerned only with the quotidian reality of today;

3) Complain that literary fiction is deficient in imaginative speculation about the realities and possibilities of climate change.

But if you have already conflated “literary fiction” and “fiction” — note how Deb uses the terms interchangeably — and have defined the former as having a “need to keep the fluky and the exceptional out of its bounds, conceding the terrain of improbability — cyclones, tornadoes, tsunamis, and earthquakes — to genre fiction,” then you have ensured the infallibility of your thesis. Because any story that engages with “the fluky and the exceptional” (or, riskily, the future) ipso facto becomes “genre fiction” and therefore outside the bounds of your inquiry.

This self-blinkering leads Deb into some very strange statements:

In the United States too, even well meaning liberal fiction, often falling under the rubric of cli-fi, reveals itself as incapable in grappling with [our steadfast rapaciousness]. This is perhaps because to think of modern life as a failure, and to question the idea of progress, requires an extremism of vision or a terrifying kind of independence. An indie bestseller like Emily St. John Mandel’s Station Eleven, set in an eco-apocalypse, features rhapsodies on the internet and electricity. Marcel Theroux in Far North includes a paean to modern flight as one of the finest inventions of “our race,” even though the effect of air travel on carbon emissions is quite horrific.

Let me just pause to note that Deb has a rather expansive notion of “the United States,” given that Emily St. John Mandel is Canadian and Marcel Theroux was born in Uganda and educated wholly in England. Setting that aside, Deb’s description of Mandel’s book is farcically inaccurate. It is true that there are characters in the book, some among the handful of people who have survived a plague that killed 99.9% of humanity, who miss the internet and electricity. Does Deb think that in such a world nobody would miss those technologies? Or is it his view that a truly virtuous writer should make a point of suppressing such heretical notions?

Either position is silly. Of course people in such a world would miss technological modernity, for good reasons and bad. At one point we get “an incomplete list” of what’s gone:

No more diving into pools of chlorinated water lit green from below. No more ball games played out under floodlights. No more porch lights with moths fluttering on summer nights. No more trains running under the surface of cities on the dazzling power of the electric third rail. No more cities. No more films, except rarely, except with a generator drowning out half the dialogue, and only then for the first little while until the fuel for the generators ran out, because automobile gas goes stale after two or three years. Aviation gas lasts longer, but it was difficult to come by.

No more screens shining in the half-light as people raise their phones above the crowd to take pictures of concert stages. No more concert stages lit by candy-colored halogens, no more electronica, punk, electric guitars.

No more pharmaceuticals. No more certainty of surviving a scratch on one's hand, a cut on a finger while chopping vegetables for dinner, a dog bite....

No more countries, all borders unmanned.

No more fire departments, no more police. No more road maintenance or garbage pickup. No more spacecraft rising up from Cape Canaveral, from the Baikonur Cosmodrome, from Vandenberg, Plesetsk, Tanegashima, burning paths through the atmosphere into space.

No more Internet. No more social media, no more scrolling through litanies of dreams and nervous hopes and photographs of lunches, cries for help and expressions of contentment and relationship-status updates with heart icons whole or broken, plans to meet up later, pleas, complaints, desires, pictures of babies dressed as bears or peppers for Halloween. No more reading and commenting on the lives of others, and in so doing, feeling slightly less alone in the room. No more avatars.

Again: Does Deb think people in a devastated world wouldn't think this way? Or does he think it wrong to give voice to such memories and reflections?

Does he think that such a list offers nothing but regret?

The central figures of Station Eleven are the members of a group called the Traveling Symphony. They play classical music and perform plays.

All three caravans of the Traveling Symphony are labeled as such, THE TRAVELING SYMPHONY lettered in white on both sides, but the lead caravan carries an additional line of text: Because survival is insufficient.

When I first read Station Eleven I had mixed feelings about it, but in the two years since I have thought often about the Traveling Symphony and what it achieved, what it reminded people of, what it made possible. The book offers, especially through the Symphony, a moving and at times profound meditation on the complex relationships that obtain among technology, art, and human flourishing. I’d strongly recommend that Siddhartha Deb read it.

And he should read some Kim Stanley Robinson while he’s at it.

play as work


Peter Suderman writes about playing the video game Mass Effect: Andromeda,

The game boasts an intricate conversation system, and a substantial portion of the playtime is spent talking to in-game characters, quizzing them for information (much of which adds color but is ultimately irrelevant), asking them for assignments, relaying details of your progress, and then finding out what they would like you to do next.

At a certain point, it started to feel more than a little familiar. It wasn't just that it was a lot like work. It was that it was a lot like my own work as a journalist: interviewing subjects, attempting to figure out which one of the half-dozen questions they had just answered provided useful information, and then moving on to ask someone else about what I had just been told.

Eventually I quit playing. I already have a job, and though I enjoy it quite a bit, I didn't feel as if I needed another one.

But what about those who aren't employed? It's easy to imagine a game like Andromeda taking the place of work.

You should read the whole article, because it’s a fascinating and deeply reflective account of the costs and benefits of a world in which “about three quarters of the increase in leisure time among men since 2000 has gone to gaming.” What I love about Peter’s narrative is that it is sure to make video-game alarmists less alarmed and video-game enthusiasts less enthusiastic.

I have a thousand ideas and questions about this essay, but I’ll mention just one line of thought here: I find myself wondering how, practically speaking, video games got this way. Did game designers learn through focus groups and beta testing that games with a significant work-like component were more addictive? Or were they simply answering to some need in their own psyches? I’m guessing that the correct answer is: some of both. But in any case, there’s a strong suggestion here that human beings experience a deep need for meaningful work, and will accept meaningfulness in small quantities or in fictional form rather than do without it.

Tuesday, June 13, 2017

Penguin Café

Another music post...

Nearly thirty years ago now I bought a CD on pure impulse, knowing almost nothing about the performers: When in Rome, by the Penguin Café Orchestra. You’ve probably heard some of their songs: “Perpetuum Mobile” — in 15/8 time! — or “Telephone and Rubber Band”, though maybe not my favorite of their songs, “Dirt.” The style is difficult to describe and definitely doesn't work for everyone. Simon Jeffes, the founder of the PCO, who wrote its songs and played whatever instruments needed playing for a given tune, called their work “modern semi-acoustic chamber music,” and, in a different context, “imaginary folklore.” I like that latter description: I imagine a hidden land somewhere populated by people of English, Celtic, Portuguese, and Venezuelan descent, playing away on instruments they found in their grandparents’ attics. As I say, not for everyone, but I loved it from the start.

When Simon Jeffes died of a brain tumor in 1997, at the age of 48, it seemed that the story of the PCO was over. But a one-off reunion concert on the tenth anniversary of his death, featuring his son Arthur, caused a great many people to say that they wanted more. So Arthur Jeffes (an archeologist by training) got some musicians together and founded Penguin Café to play his father’s music and some of his own. The results are getting more interesting — for instance, in Cantorum, an attempt to bring some of the characteristic rhythms and repetitions of electronic music to analog instruments. Check it out:

Monday, June 12, 2017

Nils Frahm



A few years ago the German pianist/composer/producer Nils Frahm fell out of bed and broke his thumb. As he later recalled,

All of a sudden I had so much time, an unexpected holiday. I cancelled most of my schedule and found myself being a little bored. Even though my doctor told me not to touch a piano for a while, I just couldn’t resist. I started playing a silent song with 5 fingers on my right and the remaining 4 on my left hand. I set up one microphone and recorded another tune every other night before falling asleep.

If you click on the link above, you’ll see that you can download for free the resulting recording, called Screws in honor of what held his thumb together as it was healing.

I like Frahm’s electronic music very much, but it’s his solo piano work that really captivates me. He often uses an upright piano that he has modified slightly by adjusting the size and texture of the felts, though his wonderful 2015 record Solo was recorded on a unique 12-foot-tall piano called the Klavins M370. He can play loud and fast, but his best music is slow and contemplative, and has reminded many people of Erik Satie’s Gymnopédies, though when his improvisations get chordal they remind me a bit of Keith Jarrett’s quieter moments.

Maybe the most important predecessor to Frahm, though, is Glenn Gould — not in pianistic technique, but in recording technique. In his recording sessions, Gould famously insisted that the microphones be placed as close to the piano strings as possible, yielding a very intimate sound — one which was intensified, I think, by his spare use of pedals. Try listening to a random piece from Gould’s version of Bach’s Well-Tempered Clavier and then compare it to, say, Sviatoslav Richter’s (equally great) version, and you’ll immediately envision Richter playing on a big stage in a great concert hall. Gould’s music is for the private listener.

And Frahm takes this emphasis on privacy even further. He has fitted one of his pianos with a pickup that sits inside the instrument, so that you can hear the mechanism moving as the hammers lift and drop and as the pedals engage and disengage. You’re reminded that pianos are made largely of wood — Frahm seems to be playing a living creature rather than a thing. I am not certain that in recording Screws he had the mic inside the piano, but it sounds like it to me; and there are ambient noises from the room in which he recorded it too. In an interview a few years back he commented: “There is something very beautiful about a mono recording of a piano. ‘Screws’ which I just recorded was with one microphone, an old condenser, fed through an EMT stereo reverb and that was it. That was the whole process.” Simple, analog, warm, quiet, private. (Similarly, here’s Nils with one of his favorite toys.)

However: the benefits of such simplicity and warmth are not so easily accessed by the listener. Listening to Frahm’s solo music on a bog-standard pair of earbuds will not allow you to discern many of the subtleties that make it beautiful, and will reveal none of them if there’s any noise at all in the room where you’re listening. My hearing is not nearly as good as it once was, thanks to a youth misspent in too much rock-and-roll played at far too high a volume, but I’ve found that to get the most out of Frahm’s music I benefit from the lossless 24-bit versions he offers on his site, played through a DAC headphone amp and a very good set of headphones. So, as so often in our world today: simplicity and warmth are expensive, and increasingly available only to a privileged few.

But in the best way available to you, check out Nils Frahm’s music. It’s truly remarkable.

Tuesday, May 30, 2017

alert: latency in posting


Friends: My beloved and I are about to take a road trip to Southern California, where next week I'll be leading a faculty seminar at Biola University. We'll take our time driving out there and driving back, because I've never seen the desert Southwest and plan to enjoy taking some of it in. Blogging will resume soon after our return.

Monday, May 29, 2017

iOS users and meta-users

The most recent episode of Canvas — the podcast on iOS and "productivity" (a word I hate, but never mind that for now) hosted by Federico Viticci and Fraser Speirs — focused on hopes for the upcoming iOS 11. Merlin Mann joined the podcast as a guest, and the three of them went around and talked about features they'd like to see introduced to iOS.

Some examples: Viticci wants the ability to record, in video and sound, actions performed on the iPad; Speirs imagines having a digital equivalent of a transparent sheet to draw down over the iPad screen, on which he could write with an Apple Pencil, thereby marking up, as it were, things that are happening in an app; and Merlin Mann, who has 450 apps on his iOS devices, wishes for the ability to batch-delete apps, such as ones he hasn't used in two years or more.

Listening to the episode, I thought: These aren't iOS users, not even power users, they're meta-users. Viticci writes and talks about iOS for a living; Speirs teaches students how to use iPads; Mann makes his way in life talking about productivity, especially (though not only) on digital devices. Their iOS wish-lists make them the edgiest of edge-cases, because their uses are all about the uses of others.

As for me, a user neither power nor meta, many of my wishes for iOS involve things that Apple can't do on its own. For instance:

  • I wish Bluetooth worked better, but Bluetooth is a standard Apple doesn't control. No matter how well Apple handles its implementation of the standard, it can't control how well device manufacturers handle theirs. But in any case, given how long Bluetooth has been around, it really, really ought to work better than it does.
  • This site is on Blogger (sigh), and Google has withdrawn their iOS Blogger app and made sure that the Blogger UI doesn't render properly on Safari for iOS — it seems that they're trying to drive iOS users towards Chrome. (Also, there are no good blogging apps for iOS: some are abandonware, some have hideously ugly and non-intuitive UIs, and one, Blogo, demands that you sign up for an account and turn over your data to its owners.)
  • Many, many websites just don't render properly on an iPad, and I expect will never do so. Which makes me wonder what Apple can do on its end (besides enabling Reader View, which is great) to improve poor rendering. E.g.: One of the most lasting problems in iOS involves selecting text, which can be extremely unpredictable: sometimes when you touch the screen nothing selects, while at other times when you're trying to select just one word the whole page gets selected instead. But these problems almost always happen on websites, and are a function, I think, of the poor rendering in Safari for iOS. Is there anything that Apple can do about this, I wonder?

Among the things that Apple can definitely do something about, here are a few wishes from me:

  • When you're connected to a wi-fi network and the signal gets weak or intermittent, and there's another known network with a stronger signal available, your iOS device should switch to that better network automatically. Optimize for best connection.
  • Apple should strongly push developers to implement Split View.
  • Apple should strongly push developers of keyboard-friendly apps to implement keyboard shortcuts — and if they have Mac apps, the same shortcuts on both platforms (the people at Omni are great at this). A rough sketch of what this asks of developers appears just after this list.
  • This is perhaps pie-in-the-sky, but I crave extensive, reliable natural-language image searching in Photos. But I expect we'll get this from Google before we get it from Apple.
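
Since the keyboard-shortcut wish above is directed at developers rather than at Apple, here is a rough sketch of what it asks of them: in UIKit an app declares its keyboard shortcuts by overriding keyCommands on a view controller (or any other responder), which also gets them listed in the hold-down-Cmd overlay on an iPad. The view controller, the actions, and the particular shortcuts below are invented for illustration; this is a sketch of the general pattern, not anyone's actual app.

```swift
import UIKit

// A hypothetical notes screen exposing the same shortcuts a user
// would expect from the Mac version of the app.
class NotesViewController: UIViewController {

    // UIKit walks the responder chain asking for key commands; returning
    // them here makes the shortcuts active whenever this screen is
    // frontmost, and lists them in the hold-down-Cmd overlay on iPad.
    override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: "n",
                         modifierFlags: .command,
                         action: #selector(newNote),
                         discoverabilityTitle: "New Note"),
            UIKeyCommand(input: "f",
                         modifierFlags: .command,
                         action: #selector(findInNotes),
                         discoverabilityTitle: "Find")
        ]
    }

    @objc private func newNote() {
        // create and display a new note
    }

    @objc private func findInNotes() {
        // move focus to the search field
    }
}
```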

Sunday, May 28, 2017

"major collegiate disorders"

A follow-up to yesterday's post...

Of course it's possible to reach too far into the past to get context for current events in the university, but this book certainly offers some interesting food for thought.


I love the fact that there was something called the Conic Section Rebellion.

Anyone who said that nothing like this could happen today would, I think, be correct; but I leave as a potentially illuminating exercise for my readers this question: Why couldn't it happen today?

Saturday, May 27, 2017

getting context, and a grip

Several long quotations coming. Please read them in full.

James Kirchick writes,

Of the 100 or so students who confronted [Nicholas] Christakis that day, a young woman who called him “disgusting” and shouted “who the fuck hired you?” before storming off in tears became the most infamous, thanks to an 81-second YouTube clip that went viral. (The video also — thanks to its promotion by various right-wing websites — brought this student a torrent of anonymous harassment). The videos that Tablet exclusively posted last year, which showed a further 25 minutes of what was ultimately an hours-long confrontation, depicted a procession of students berating Christakis. In one clip, a male student strides up to Christakis and, standing mere inches from his face, orders the professor to “look at me.” Assuming this position of physical intimidation, the student then proceeds to declare that Christakis is incapable of understanding what he and his classmates are feeling because Christakis is white, and, ipso facto, cannot be a victim of racism. In another clip, a female student accuses Christakis of “strip[ping] people of their humanity” and “creat[ing] a space for violence to happen,” a line later mocked in an episode of The Simpsons. In the videos, Howard, the dean who wrote the costume provisions, can be seen lurking along the periphery of the mob.

Of Yale’s graduating class, it was these two students whom the Nakanishi Prize selection committee deemed most deserving of a prize for “enhancing race and/or ethnic relations” on campus. Hectoring bullies quick to throw baseless accusations of racism or worse; cosseted brats unscrupulous in their determination to smear the reputations of good people, these individuals in actuality represent the antithesis of everything this award is intended to honor. Yet, in the citation that was read to all the graduating seniors and their families on Class Day, Yale praised the latter student as “a fierce truthteller.”

Let's look at these episodes at Yale in relation to something that happened at Cornell nearly fifty years ago. Paul A. Rahe was an undergraduate at Cornell then, and tells the story:

At dawn on April 18, 1969 — the Saturday of Parents’ Weekend and the day after the student conduct tribunal issued a reprimand (as minor a penalty as was available) to those who had engaged in the “toy-gun spree” — a group of black students, brandishing crowbars, seized control of the student union (Willard Straight Hall), rudely awakened parents sleeping in the guest rooms upstairs, used the crowbars to force open the doors, and ejected them from the union.

Later that day, they brought at least one rifle with a telescopic sight into the building. On Sunday afternoon, the administration agreed to press neither civil nor criminal charges and not to take any other measures to punish those who had occupied Willard Straight Hall, to provide legal assistance to anyone who faced civil charges arising from the occupation, and to recommend that the faculty vote to nullify the reprimands issued to those who had engaged in the “toy-gun spree.” Upon hearing that this agreement had been reached, 110 black students marched out of Willard Straight Hall in military formation to celebrate their victory, carrying more than seventeen rifles and bands of ammunition.

The next day, when the faculty balked and stopped short of accepting the administration’s recommendation, one AAS leader went on the campus radio and threatened to “deal with” three political science professors and three administrators, whom he singled out by name, “as we will deal with all racists.” Finally, on Wednesday, April 23, the faculty met at a special meeting and capitulated to the demands of the AAS, rescinding the reprimand issued by the student conduct tribunal and calling for a restructuring of the university.

At the very least, the Cornell story should give us some context for thinking about what happened at Yale last year. More generally, we should remember that the ceaseless hyperventilation of social media tends to make us think that American culture today is going through a unique process of dissolution. Rick Perlstein is one of my least favorite historians, but he does well to set us straight on that:

“The country is disintegrating,” a friend of mine wrote on Facebook after the massacre of five policemen by black militant Micah Johnson in Dallas. But during most of the years I write about in Nixonland and its sequel covering 1973 through 1976, The Invisible Bridge, the Dallas shootings might have registered as little more than a ripple. On New Year’s Eve in 1972, a New Orleans television station received this message: “Africa greets you. On Dec. 31, 1972, aprx. 11 pm, the downtown New Orleans Police Department will be attacked. Reason — many, but the death of two innocent brothers will be avenged.” Its author was a twenty-three-year-old Navy veteran named Mark James Essex. (In the 1960s, the media had begun referring to killers using middle names, lest any random “James Ray” or “John Gacy” suffer unfairly from the association.) Essex shot three policemen to death, evading arrest. The story got hardly a line of national attention until the following week, when he began cutting down white people at random and held hundreds of officers at bay from a hotel rooftop. Finally, he was cornered and shot from a Marine helicopter on live TV, which also accidentally wounded nine more policemen. The New York Times only found space for that three days later.

Stories like these were routine in the 1970s. Three weeks later, four men identifying themselves as “servants of Allah” holed up in a Brooklyn sporting goods store with nine hostages. One cop died in two days of blazing gun battles before the hostages made a daring rooftop escape. The same week, Richard Nixon gave his second inaugural address, taking credit for quieting an era of “destructive conflict at home.” As usual, Nixon was lying, but this time not all that much. Incidents of Americans turning terrorist and killing other Americans had indeed ticked down a bit over the previous few years — even counting the rise of the Black Liberation Army, which specialized in ambushing police and killed five of them between 1971 and 1972.

In Nixon’s second term, however, they began ticking upward again. There were the “Zebra” murders from October 1973 through April 1974 in San Francisco, in which a group of Black Muslims killed at least fifteen Caucasians at random and wounded many others; other estimates hold them responsible for as many as seventy deaths. There was also the murder of Oakland’s black school superintendent by a new group called the Symbionese Liberation Army, who proceeded to seal their militant renown by kidnapping Patty Hearst in February 1974. Then, in May, after Hearst joined up with her revolutionary captors, law enforcement officials decimated their safe house with more than nine thousand rounds of live ammunition, killing six, also on live TV. Between 1972 and 1974 the FBI counted more than six thousand bombings or attempted bombings in the United States, with a combined death toll of ninety-one. In 1975 there were two presidential assassination attempts in one month.

Let's pause for a moment to think about that: More than six thousand bombings or attempted bombings in two years.

So, is the country disintegrating? In comparison with the Nixon years: No. Not even with Donald Ivanka Kushner Trump in charge. Which is not to say that it couldn't happen, only that it hasn't yet happened, and if we want to avoid further damage we would do well to study the history of fifty years ago with close attention. For the national wounds that were opened in the Sixties may have scabbed over from time to time in the decades since, but they have never healed.

And in relation specifically to the university, we might ask some questions:

  • How significant is it that most of the people running our universities today were undergraduates when things like the Cornell crisis happened?
  • If it is significant, what is the significance?
  • To what extent are the social conflicts that plague some universities today continuations of the conflicts that plagued them fifty years ago?
  • If universities today seem, to many critics, to have lost their commitment to free speech and reasoned disagreement, have they abandoned those principles any more completely than they did at the height of those earlier student protests?
  • What happened in the intervening decades? Did universities recover their core commitments wholly, or partially, or not at all?
  • How widespread are protests (and the "coddling" of protestors) today in comparison to that earlier era?
  • What needs to be fixed in our universities?
  • Are universities that have gone down this particular path — praising and celebrating students who confront, berate, and in some cases threaten faculty — fixable? (A question only for those who think such behavior is a bug rather than a feature.)

Vital questions all, I think; but not ones that can be answered in ignorance of the relevant history.

Thursday, May 25, 2017

things and creatures, conscience and personhood

Yesterday I read Jeff VanderMeer’s creepy, disturbing, uncanny, and somehow heart-warming new novel Borne, and it has prompted two sets of thoughts that may or may not be related to one another. But hey, this is a blog: incoherence is its birthright. So here goes.

1.

A few months ago I wrote a post in which I quoted this passage from a 1984 essay by Thomas Pynchon:

If our world survives, the next great challenge to watch out for will come — you heard it here first — when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long.

If you look at the rest of the essay, you’ll see that Pynchon thinks certain technological developments could be embraced by Luddites because the point of Luddism is not to reject technology but to empower common people in ways that emancipate them from the dictates of the Capitalism of the One Percent.

But why think that future technologies will not be fully under the control of the “biggest of brass”? It is significant that Pynchon points to the convergence of “artificial intelligence, molecular biology and robotics” — which certainly sounds like he’s thinking of the creation of androids: humanoid robots, biologically rather than mechanically engineered. Is the hope, then, that such beings would become not just cognitively but morally independent of their makers?

Something like this is the scenario of Borne, though the intelligent being is not humanoid in either shape or consciousness. One of the best things about the book is how it portrays a possible, though necessarily limited, fellowship between humans and fundamentally alien (in the sense of otherness, not from-another-planet) sentient beings. And what enables that fellowship, in this case, is the fact that the utterly alien being is reared and taught from “infancy” by a human being — and therefore, it seems, could have become something rather, though not totally, different if a human being with other inclinations had done the rearing. The story thus revisits the old nature/nurture question in defamiliarizing and powerful ways.

The origins of the creature Borne are mysterious, though bits of the story are eventually revealed. He — the human who finds Borne chooses the pronoun — seems to have been engineered for extreme plasticity of form and function, a near-total adaptability that is enabled by what I will call, with necessary vagueness, powers of absorption. But a being so physiologically and cognitively flexible simply will not exhibit predictable behavior. And therefore one can imagine circumstances in which such a being could take a path rather different than that chosen for him by his makers; and one can imagine that different path being directed by something like conscience. Perhaps this is where Luddites might place their hopes for the convergence of “artificial intelligence, molecular biology and robotics”: in the possibility that from that convergence might arise a technology with a conscience.

2. 

Here is the first sentence of Adam Roberts’s novel Bête:

As I raised the bolt-gun to its head the cow said: ‘Won’t you at least Turing-test me, Graham?’

If becoming a cyborg is a kind of reaching down into the realm of the inanimate for resources to supplement the deficiencies inherent in being made of meat, what do we call this reaching up? — this cognitive enhancement of made objects and creatures until they become in certain troubling ways indistinguishable from us? Or do we think of the designing of intelligent machines, even spiritual machines, as a fundamentally different project than the cognitive enhancement of animals? In Borne these kinds of experiments — and others that involve the turning of humans into beasts — are collectively called “biotech.” I would prefer, as a general term, the one used in China Miéville’s fabulous novel Embassytown: “biorigging,” a term that connotes complex design, ingenuity, and a degree of making-it-up-as-we-go-along. Such biorigging encompasses every kind of genetic modification but also the combining, in a single organism or thing, of biological components with more conventionally technological ones, the animate and the inanimate. It strikes me that we need a more detailed anatomy of these processes — more splitting, less lumping.

In any case, what both VanderMeer’s Borne and Roberts’s Bête do is describe a future (far future in one case, near in the other) in which human beings live permanently in an uncanny valley, where the boundaries between the human and the nonhuman are never erased but never quite fixed either, so that anxiety over these matters is woven into the texture of everyday experience. Which sounds exhausting. And if VanderMeer is right, then the management of this anxiety will become focused not on the unanswerable questions of what is or is not human, but rather on a slightly but profoundly different question: What is a person?

anti-Latour

When I made this chart I titled it "anti-Latour," but I don't remember why.

Tuesday, May 23, 2017

accelerationism and myth-making

I've been reading a good bit lately about accelerationism — the belief that to solve our social problems and reach the full potential of humanity we need to accelerate the speed of technological innovation and achievement. Accelerationism is generally associated with techno-libertarians, but there is a left accelerationism also, and you can get a decent idea of the common roots of those movements by reading this fine essay in the Guardian by Andy Beckett. Some other interesting summary accounts include this left-accelerationism manifesto and Sam Frank's anthropological account of life among the "apocalyptic libertarians." Accelerationism is mixed up with AI research and neo-reactionary thought and life-extension technologies and transhumanist philosophy — basically, all the elements of the Californian ideology poured into a pressure cooker and heat-bombed for a few decades.

There's a great deal to mull over there, but one of the chief thoughts I take away from my reading is this: the influence of fiction, cinema, and music over all these developments is truly remarkable — or, to put it another way, I'm struck by the extent to which extremely smart and learned people find themselves imaginatively stimulated primarily by their encounters with popular culture. All these interrelated movements seem to be examples of trickle-up rather than trickle-down thinking: from storytellers and mythmakers to formally-credentialed intellectuals. This just gives further impetus to my effort to restock my intellectual toolbox for (especially) theological reflection.

One might take as a summary of what I'm thinking about these days a recent reflection by Warren Ellis, the author of, among many other things, my favorite comic:

Speculative fiction and new forms of art and storytelling and innovations in technology and computing are engaged in the work of mad scientists: testing future ways of living and seeing before they actually arrive. We are the early warning system for the culture. We see the future as a weatherfront, a vast mass of possibilities across the horizon, and since we’re not idiots and therefore will not claim to be able to predict exactly where lightning will strike – we take one or more of those possibilities and play them out in our work, to see what might happen. Imagining them as real things and testing them in the laboratory of our practice — informed by our careful cross-contamination by many and various fields other than our own — to see what these things do.

To work with the nature of the future, in media and in tech and in language, is to embrace being mad scientists, and we might as well get good at it.

We are the early warning system for the culture. Cultural critics, read and heed.

Monday, May 22, 2017

Frederick Barbarossa won't be around to save you

In the Boston Globe, Kumble R. Subbaswamy writes,

More than 850 years ago, the emperor of the Holy Roman Empire, Frederick Barbarossa, issued the Authentica habita, granting imperial protection for traveling scholars. This seminal document ensured that research and scholarship could develop throughout the empire independent of government interference, and shielded scholars from reprisal for their academic endeavors. These concepts, the foundation for what we now refer to as “academic freedom,” have, over the centuries, enabled some of the most significant advances in the history of humankind.

As chancellor of the University of Massachusetts Amherst, I work with my colleagues in an environment envied by others. Through the inventiveness of trial and error, the exchange of ideas, peer critique, heated debate, and sometimes even ridicule, we put ourselves out there, focused on our research and scholarly pursuits. Without the freedom to experiment, to fail, to persuasively defend our work, we would not learn, and then improve, and eventually succeed. Without this freedom, we would not be able to pass on to our students the importance of pursuing the truth.

All this is good, and well said, but the invocation of the Authentica habita is perhaps misplaced. For the purpose of that document was to protect scholars from anger or extortion by extra-academic forces, especially local political authorities across the Empire, whereas the most common threats to academic freedom today come from academics. Whenever an academic these days is threatened with serious personal or professional repercussions for articulating unapproved ideas, you can be pretty sure that the call is coming from inside the house.

So if you, fellow academic, think that justice requires that you police, fiercely, untenured assistant professors of philosophy who make arguments that read directly out of the Progressive Prayer Book but stumble over one phrase: fine. Knock yourself out. But don’t expect anyone else to stand up, ever, for the principles that Frederick Barbarossa stood up for. And under the category “anyone else” I would specifically encourage you to remember local, state, and national legislatures, students, donors, and trustees.

I have beaten this drum over and over again in the past decade, so why not one more time? — People who think like you won’t always be in charge. This is a lesson that the Left seems especially incapable of learning, I think because of its deep-seated belief in the inevitability of progress, a belief that is belied by even the briefest inspection of Washington D.C. You, and people you want to support, may well pay in the future for every victory lap you take today.

But there's another problem here, one that operates in a different dimension — not the dimension of employment or prestige, but rather that of intellectual exploration itself. Some years ago, in a brilliant essay called "Philosophy as a Humanistic Discipline," Bernard Williams wrote of

the well known and highly typical style of many texts in analytic philosophy which seeks precision by total mind control, through issuing continuous and rigid interpretative directions. In a way that will be familiar to any reader of analytic philosophy, and is only too familiar to all of us who perpetrate it, this style tries to remove in advance every conceivable misunderstanding or misinterpretation or objection, including those that would occur only to the malicious or the clinically literal-minded.

But we now live in an academic world increasingly ruled by the malicious and the clinically literal-minded. They occupy the stage and issue their dictates, and get less and less resistance to any ukase they choose to promulgate. This leads to an environment which, by analogy to what Williams calls "the teaching of philosophy by eristic argument," "tends to implant in philosophers an intimidatingly nit-picking superego, a blend of their most impressive teachers and their most competitive colleagues, which guides their writing by means of constant anticipations of guilt and shame." With increasing frequency, this is what academic thought and academic discourse are driven by: constant anticipations of guilt and shame. Which is, needless to say, no recipe for intellectual creativity and genuine ambition.

Friday, May 19, 2017

fleshers and intelligences

I'm not a great fan of Kevin Kelly's brand of futurism, but this is a great essay by him on the problems that arise when thinking about artificial intelligence begins with what the Marxists used to call "false reification": the belief that intelligence is a bounded and unified concept that functions like a thing. Or, to put Kelly's point a different way, it is an error to think that human beings exhibit a "general purpose intelligence" and therefore an error to expect that artificial intelligences will do the same.

To this reifying orthodoxy in AI research Kelly opposes five affirmations of his own:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

Expanding on that first point, Kelly writes,

Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence. If intelligence were a single dimension we should be able to arrange the intelligences of a parrot, a dolphin, a horse, a squirrel, an octopus, a blue whale, a cat, and a gorilla in the correct ascending order in a line. We currently have no scientific evidence of such a line. One reason might be that there is no difference between animal intelligences, but we don’t see that either. Zoology is full of remarkable differences in how animals think. But maybe they all have the same relative “general intelligence?” It could be, but we have no measurement, no single metric for that intelligence. Instead we have many different metrics for many different types of cognition.

Think, to take just one example, of the acuity with which dogs observe and respond to a wide range of human behavior: they attend to tone of voice, facial expression, gesture, even subtle forms of body language, in ways that animals invariably ranked higher on what Kelly calls the "mythical ladder" of intelligence (chimpanzees, for instance) are wholly incapable of. But dogs couldn't begin to use tools the way that many birds, especially corvids, can. So what's more intelligent, a dog or a crow or a chimp? It's not really a meaningful question. Crows and dogs and chimps are equally well adapted to their ecological niches, but in very different ways that call forth very different cognitive abilities.
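
To put the point a bit more formally: if intelligence is a profile of several distinct capacities rather than a single number, then comparing two creatures yields at best a partial order, and many pairs simply cannot be ranked at all. Here is a toy sketch of that idea (my own illustration, with invented dimensions and numbers, not anything from Kelly's essay):

```swift
// A toy model, not Kelly's: each creature gets a profile of separate
// capacities (the dimensions and the numbers are made up).
struct CognitiveProfile {
    let socialReading: Double   // e.g. a dog's sensitivity to human cues
    let toolUse: Double         // e.g. a crow's tool manufacture
    let spatialMemory: Double

    // One profile "dominates" another only if it is at least as good on
    // every dimension: a partial order, not a ranking.
    func dominates(_ other: CognitiveProfile) -> Bool {
        return socialReading >= other.socialReading &&
               toolUse >= other.toolUse &&
               spatialMemory >= other.spatialMemory
    }
}

let dog  = CognitiveProfile(socialReading: 0.9, toolUse: 0.2, spatialMemory: 0.5)
let crow = CognitiveProfile(socialReading: 0.3, toolUse: 0.9, spatialMemory: 0.7)

// Neither profile dominates the other, so "which is smarter?" has no
// answer unless you first collapse each profile into a single number,
// which is exactly the move Kelly (and Gould) object to.
print(dog.dominates(crow), crow.dominates(dog))   // false false
```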

If Kelly is right in his argument, then AI research is going to be hamstrung by its commitment to g or "general intelligence," and will only be able to produce really interesting and surprising intelligences when it abandons the idea, as Stephen Jay Gould puts it in his flawed but still-valuable The Mismeasure of Man, that "intelligence can be meaningfully abstracted as a single number capable of ranking all people [including digital beings!] on a linear scale of intrinsic and unalterable mental worth."

"Mental worth" is a key phrase here, because a commitment to g has been historically associated with explicit scales of personal value and commitment to social policies based on those scales. (There is of course no logical link between the two commitments.) Thus the argument frequently made by eugenicists a century ago that those who score below a certain level on IQ tests — tests purporting to measure g — should be forcibly sterilized. Or Peter Singer's view that he and his wife would be morally justified in aborting a Down syndrome child simply because such a child would probably grow up to be a person "with whom I could expect to have conversations about only a limited range of topics," which "would greatly reduce my joy in raising my child and watching him or her develop." A moment's reflection should be sufficient to dismantle the notion that there is a strong correlation between, on the one hand, intellectual agility and verbal fluency and, on the other, moral excellence; which should also undermine Singer's belief that a child who is deficient in his imagined general intelligence is ipso facto a person he couldn't "treat as an equal." But Singer never gets to that moment of reflection because his rigid and falsely reified model of intellectual ability, and the relations between intellectual ability and personal value, disables his critical faculties.

If what Gould in another context called the belief that intelligence is "an immutable thing in the head" which allows "grading human beings on a single scale of general capacity" is both erroneous and pernicious, it is somewhat disturbing to see that belief not only continuing to flourish in some communities of discourse but also being extended into the realm of artificial intelligence. If digital machines are deemed superior to human beings in g, and if superiority in g equals greater intrinsic worth... well, the long-term prospects for what Greg Egan calls "fleshers" aren’t great. Unless you’re one of the fleshers who controls the machines. For now.



P.S. I should add that I know that people who are good at certain cognitive tasks tend to be good at other cognitive tasks, and also that, as Freddie deBoer points out here, IQ tests — that is, tests of general intelligence — have predictive power in a range of social contexts, but I don’t think any of that undermines the points I’m making above. Happy to be corrected where necessary, of course.