Text Patterns - by Alan Jacobs

Wednesday, October 26, 2016

KSR's Mars: a stab at a course description

Posting continues to be light and rare around here because I’m still slaving away at two books — one and two — but I am not a machine, so I spend some time each day reading for fun. And the other day I was possessed by an unexpected, sudden, and irresistible urge to re-read Kim Stanley Robinson’s Mars trilogy.

I’m about halfway through Red Mars now and it is just thrilling to be back in this fictional world again. Does KSR put a foot wrong in the whole 2000 pages of the trilogy? I think not. It’s simply a masterpiece.

But in addition to the pure enjoyment of it, I find myself mulling over a possibility: What about teaching an interdisciplinary course built around these books? It would be a way to explore, among other things,

  • the distinctive social value of SF
  • environmental politics
  • the economics and politics of colonialism
  • the future prospects of internationalism
  • the nature of science and the Oppenheimer Principle
  • aesthetics and human perceptions of value
  • geology and areology
  • robotics and automation in manufacturing
  • designing politics from Square One (or what looks to some like Square One)

And that’s just a short list! So, friends, I have three questions.

First, does that sound like a useful and/or fun course?

Second, have I neglected any key themes in the trilogy?

And third, what might be some ancillary texts to assign? For instance, to help us think about the ways that SF enables political thought, I might want at least some students to read Ursula K. Le Guin’s The Dispossessed; on the possibilities of Martian colonization I might suggest Robert Zubrin’s The Case for Mars; on the questions of what science is and what its ultimate values are, I might assign Ursula Franklin’s The Real World of Technology.

But I’m not sure what I might assign on the hard-science side. I’d love to find a book on robotics that is technically detailed but has some of the panache of Neal Stephenson’s famous report on undersea cables and international communication, “Mother Earth Mother Board,” but that might be too much to ask for. I’d love to find an introduction to geology that had some of the clarity and power of John McPhee’s Annals of the Former World but at one-tenth the length. Any help would be much appreciated.

on Social Justice U

Jonathan Haidt explains “Why Universities Must Choose One Telos: Truth or Social Justice.” When my friend Chad Wellmon (on Twitter) questioned Haidt’s dichotomy, I agreed that there is a problem. After all, people who are promoting social justice in the university think that their beliefs are true!

But I also think Haidt has a point — it just needs to be rephrased. The social-justice faction in the university believes that the most fundamental questions about what justice is have already been answered, and require no further reflection or investigation. (And from this follows the belief that questioning The Answers, and still worse suggesting other answers, is, as Haidt says, a kind of blasphemy: At Social Justice University, “there are many blasphemy laws – there are ideas, theories, facts, and authors that one cannot use. This makes it difficult to do good social science about politically valenced topics. Social science is hard enough as it is, with big complicated problems resulting from many interacting causal forces. But at SJU, many of the most powerful tools are simply banned.”)

What needs to happen, then, I believe, is for “SJU” to be honest about its own intellectual constitution, to say openly, In this university, we are not concerned to follow the model of many academic enterprises and inquire into the nature and forms of justice. We believe we already know what those are. Therefore our questions will involve how best to implement the understanding we have all already agreed to before beginning our work.

And you know, if SJU is a private institution, I don't think they would be simply wrong to do this. After all, I have spent my teaching career in Christian institutions, where there are also certain foundational assumptions at work — which, indeed, is true even at Haidt’s Truth U. If Haidt really thinks that there is no blasphemy at Truth U he is sorely mistaken. (Thought experiment: a professor grades her students by seeking the wisdom of the I Ching.) Every educational institution either implicitly or explicitly sets certain boundaries to its pursuits, that is, agrees to set certain questions aside in order to focus on others. And what has long made American higher education so distinctive is its willingness to let a thousand institutional flowers, of very different species, bloom.

The question I would have for proponents of SJU is: Do you embrace the ideological diversity that has been a hallmark of the American system? Are you willing to allow SJU to do its work alongside Truth U and Christian U, and argue for all of those institutional types to be treated equally under the law? Or, rather, do you want every college and university to be dedicated to social justice as you understand it — for there to be no institutions where the very definition of justice is open to question and debate?

Friday, October 21, 2016

Kathleen Fitzpatrick and "generous thinking"

As I’ve mentioned before, I have been working with colleagues for some time now on a document about the future of the humanities, both within and without the university — more about that in due course. And some of my recent work has been devoted to this constellation of issues: see this review-essay in Books and Culture and this longish reflection in National Affairs.

So in light of all that I’m delighted to see that the estimable Kathleen Fitzpatrick is engaged in a new project on “Generous Thinking” in the university: see the first two installments here and here. I am really excited about the direction Kathleen is taking here and I hope to be a useful interlocutor for her — if I can get these dang books finished.

Wednesday, October 19, 2016

a hidden musical culture

If you don't subscribe to Robin Sloan's P R I M E S newsletter, you should. In the most recent edition he talks about this video:

Robin says this video is "wonderful for its evolving sound and also for its inscrutability. I mean, how is he making those noises? How has he learned to play that monstrous instrument?? It's amazing."

Leave that video playing in a tab. It's really nice. It's also quite strange, because it exists. I get the sense, observing this hobby from afar, that most of these slow-building basement performances are ephemeral or, if recorded, never shared. This is a music culture almost totally orthogonal to iTunes and Spotify and even SoundCloud.

I love the vision of a modular synthesizer enthusiast -- ideally 51 years old, a tax accountant with two children -- padding down into the basement where the music machine waits to spend an hour pulling cables and twirling knobs, listening and tweaking, building an analog soundscape, thick and warm like a blanket, and, in that moment, theirs alone.

except for all the others

Farhad Manjoo thinks the Clinton campaign email scandal proves that email in general needs to be ditched:

Email sometimes tricks us into feeling efficient, but it rarely is. Because it’s asynchronous, and because there are no limits on space and time, it often leads to endless, pointless ruminations. If they had ditched email and just held a 15-minute meeting, members of the campaign could have hashed out the foreign-agent decision more quickly in private.

In other words, limits often help. Get on the phone, make a decision, ditch your inbox. The world will be better off for it.

Sounds great — but what if “members of the campaign” weren’t all in the same place? I guess then Manjoo would say “get on the phone” — but have you ever tried arranging a conference call? If more than three people are involved it’s next to impossible. Talk about “inefficient”!

Also, when people hold a conference call to make a significant decision, it’s typically recorded so there will be a record of what they’ve decided, which is necessary in order to avoid the “that’s not how I remember it” problem — but that means that you have something that can be stolen later by nefarious parties.

Manjoo recommends Slack or Hipchat, which can work, but only when the conversation is among people wholly within a given organization.

Email drives me crazy the way it drives everyone else crazy, but I can set aside certain times of the day in which to use it. If I had to have my work interrupted eleven times a day for phone conferences, at someone else’s convenience, or had to have a Slack window open and pinging merrily away all day long, I’d never get anything done. Churchill’s famous comment about democracy — “the worst form of government, except for all those others that have been tried” — might be adapted here: email is the worst form of business communication, except for all the others that Manjoo recommends.

physicians, patients, and intellectual triage

Please, please read this fascinating essay by Maria Bustillos about her daughter’s diagnosis of MS — and how doctors can become blind to some highly promising forms of treatment. The problem? The belief, drilled into doctors and scientists at every stage of their education, that double-blind randomized trials are not just the gold standard for scientific evidence but the only evidence worth consulting. One of the consequences of that belief: that diet-based treatments never get serious consideration, because they can’t be tested blindly. People always know what they’re eating.

See this passage, which refers to Carmen’s doctor as “Dr. F.”:

In any case, the question of absolute “proof” is of no interest to me. We are in no position to wait for absolute anything. We need help now. And incontrovertibly, there is evidence — not proof, but real evidence, published in a score of leading academic journals — that animal fat makes MS patients worse. It is very clearly something to avoid. In my view, which is the view of a highly motivated layperson whose livelihood is, coincidentally, based in doing careful research, there is not the remotest question that impaired lipid metabolism plays a significant role in the progression of MS. Nobody understands exactly how it works, just yet, but if I were a neurologist myself, I would certainly be telling my patients, listen, you! — just in case, now. Please stick to a vegan plus fish diet, given that the cost-benefit ratio is so incredibly lopsided in your favor. There’s no risk to you. The potential benefit is that you stay well.

But Dr. F, who is a scientist, and moreover one charged with looking after people with MS, is advising not only against dieting, but is literally telling someone (Carmen!) who has MS, yes, if you like butter, you should “enjoy” it, even though there is real live evidence that it might permanently harm you, but not proof, you know.

In this way, Dr. F. illustrates exactly what has gone wrong with so much of American medicine, and indeed with American society in general. I know that sounds ridiculous, like hyperbole, but I mean it quite literally. Dr. F. made no attempt to learn about or explain how, if saturated fat is not harmful, Swank, and now Jelinek, could have arrived at their conclusions, though she cannot prove that saturated fat isn’t harmful to someone with MS. The deficiency in Dr. F.’s reasoning is not scientific: it’s more like a rhetorical deficiency, of trading a degraded notion of “proof” for meaning, with potentially catastrophic results. Dr. F. may be a good scientist, but she is a terrible logician.

I might say, rather than “terrible logician,” Dr. F. is someone who is a poor reasoner — who has made herself a poor reasoner by dividing the world into things that are proven and all other things, and then assuming that there’s no way to distinguish among all those “other things.”

You can see how this happens: the field of medicine is moving so quickly, with new papers coming out every day (and being retracted every other day), that Dr. F. is just doing intellectual triage. The firehose of information becomes manageable if you just stick to things that are proven. But as Bustillos says, people like Carmen don't have that luxury.

What an odd situation. We have never had such powerful medicine; and yet it has never been more necessary for sick people to learn to manage their own treatment.

Monday, October 17, 2016

John Gray and the human future

So many things I wish I could write about; so little time to do anything but work on those darn books. But at least I can call your attention to a few of those provocations. I’ll do one today, others later this week.

John Gray’s review of Yuval Noah Harari’s Homo Deus makes some vital distinctions that enthusiastic futurists like Harari almost never make. See this key passage:

Harari is right in thinking of human development as a process that no one could have planned or intended. He fails to see that the same is true of the post-human future. If such new species appear, they will be created by governments and powerful corporations, and used by any group that can get its hands on them – criminal cartels, terrorist networks, religious cults, and so on. Over time, these new species will be modified and redesigned, first by their human controllers, then by the new species themselves. It won’t be too long ­before some of them slip free from their human creators. One type may come out on top, at least for a while, but there is nothing to suggest this process will end in a godlike being that is supreme over all the rest. Like the evolution of human beings, post-human evolution will be a process of drift, with no direction or endpoint.

It is interesting how closely Gray’s argument tracks with the one C. S. Lewis made in The Abolition of Man, especially the third chapter, on the Conditioners and the rest of us. And of course — of course — there’s no question which group Harari identifies with, because futurists always assume the position of power — they always think of themselves as sitting comfortably among the Conditioners: “Forget economic growth, social reforms and political revolutions: in order to raise global happiness levels, we need to manipulate human biochemistry.” To which John Gray rightly replies,

Yet who are “we”, exactly? An elite of benevolent scientists, perhaps? If choice is an illusion, however, those who do the manipulating will be no freer than those who are being manipulated.

That point about free will is also Lewis’s. But Gray raises a possibility that Lewis didn't explore in Abolition, though he hints at it in the fictional counterpart to that book, That Hideous Strength: What if the Controllers don't agree with one another? “If it ever comes about, a post-human world won’t be one in which the human species has deified itself. More like the cosmos as imagined by the Greeks, it will be ruled by a warring pantheon of gods.” And so John Gray’s recommendation to those of us who want to understand what such a world would be like? “Read Homer.”

Friday, October 14, 2016

the future of the codex Bible

Catching up on a topic dear to my heart: here’s a fine essay in Comment by J. Mark Bertrand on printed and digital Bibles. A key passage:

Pastors and scholars rely heavily on software like BibleWorks and Accordance, and laypeople in church are more likely to open Bible apps on their phones than to carry printed editions. The days are coming and may already be upon us when parishioners look askance at sermons not preached from an iPad. ("But aren't you missional?")

And yet, the printed Bible is not under threat. If anything the advent of e-books has ushered in a renaissance of sorts for the physical form of the Good Book. The fulfillment of the hypertext dream by digital Bibles has cleared the way for printed Bibles to pursue other ends. The most exciting reinvention of the printed Scriptures is the so-called reader's Bible, a print edition designed from the ground up not as a reference work but as a book for deep, immersive reading.

Please read it all. And then turn to Bertrand’s Bible Design Blog, where he has recently reflected further on the same issues, and written a few detailed posts — one and two and three — on the new Crossway Reader’s Bible, in six beautifully printed and bound volumes. I got my copies the other day, and they really do constitute a remarkable feat of workmanship and design. You can read, and view, more about the project here.

I would love to say more about all this — and other matters dear to the heart of this ol’ blog — but I am still devoting most of my time to work on two books, one on Christian intellectual life in World War II and one called How to Think: A Guide for the Perplexed. Those will be keeping my mind occupied for the next few months. When I am able to post here, the posts will likely do little more than point to interesting things elsewhere.

Tuesday, October 11, 2016

thoughts on the processing of words

This review was commissioned by John Wilson and meant for Books and Culture. Alas, it will not be published there.

“Each of us remembers our own first time,” Matthew Kirschenbaum writes near the beginning of his literary history of word processing — but he rightly adds, “at least ... those of us of a certain age.” If, like me, you grew up writing things by hand and then at some point acquired a typewriter, then yes, your first writing on a computer may well have felt like a pretty big deal.

The heart of the matter was mistakes. When typing on a typewriter, you made mistakes, and then had to decide what, if anything, to do about them; and woe be unto you if you didn't notice a mistyped word until after you had removed the sheet of paper from the machine. If you caught it immediately after typing, or even a few lines later, then you could roll the platen back to the proper spot and use correcting material — Wite-Out and Liquid Paper were the two dominant brands, though fancy typewriters had their own built-in correction tape — to cover the offending marks and replace them with the right ones. But if you had already removed the paper, then you had to re-insert it and try, by making minute adjustments with the roller or the paper itself, to get everything set just so — but perfect success was rare. You’d often end up with the new letters or words slightly out of alignment with the rest of the page. Sometimes the results would look so bad that you’d rip the paper out of the machine in frustration and type the whole page again, but by that time you’d be tired and more likely to make further mistakes....

Moreover, if you were writing under any kind of time pressure — and I primarily used a typewriter to compose my research papers in college and graduate school, so time pressure was the norm — you were faced with a different sort of problem. Scanning a page for correctable mistakes, you were also likely to notice that you had phrased a point awkwardly, or left out an important piece of information. What to do? Fix it, or let it be? Often the answer depended on where in the paper the deficiencies appeared, because if they were to be found on, say, the second page of the paper, then any additions would force the retyping not only of that page but of every subsequent page — something not even to be contemplated when you were doing your final bleary-eyed 2 AM inspection of a paper that had to be turned in when you walked into your 9 AM class. You’d look at your lamentably imprecise or incomplete or just plain fuddled work and think, Ah, forget it. Good enough for government work — and fall into bed and turn out the light.

The advent of “word processing” — what an odd phrase — electronic writing, writing on a computer, whatever you call it, meant a sudden and complete end to these endless deliberations and tests of your fine motor skills. You could change anything! anywhere! right up to the point of printing the thing out — and if you had the financial wherewithal or institutional permissions that allowed you to ignore the cost of paper and ink, you could even print out a document, edit it, and then print it out again. A brave new world indeed. Thus, as the novelist Anne Rice once commented, when you’re using a word processor “There’s really no excuse for not writing the perfect book.”

But there’s the rub, isn't there? For some few writers the advent of word processing was a pure blessing: Stanley Elkin, for instance, whose multiple sclerosis made it impossible for him to hold a pen properly or press a typewriter’s keys with sufficient force, said that the arrival of his first word-processing machine was “the most important day of my literary life.” But for most professional writers — and let’s remember that Track Changes is a literary history of word processing, not meant to cover the full range of its cultural significance — the blessing was mixed. As Rice says, now that endless revision is available to you, as a writer you have no excuse for failing to produce “the perfect book” — or rather, no excuse save the limitations of your own talent.

As a result, the many writers’ comments on word processors that Kirschenbaum cites here tend to be curiously ambivalent: it’s often difficult to tell whether they’re praising or damning the machines. So the poet Louis Simpson says that writing on a word processor “tells you your writing is not final,” which sounds like a good thing, but then he continues: “It enables you to think you are writing when you are not, when you are only making notes or the outline of a poem you may write at a later time.” Which sounds ... not so good? It’s hard to tell, though if you look at Simpson’s whole essay, which appeared in the New York Times Book Review in 1988, you’ll see that he meant to warn writers against using those dangerous machines. (Simpson’s article received a quick and sharp rebuttal from William F. Buckley, Jr., an early user of and advocate for word processors.)

Similarly, the philosopher Jacques Derrida, whom Kirschenbaum quotes on the same page:

Previously, after a certain number of versions, everything came to a halt — that was enough. Not that you thought the text was perfect, but after a certain period of metamorphosis the process was interrupted. With the computer, everything is rapid and so easy; you get to thinking you can go on revising forever.

Yes, “you get to thinking” that — but it’s not true, is it? At a certain point revision is arrested by publishers’ deadlines or by the ultimate deadline, death itself. The prospect of indefinite revision is illusory.

But however ambivalent writers might be about the powers of the word processor, they are almost unanimous in insisting that they take full advantage of those powers. As Hannah Sullivan writes in her book The Work of Revision, which I reviewed in these pages, John Milton, centuries ago, claimed that his “celestial patroness ... inspires easy my unpremeditated verse,” but writers today will tell you how much they revise until you’re sick of hearing about it. This habit predates the invention of the word processor, but has since become universal. Writers today do not aspire, as Italian Renaissance courtiers did, to the virtue called sprezzatura: a cultivated nonchalance, doing the enormously difficult as though it were easy as pie. Just the opposite: they want us to understand that their technological equipment does not make their work easier but far, far harder. And in many ways it does.

Matthew Kirschenbaum worked on Track Changes for quite some time: pieces of the book started appearing in print, or in public pixels, at least five years ago. Some of the key stories in the book have therefore been circulating in public, and the most widely-discussed of them have focused on a single question: What was the first book to be written on a word processor? This turns out to be a very difficult question to answer, not least because of the ambiguities inherent in the terms “written” and “word processor.” For instance, when John Hersey was working on his novel My Petition for More Space, he wrote a complete draft by hand and then edited it on a mainframe computer at Yale University (where he then taught). Unless I have missed something, Kirschenbaum does not say how the handwritten text got into digital form, but I assume someone entered the data for Hersey, who wanted to do things this way largely because he was interested in his book’s typesetting, and the program — called the Yale Editor, or just E — gave him some control over that process. So in a strict sense Hersey did not write the book on the machine; nor was the machine a “word processor” as such.

But in any case, Hersey, who used the Yale Editor in 1973, wouldn't have beaten Kirschenbaum’s candidate for First Word-Processed Literary Book: Len Deighton’s Bomber, a World War II thriller published in 1970. Deighton, an English novelist who had already published several very successful thrillers, most famously The IPCRESS File in 1962, had the wherewithal to drop $10,000 — well over $50,000 in today’s money — on IBM’s Frankensteinian hybrid of a Selectric typewriter and a tape-based computing machine, the MT/ST. IBM had designed this machine for heavy office use, never imagining that any individual writer would purchase one, so minimizing the size hadn’t been a focus of the design: as a result, Deighton could only get the thing into his flat by having a window removed, which allowed it to be swung into his study by a crane.

Moreover, Deighton rarely typed on the machine himself: that task was left to his secretary, Ellenor Handley, who also took care to print sections of the book told from different points of view on appropriately color-coded paper. (This enabled Deighton to see almost at a glance whether some perspectives were over-represented in his story.) So even if Bomber is indeed the first word-processed book, the unique circumstances of its composition set it well apart from what we now think of as the digital writing life. Therefore, Kirschenbaum also wonders “who was the first author to sit down in front of a digital computer’s keyboard and compose a published work of fiction or poetry directly on the screen.”

Quite possibly it was Jerry Pournelle, or maybe it was David Gerrold or even Michael Crichton or Richard Condon; or someone else entirely whom I have overlooked. It probably happened in the year 1977 or 1978 at the latest, and it was almost certainly a popular (as opposed to highbrow) author.

After he completed Track Changes, Kirschenbaum learned that Gay Courter’s 1981 bestselling novel The Midwife was written completely on an IBM System 6 word processor that she bought when it first appeared on the market in 1977 — thus confirming his suspicion that mass-market authors were quicker to embrace this technology than self-consciously “literary” ones, and reminding us of what he says repeatedly in the book: that his account is a kind of first report from a field that we’ll continue to learn more about.

In any case, the who-was-first questions are not as interesting or as valuable as Kirschenbaum’s meticulous record of how various writers — Anne Rice, Stephen King, John Updike, David Foster Wallace — made, or did not quite make, the transition from handwritten or typewritten drafts to a full reliance on the personal computer as the site for literary writing. Wallace, for instance, always wrote in longhand and transcribed his drafts to the computer at some relatively late stage in the process. Also, when he had significantly altered a passage, he deleted earlier versions from his hard drive so he would not be tempted to revert to them.

The encounters of writers with their machines are enormously various and fun to read about. Kirschenbaum quotes a funny passage in which Jonathan Franzen described how his first word processor kept making distracting sounds that he could only silence by wedging a pencil in the guts of the machine. Franzen elsewhere describes using a laptop with no wireless access whose Ethernet port he glued shut so he could not get online — a problem not intrinsic to electronic writing but rather to internet-capable machines, and one that George R. R. Martin solves by writing on a computer that can’t connect to the internet, using the venerable word-processing program WordStar. Similarly, my friend Edward Mendelson continues to insist that WordPerfect for MS-DOS is the best word-processing program, and John McPhee writes using a computer program that a computer-scientist friend coded for him back in 1984. (I don't use a word-processing program at all, but rather a programmer’s text editor.) If it ain’t broke, don't fix it. And if it is broke, wedge a pencil in it.

Kirschenbaum believes that this transition to digital writing is “an event of the highest significance in the history of writing.” And yet he confesses, near the end of his book, that he’s not sure what that significance is. “Every impulse that I had to generalize about word processing — that it made books longer, that it made sentences shorter, that it made sentences longer, that it made authors more prolific — was seemingly countered by some equally compelling exemplar suggesting otherwise.” Some reviewers of Track Changes have wondered whether Kirschenbaum isn’t making too big a deal of the whole phenomenon. In the Guardian, Brian Dillon wrote, “This review is being drafted with a German fountain pen of 1960s design – but does it matter? Give me this A4 pad, my MacBook Air or a sharp stick and a stretch of wet sand, and I will give you a thousand words a day, no more and likely no different. Writing, it turns out, happens in the head after all.”

Maybe. But we can’t be sure, because we can’t rewind history and make Dillon write the review on his laptop, and then rewind it again, take him to the beach, and hand him a stick. I wrote this review on my laptop, but I sometimes write by speaking, using the Mac OS’s built-in dictation software, and I draft all of my books and long essays by hand, using a Pilot fountain pen and a Leuchtturm notebook. I cannot be certain, but I feel that each environment changes my writing, though probably in relatively subtle ways. For instance, I’m convinced that when I dictate my sentences are longer and employ more commas; and I think my word choice is more precise and less predictable when I am writing by hand, which is why I try to use that older technology whenever I have time. (Because writing by hand is slower, I have time to reconsider word choices before I get them on the page. But then I not only write more slowly, I have to transcribe the text later. If only Books and Culture and my book publishers would accept handwritten work!)

We typically think of the invention of printing as a massively consequential event, but Thomas Hobbes says in Leviathan (1651) that in comparison with the invention of literacy itself printing is perhaps “ingenious” but fundamentally “no great matter.” Which I suppose is true. This reminds us that assessing the importance of any technological change requires comparative judgment. The transition to word processing seemed like a very big deal at the time, because, as Hannah Sullivan puts it, it lowered the cost of revision to nearly zero. No longer did we have to go through the agonies I describe at the outset of this review. But I am now inclined to think that it was not nearly as important as the transition from stand-alone PCs to internet-enabled devices. The machine that holds a writer’s favored word-processing or text-editing application will now, barring interventions along the lines of Jonathan Franzen’s disabled Ethernet port, be connected to the endless stream of opinionating, bloviating, and hate-mongering that flows from our social-media services. And that seems to me an even more consequential change for the writer, or would-be writer, than the digitizing of writing was. Which is why I, as soon as I’ve emailed this review to John Wilson, will be closing this laptop and picking up my notebook and pen.

Friday, October 7, 2016

"oddkin" and really odd kin

I’ve been reading Donna Haraway’s Staying with the Trouble: Making Kin in the Chthulucene, and like all Haraway’s work it’s a strange combination of the deeply unconventional and the deeply conventional. Conventional in that formally it’s a standard academic monograph, complete with all the usual apparatus, including not just proper citations and endnotes but also extensive thanks to all the high-class venues around the world where noted academics get to visit to give draft versions of their book chapters. Unconventional in that Haraway has some peculiar ideas and a peculiar (but often delightful, to me anyway) prose style. I find myself wishing that the form were as ambitious and unpredictable as the weirder of the ideas; the rigors of standard academic discursive practice serve as a kind of straitjacket for those ideas, it seems to me.

Here’s a passage that will give you a pretty good sense of what Haraway is up to in this book:

The book and the idea of “staying with the trouble” are especially impatient with two responses that I hear all too frequently to the horrors of the Anthropocene and the Capitalocene. The first is easy to describe and, I think, dismiss, namely, a comic faith in technofixes, whether secular or religious: technology will somehow come to the rescue of its naughty but very clever children, or what amounts to the same thing, God will come to the rescue of his disobedient but ever hopeful children. In the face of such touching silliness about technofixes (or techno-apocalypses), sometimes it is hard to remember that it remains important to embrace situated technical projects and their people. They are not the enemy; they can do many important things for staying with the trouble and for making generative oddkin.

“Making generative oddkin”? Yes. Seeking to become kin with all sorts of creatures and things — pigeons, for instance. There’s a brilliant early chapter here on human interaction with pigeons. Of course, that interaction has been conducted largely on human terms, and Haraway wants to create two-way streets where in the past they ran only from humans to everything else. “Staying with the trouble requires making oddkin; that is, we require each other in unexpected collaborations and combinations, in hot compost piles. We become-with each other or not at all.”

But here’s the thing: Haraway’s human kin are “antiracist, anticolonial, anticapitalist, proqueer feminists of every color and from every people,” and people who share her commitment to “Make Kin Not Babies”: “Pronatalism in all its powerful guises ought to be in question almost everywhere.”

It’s easy to talk about the need to “become-with each other,” but based on many years of experience, I suspect that — to borrow a tripartite distinction from Scott Alexander — most people who use that kind of language are fine with their ingroup (“antiracist, anticolonial, anticapitalist, proqueer feminists of every color and from every people”) and fine with the fargroup (pigeons), but the outgroup? The outgroup that lives in your city and votes in the same elections you do? Not so much.

So here’s my question for Professor Haraway: Does the project of “making kin” extend to that couple down the street from you who have five kids, attend a big-box evangelical church, and plan to vote for Trump? Fair warning: They’re a little more likely to talk back than the pigeons are.

Wednesday, October 5, 2016

pronoun trouble

“Hah! That’s it! Hold it right there!” And then, knowledgeably, confidentially, to the audience: “Pronoun trouble.”

Yes, we’re having lots of pronoun trouble these days — for instance, at the University of Toronto. That story quotes “A. W. Peet, a physics professor who identifies as non-binary and uses the pronoun ‘they.’” This is a topic on which plenty of people have plenty of intemperate things to say, which means that a good many of the important underlying issues typically haven’t been explored very thoughtfully. Let me try to identify a couple of them.

The first thing to note is that, so far anyway, the debates haven't been about all pronouns: they have focused only on third person singular pronouns, or what might substitute for the third person singular pronoun — as in the case of Professor Peet, above. That is, the gendered pronouns in English. (As a number of commenters have pointed out, these debates would be almost impossible to have in most of the other European languages, into which gender distinctions are woven so much more densely — and in which, therefore, paradoxically enough, they don't seem to carry as much identity-bearing weight.)

For the Ontario Human Rights Commission, gender identity is “each person’s internal and individual experience of gender. It is their sense of being a woman, a man, both, neither, or anywhere along the gender spectrum,” and everyone has a right to declare where they are on that spectrum — but, more important, also to have that declaration accepted and acknowledged by others.

I have some questions and thoughts.

1) How would I feel if I had a boss — my Dean, say, or Provost (hi Tom, hi Greg) — who persistently referred to me as “she”? And ignored me when I said “That should be ‘he’”? I’d be pretty pissed off. But I’m not sure the best way to describe such language is as a violation of my human rights. Nor do I think it should be seen as a criminal act. Might there be some less extreme language to describe it?

2) Would the experience for someone whose “individual experience of gender” is less traditional than my own be morally and legally different? If so, why?

3) Some people — for instance, here’s the story of Paige Abendroth — claim to experience a “flip” of their “individual experience of gender” from time to time, unpredictably. How responsible should Paige’s coworkers be for keeping up with the flipping? How much tolerance would, or should, the Ontario Human Rights Commission have for any co-workers of Paige who struggled to get it right? Conversely, does Paige have the responsibility to inform everyone in the workplace that such flipping occurs, and to announce to them when it has occurred so that they can start using the proper pronouns?

4) Why should gender be the only relevant consideration here? Suppose I come to experience, as some people do, a complete detachment from myself — a kind of alienation powerful enough that it feels wrong to speak of myself as “I,” and deeply uncomfortable to be addressed as “you” — an existential, not just a rhetorical, illeism. Would my human rights be violated if my co-workers continued to employ the pronoun “you” when addressing me directly, if I wished them not to do so? Surely someone will object that if I were to have this experience it would be a sign of psychological disorder, but why? By what reasoning can we say that that kind of experience is disordered but the experience of Paige Abendroth isn’t? This is not a rhetorical question: I’d really like to know what the argument would look like.

5) We’ve been here before: just ask the theologians. The way many of them have addressed the “pronoun trouble” posed when they talk about God is, I think, a harbinger of the future. Many theologians say things like “How God experiences Godself is an unfathomable mystery” and “We must not think of God as utterly independent of God’s creation” — which is to say, they avoid pronouns altogether, gendered ones anyway. As pronoun preference comes to be more and more frequently enshrined within the discourse of human rights, people will become more and more fearful of the consequences of getting pronouns wrong; and the best way to avoid getting the pronouns wrong is to stop using them altogether.

I bet that’s where we’re headed. And I don't even think it will be all that hard to manage. If someone says "Paige needs to do what's best for Paige" instead of "Paige needs to do what's best for her," no one would even notice. When I first started thinking about how all this might apply to me, I realized how rarely, in a classroom setting, I have cause to use third-person singular pronouns. If Alison makes an interesting comment and I want to get a response, I say "What do y'all think about Alison's argument?" not "What do y'all think about her argument?" — the latter would seem rude, I think. Moreover, the good people at Language Log have spent years rehabilitating the singular "they." I can easily imagine the use of third-person singular pronouns gradually all but disappearing from our everyday language — though it will be easier to achieve in speech than in writing.

And then the world will move on to the next gross violation of human rights.

Mary Midgley on cooperative thinking

Mary Midgley is one of my favorite philosophers. Her The Myths We Live By plays a significant role in a forthcoming book of mine and her essay “On Trying Out One’s New Sword” eviscerates cultural relativism, or what she calls “moral isolationism,” more briefly and elegantly than one would have thought possible.

Midgley studied philosophy at Oxford during World War II, along with several other women who would become major philosophers: Elizabeth Anscombe, Philippa Foot, Mary Warnock, Iris Murdoch. People have often wondered how this happened — how, in a field so traditionally inhospitable to women, a number of brilliant ones happened to emerge at the same time and in the same place. Three years ago, in a letter to the Guardian, Midgley offered a fascinating sociological explanation:

As a survivor from the wartime group, I can only say: sorry, but the reason was indeed that there were fewer men about then. The trouble is not, of course, men as such – men have done good enough philosophy in the past. What is wrong is a particular style of philosophising that results from encouraging a lot of clever young men to compete in winning arguments. These people then quickly build up a set of games out of simple oppositions and elaborate them until, in the end, nobody else can see what they are talking about. All this can go on until somebody from outside the circle finally explodes it by moving the conversation on to a quite different topic, after which the games are forgotten. Hobbes did this in the 1640s. Moore and Russell did it in the 1890s. And actually I think the time is about ripe for somebody to do it today. By contrast, in those wartime classes – which were small – men (conscientious objectors etc) were present as well as women, but they weren't keen on arguing.

It was clear that we were all more interested in understanding this deeply puzzling world than in putting each other down. That was how Elizabeth Anscombe, Philippa Foot, Iris Murdoch, Mary Warnock and I, in our various ways, all came to think out alternatives to the brash, unreal style of philosophising – based essentially on logical positivism – that was current at the time. And these were the ideas that we later expressed in our own writings.

Given that so many people think of philosophy simply as arguing, and therefore as an intrinsically competitive activity, it might be rather surprising to hear Midgley claim that interesting and innovative philosophical thought emerged from her environment at Oxford because of the presence of a critical mass of people who “weren't keen on arguing” but were “more interested in understanding this deeply puzzling world” (emphasis mine).

In a recent follow-up to and expansion of that letter, Midgley quotes Colin McGinn describing his own philosophical education at Oxford, thirty years later, especially in classes with Gareth Evans: “Evans was a fierce debater, impatient and uncompromising; as I remarked, he skewered fools gladly (perhaps too gladly). The atmosphere in his class was intimidating and thrilling at the same time. As I was to learn later, this is fairly characteristic of philosophical debate. Philosophy and ego are never very far apart. Philosophical discussion can be ... a clashing of analytically honed intellects, with pulsing egos attached to them ... a kind of intellectual blood-sport, in which egos get bruised and buckled, even impaled.” To which Midgley replies, with her characteristic deceptively mild ironic tone:

Well, yes, so it can, but does it always have to? We can see that at wartime Oxford things turned out rather differently, because even bloodier tournaments and competitions elsewhere had made the normal attention to these games impossible. So, by some kind of chance, life had made a temporary break in the constant obsession with picking small faults in other people’s arguments – the continuing neglect of what were meant to be central issues – that had become habitual with the local philosophers. It had interrupted those distracting feuds which were then reigning, as in any competitive atmosphere feuds always do reign, preventing serious attempts at discussion, unless somebody deliberately controls them.

And Midgley doesn't shy away from stating bluntly what she thinks about the intellectual habits that Gareth Evans was teaching young Colin McGinn and others: “Such habits, while they prevail, simply stop people doing any real philosophy.”

So Midgley suggests that other habits be taught: “Co-operative rather than competitive thinking always needs to be widely taught. Feuds need to be put in the background, because all students equally have to learn a way of working that will be helpful to everybody rather than just promoting their own glory.” Of course, promoting your own glory is the usual path to academic success, and if that’s what you want, then your way is clear. But Midgley wants people who choose that path to know that if they don't learn co-operative thinking, “they can’t really do effective philosophy at all.” They won't make progress “in understanding this deeply puzzling world.”

I can't imagine any academic endeavor that wouldn't be improved, intellectually and morally, if its participants heeded Midgley’s counsel.

Monday, October 3, 2016

books on The Good Book

The Wall Street Journal commissioned this review but in the end didn't find space for it. Which is cool, because they paid me for it anyway. I offer it here gratis, for your reading pleasure. 

One of the first attempts to account for literature in terms of evolutionary psychology was provided by Steven Pinker, in his 1997 book How the Mind Works. There he suggested that “Fictional narratives supply us with a mental catalogue of the fatal conundrums we might face someday and the outcomes of strategies we could deploy in them.” Take Hamlet for example: “What are the options if I were to suspect that my uncle killed my father, took his position, and married my mother?”

This was perhaps a rather wooden and literal-minded example, and Pinker has received some hearty ribbing for perpetrating it, so one might expect that more recent entries in the genre have grown more sophisticated. But not so much.

The difficulties start with what ev-psych critics think a story is. They think a book is a kind of machine for solving problems of survival or flourishing, sort of like a wheel or a hammer except made with words rather than wood or rock. Thus Carel van Schaik and Kai Michel (hereafter S&M) in The Good Book of Human Nature: An Evolutionary Reading of the Bible: “We know how humans evolved over the last 2 million years and how and to what degree the prehistoric environment shaped the human psyche.... We can therefore reconstruct the problems the Bible was trying to solve.” Leaving aside the rather significant question of how much “we” actually do know about human prehistory and its role in forming our brains, one might still ask whether the Bible is a problem-solving device. But this is one of the governing assumptions of S&M's book and no alternatives to this assumption are ever considered.

The Good Book of Human Nature is governed by a few other assumptions too. One is that the turning point in human development was what Jared Diamond called “the worst mistake in the history of the human race”: trading in a hunter-gatherer life for a sedentary agricultural life. Another is that humans possess three “natures” that are related to this transition: first, “innate feelings, reactions, and preferences” that predate the transition; second, a cultural nature, based on strategies for dealing with the problems that arose from assuming a sedentary life; and third, “our rational side,” which is based on consciously held beliefs.

These assumptions in turn generate a theory of religion, which is basically that religion is a complex strategy for keeping the three natures in some degree of non-disabling relation to one another. And when, equipped with these assumptions and this theory, S&M turn their attention to the Bible — again, conceived as a problem-solving device — it turns out that the Bible confirms their theory at every point. Previous interpreters of the Bible, S&M note, have never come to any agreement about what it means, but they have discovered what it’s “really about,” what its “actual subject really is”: “the adoption of a sedentary way of life.” They do not say whether they expect to put an end to interpretative disagreement. Perhaps modesty forbade.

Thus armed, S&M get to work. The patriarchal narratives illustrate and teach responses to “the problems created by patriarchal families,” and formulate an “expansion strategy” in relation to said problems. The portions of Scripture known in Judaism as the Writings — Ketuvim, including the Psalms, Proverbs, Job and so on — collectively embody an IAR (immunization against refutation) strategy. The prophets, including the New Testament’s accounts of the life of Jesus? All about CREDs (credibility-enhancing displays).

If you like this sort of thing, this is the sort of thing you’ll like. To me, a little of it goes a very long way — and this Good Book offers 450 pages of it, which is like a two-finger piano exercise that lasts seven hours. My complaint is the opposite of that put forth by the Emperor in Amadeus: Too few notes, I say. Played too many times.

Is it really likely that this enormously divergent collection of writings we call the Bible has a single “subject”? That the heartfelt outpourings of the Psalms and the lamentations of Job amount to a “strategy”? Moreover, given that the conditions of production that S&M think relevant — the shift from hunter-gatherers to agriculturalists — happened all over the world, the account they give here should be the same were they working on any surviving writings from the same era. Which means that their book on Homer and Hesiod and Sappho would say mostly the same things this book says.

This is what happens when you confine your reading to a few highly general principles of “human history” and “human social development”: all the particularity, and therefore all the interest, drains from the world. S&M may have encountered some interesting residual phenomena from the sedentarization of homo sapiens. What they have not encountered is the Bible.

After all this, I turned with some relief to A. N. Wilson’s The Book of the People, not because I expected to agree with it, but because I expected it to involve something clearly recognizable to me as reading. But I did not get quite what I thought I would.

The material of Wilson’s book arises largely from conversations with a person known only by the single initial “L.” Wilson unaccountably extends this peculiar naming convention to everyone else in the book, including his wife and daughters and an English journalist (“H.”) living in Washington who once wrote for a number of London periodicals, smoked and drank a lot, and ultimately died of throat cancer. (Couldn't we at least call him Hitch?) But in the case of L. there seems to be good reason for this limited form of identification.

Wilson met L. when he was an undergraduate and she a graduate student at Oxford. Wilson very gradually discloses details about her over the course of the book: that she was very tall and wore thick glasses; that she was a Presbyterian; that she was a disciple of the great Canadian literary scholar Northrop Frye; that she had a lifelong history of mental illness, which may have contributed to an irregular work history and a preference for moving frequently; and, above all, that she planned to write a book about the Bible.

Wilson studied theology at one point, and considered entering the priesthood, but later became thoroughly disillusioned by Christianity and by religion in general, going so far as to write a pamphlet called Against Religion (1991). But almost as soon as he had written it he began to have reservations — “I am in fact one of life’s wishy-washies,” he confesses at one point — and eventually returned to belief, as L. had prophesied he would. L. told him that he could only come to the truth about God and the Bible after rejecting falsehoods about it, chief among those falsehoods being the two varieties of fundamentalism: theistic and atheistic.

As Wilson travels through life — and travels around the world: much of this book involves descriptions of apparently delightful journeys to romantic or historic places — he keeps thinking about the Bible, and when he does he also thinks of L. They correspond; they meet from time to time. Typically she has moved to another place and has added to her notes on her Bible book, though she never gets around to writing it. Eventually we learn that she has died. Wilson manages to get to her funeral, at an Anglo-Catholic convent in Wiltshire, and receives from the nuns there a packet containing her jottings. “It is from these notes that the present book is constructed. This is L.’s book as much as mine.”

So what does Wilson learn from L. about the Bible? It is hard to say. To give one example of his method: at one point he muses that L. must have in some sense patterned herself on Simone Weil, the great French mystic who died in 1943, which reminds him that Weil had been brought to Christian faith largely by her encounter with the poetry of the 17th-century Anglican George Herbert. This leads him to quote some of Herbert’s poems, and to note their debt to the Psalms, which in turn leads him to think about how the Psalms are used in the Gospels, which, in the last link of this particular literary chain, leads him to wonder whether the story of the Crucifixion is but poetry, a “literary construct.” A question which he does not answer: instead he turns to an account of L.’s funeral.

That’s how this book goes: it consists of a series of looping anecdotal flights that occasionally touch down and look at the Bible for a moment, before being spooked by something and lifting off again. There is at least as much about traveling to Ghent to see Van Eyck’s great altarpiece, and reading Gibbon’s Decline and Fall in Istanbul with Hagia Sophia looming portentously in the background, and meeting L. in coffeeshops, as about the Bible itself.

If there is any definitive lesson Wilson wishes us to learn from all this, it is the aforementioned folly of fundamentalism. At several points he recalls his own forays into the “historical Jesus” quests and dismisses them as pointless: none of the rock-hard evidence believers seek will ever be found, nor will unbelievers be able to find conclusive reason to dismiss the accounts the Gospels give of this peculiar and extraordinary figure.

At this point we should reflect on that literary device of using initials rather than names. More than once Wilson calls to our attention the view widely held among biblical scholars that the texts we have are composites of earlier and unknown texts: thus the “Documentary Hypothesis” about the Pentateuch, with its four authors (J, E, D, and P), and the posited source (in German Quelle) for the synoptic Gospels, Q. In light of all this we cannot be surprised when, late in the book, Wilson confesses that L. is herself a “composite figure,” one he “felt free to mythologize.”

Is he simply saying that we’re all just storytellers, that it’s mythologizing all the way down, no firm floor of fact to be discovered? If so, then while The Book of the People may in some sense live up to its subtitle — How to Read the Bible — it certainly does not tell us, any more than S&M did, why we should bother with this strange and often infuriating book.

I find it hard not to see both The Good Book of Human Nature and The Book of the People as complicated attempts to avoid encountering the Bible on its own terms, in light of its own claims for itself and for its God. I keep thinking that what Kierkegaard said about “Christian scholarship” is relevant to these contemporary versions of reading: “We would be sunk if it were not for Christian scholarship! Praise be to everyone who works to consolidate the reputation of Christian scholarship, which helps to restrain the New Testament, this confounded book which would one, two, three, run us all down if it got loose.”

Wednesday, August 10, 2016

post latency warning

Folks, posts will be few and far between here, for a while. I'm working hard on a book, and life in general is sufficiently complicated that I don't have many unused brain cells. I'm finding it healthier and saner to devote my online time to my tumblr, where I mainly post images that I enjoy contemplating. And you know, I've found some very cool stuff lately, so please check it out.


Monday, August 8, 2016

secrets of Apple (not) revealed

John Gruber and others are praising this Fast Company feature on Apple, but I don’t see why. It’s all like this:

The iPhone will continue to morph, in ways designed to ensure its place as the primary way we interact with and manage our technological experience for the foreseeable future. Apple will sell more devices, but its evolution will also enable it to explore new revenue opportunities. This is how Apple adapts. It expands its portfolio by building on the foundation laid by earlier products. That steady growth has made it broader and more powerful than any other consumer technology company.

Contentless abstraction. The iPhone will somehow “morph.” Apple will explore unnamed “revenue opportunities.” Also, Tim Cook thinks health care is really important, and Apple products need to work with networks Apple doesn’t own. Revelatory! Elsewhere in the article we learn that far more people are working on the Maps app than when it launched — that’s about as concrete as the article gets.

Everybody who writes about Apple ends up doing this: madly whipping the egg whites into big fluffy peaks. Because Apple never tells anyone anything.

Saturday, August 6, 2016

Self on digital

Will Self's meditation on digital imagery in the Guardian is a peculiar one — I'm not sure he quite knows what he wants to say. If I had to sum it up, I'd call it an essay suspended between two fears: first, that digital imagery in the end won't prove to be a perfectly seamless simulacrum of experience; second, that it will.

Joseph Brodsky once wrote, “should the truth about the world exist, it’s bound to be non-human”. Now we have the temerity to believe we can somehow perceive that non-human reality, although to do so would be a contradiction in terms. Over the next few years a new generation of television receivers will be rolled out. (We might call them “visual display units” since the formal distinction between computers and televisions is on the point of dissolving.) These machines are capable of displaying imagery at ultra-high definition; so-called “8K UHDTV” composes pictures employing 16 times the number of pixels of current high definition TV, which presents us – if we could only see it – with the bizarre spectacle of an image that exists in a higher resolution than our own eyes are capable of perceiving. Will this natural limitation on our capacity to technologically reproduce the world’s appearance lead our scientists and technologists to desist? I doubt it: the philosopher John Gray observes that: “In evolutionary prehistory, consciousness emerged as a side-effect of language. Today it is a byproduct of the media.”

Whatever the digital is and does, Self seems to be saying, it makes us. Thus his fascination with the distorted and decomposed images of Wiktor Forss:

We might compare these images to others that also decompose the digital, give it the qualities of the analog, but in a different way. See Robin Sloan on video style transfer — for instance, Raiders of the Lost Ark in the style of Gustave Doré: give it a watch. What for Self is a source of fear and anxiety could also be a source of playfulness and delight. I'm not sure we need to be quite so angst-ridden about the whole thing.

But in any case, how does Self take the argument, or rather the experience, beyond the sources he cites: Walter Benjamin, Jorge Luis Borges, Marshall McLuhan? (Especially Benjamin.) It must be hard for a writer to accept that other and earlier writers have already told his story better than he can tell it.

Friday, August 5, 2016

work in progress

Folks, as some of you know, I've been working for some time on a book about Christian intellectuals in the second world war. But I've set that aside for a while to work on a different project, one prompted by what I guess I'll call the exigencies of the current moment. It'll be called How to Think: A Guide for the Perplexed, and you can get more details about it here.

If you have any questions about it I'd be happy to answer them in the comments below.

Wednesday, August 3, 2016

word games

Ian Bogost reports on what some people think of as a big moment in the history of international capitalism:

At the close of trading this Monday, the top five global companies by market capitalization were all U.S. tech companies: Apple, Alphabet (formerly Google), Microsoft, Amazon, and Facebook.

Bloomberg, which reported on the apparent milestone, insisted that this “tech sweep” is unprecedented, even during the dot-com boom. Back in 2011, for example, Exxon and Shell held two of the top spots, and Apple was the only tech company in the top five. In 2006, Microsoft held the only slot—the others were in energy, banking, and manufacture. But things have changed. “Your new tech overlords,” Bloomberg christened the five.

And then Bogost zeroes in on what’s peculiar about this report:

But what makes a company a technology company, anyway? In their discussion of overlords, Bloomberg’s Shira Ovide and Rani Molla explain that “Non-tech titans like Exxon and GE have slipped a bit” in top valuations. Think about that claim for a minute, and reflect on its absurdity: Exxon uses enormous machinery to extract the remains of living creatures from geological antiquity from deep beneath the earth. Then it uses other enormous machinery to refine and distribute that material globally. For its part, GE makes almost everything — from light bulbs to medical imaging devices to wind turbines to locomotives to jet engines.

Isn’t it strange to call Facebook, a company that makes websites and mobile apps, a “technology” company, but to deny that moniker to firms that make diesel trains, oil-drilling platforms, and airplane engines?

I’m reminded here of a comment the great mathematician G. H. Hardy once made to C. P. Snow: “Have you noticed how the word ‘intellectual’ is used nowadays? There seems to be a new definition that doesn’t include Rutherford or Eddington or Dirac or Adrian or me. It does seem rather odd.”

As Bogost points out, the financial world uses “technology” to mean “computer technology.” But, he also argues, this is not only nonsensical, it’s misleading. Try depriving yourself of the word “technology” to describe those companies and things start looking a little different. “Almost all of Google’s and Facebook’s revenue, for example, comes from advertising; by that measure, there’s an argument that those firms are really Media industry companies, with a focus on Broadcasting and Entertainment.” Amazon is a retailer. Among those Big Five only Apple and Microsoft are computing companies, and they are so in rather different ways, since Microsoft makes most of its money from software, Apple from hardware.

Here’s a useful habit to cultivate: Notice whenever people are leaning hard on a particular word or phrase, making it do a lot of work. Then try to formulate what they’re saying without using that terminology. The results can be illuminating.


I’m still thinking about the myths and metaphors we live by, especially the myths and metaphors that have made modernity, and the world keeps giving me food for thought.

So speaking of food, recently I was listening to a BBC Radio show about food — I think it was this one — and one of the people interviewed was Ken Albala, a food historian at the University of the Pacific. Albala made the fascinating comment that in the twentieth century, much of our thinking about proper eating was shaped (bent, one might better say) by thinking of the human body as a kind of internal combustion engine. Just as in the 21st century we think of our brains as computers, in the 20th we thought of our bodies as automobiles.

But perhaps, given the dominance of digital computing in our world, including its imminent takeover of the world of automobiling, we might be seeing a shift in how we conceive of our bodies, from analog metaphors to digital ones. Isn’t that what Soylent is all about, and the fascination with smoothies? — Making nutrition digital! An amalgamated slurry of ingredients goes in one end; an amalgamated slurry of ingredients comes out the other end. Input/Output, baby. Simple as that.

UPDATE: My friend James Schirmer tells me about Huel — human fuel! Or, as pretty much everyone will think of it, "gruel but with an H."

"Please, sir, may I have some more?"

Saturday, July 30, 2016

my boilerplate letter to social media services


Someone has signed up for your service using my email address. (And, interestingly, using this name.) Please delete my email address from your database.

The email I got welcoming me to your service came from a no-reply address, so I had to go to your website and dig around until I found a contact form. I see that you require me to give you my name as well as my email address, so you're demanding that I tell you things about myself I’d rather you not know because you aren't smart enough, or don't care enough, to include one simple step in your sign-up process: Confirm that this is your email address.

This neglect is both discourteous and stupid. It’s discourteous because it effectively allows anyone who wants to spam someone else to use your service as a quick-and-easy tool for doing so. It’s stupid because then anyone so victimized will tag anything that comes from you as spam, which will eventually lead to your whole company being identified as a spammer. You’ll all be sitting around in the office saying, between chugs of Soylent, “We keep ending up in Gmail's spam filters, what’s up with that? Those idiots.”

So, again, please delete my email address from your database. And please stop being a rude dumbass, like all the other rude dumbasses to whom I have to send this message, more frequently than most people would believe.

Most sincerely yours,

Alan Jacobs

Wednesday, July 27, 2016

on expertise

One of the most common refrains in the aftermath of the Brexit vote was that the British electorate had acted irrationally in rejecting the advice and ignoring the predictions of economic experts. But economic experts have a truly remarkable history of getting things wrong. And it turns out, as Daniel Kahneman explains in Thinking, Fast and Slow, that there is a close causal relationship between being an expert and getting things wrong:

People who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options. Even in the region they knew best, experts were not significantly better than nonspecialists. Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” [Philip] Tetlock writes. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of The New York Times in ‘reading’ emerging situations.” The more famous the forecaster, Tetlock discovered, the more flamboyant the forecasts. “Experts in demand,” he writes, “were more overconfident than their colleagues who eked out existences far from the limelight.”

So in what sense would it be rational to trust the predictions of experts? We all need to think more about what conditions produce better predictions — and what skills and virtues produce better predictors. Tetlock and Gardner have certainly made a start on that:

The humility required for good judgment is not self-doubt – the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes. This is true for fools and geniuses alike. So it’s quite possible to think highly of yourself and be intellectually humble. In fact, this combination can be wonderfully fruitful. Intellectual humility compels the careful reflection necessary for good judgment; confidence in one’s abilities inspires determined action....

What's especially interesting here is the emphasis not on knowledge but on character — what's needed is a certain kind of person, and especially the kind of person who is humble.

Now ask yourself this: Where does our society teach, or even promote, humility?

Monday, July 25, 2016

some thoughts on the humanities

I can't say too much about this right now, but I have been working with some very smart people on a kind of State of the Humanities document — and yes, I know there are hundreds of those, but ours differs from the others by being really good.

In the process of drafting a document, I wrote a section that ... well, it got cut. I'm not bitter about that, I am not at all bitter about that. But I'm going to post it here. (It is, I should emphasize, just a draft and I may want to revise and expand it later.)

Nearly fifty years ago, George Steiner wrote of the peculiar character of intellectual life “in a post-condition” — the perceived sense of living in the vague aftermath of structures and beliefs that can never be restored. Such a condition is often proclaimed as liberating, but at least equally often it is experienced as (in Matthew Arnold's words) a suspension between two worlds, “one dead, / The other powerless to be born.” In the decades since Steiner wrote, humanistic study has been more and more completely understood as something we do from within such a post-condition.

But the humanities cannot be pursued and practiced with any integrity if these feelings of belatedness are merely accepted, without critical reflection and interrogation. In part this is because, whatever else humanistic study is, it is necessarily critical and inquiring in whatever subject it takes up; but also because humanistic study has always been and must always be willing to let the past speak to the present, as well as the present to the past. The work, the life, of the humanities may be summed up in an image from Kenneth Burke’s The Philosophy of Literary Form (1941):

Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to pause and tell you exactly what it is about. In fact, the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for a while, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally’s assistance. However, the discussion is interminable. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress.

It is from this ‘unending conversation’ that the materials of your drama arise.

It is in this spirit that scholars of the humanities need to take up the claims that our moment is characterized by what it has left behind — the conceptual schemes, or ideologies, or épistémès, to which it is thought to be “post.” In order to grasp the challenges and opportunities of the present moment, three facets of our post-condition need to be addressed: the postmodern, the posthuman, and the postsecular.

Among these terms, postmodern was the first-coined, and was so overused for decades that it now seems hoary with age. But it is the concept that lays the foundation for the others. To be postmodern, according to the most widely shared account, is to live in the aftermath of the collapse of a great narrative, one that began in the period that used to be linked with the Renaissance and Reformation but is now typically called the “early modern.” The early modern — we are told, with varying stresses and tones, by a host of books and thinkers from Foucault’s Les Mots et les choses (1966) to Stephen Greenblatt’s The Swerve (2011) — marks the first emergence of Man, the free-standing, liberated, sovereign subject, on a path of self-emancipation (from the bondage of superstition and myth) and self-enlightenment (out of the darkness that precedes the reign of Reason). Among the instruments that assisted this emancipation, none were more vital than the studia humanitatis — the humanities. The humanities simply are, in this account of modernity, the discourses and disciplines of Man. And therefore if that narrative has unraveled, if the age of Man is over — as Rimbaud wrote, “Car l’Homme a fini! l’Homme a joué tous les rôles!” (“For Man is finished! Man has played all the parts!”) — what becomes of the humanities?

This logic is still more explicit and forceful with regard to the posthuman. The idea of the posthuman assumes the collapse of the narrative of Man and adds to it an emphasis on the possibility of remaking human beings through digital and biological technologies leading ultimately to a transhuman mode of being. From within the logic of this technocratic regime the humanities will seem irrelevant, a quaint relic of an archaic world.

The postsecular is a variant on or extension of the postmodern in that it associates the narrative of Man with a “Whig interpretation of history,” an account of the past 500 years as a story of inevitable progressive emancipation from ancient, confining social structures, especially those associated with religion. But if the age of Man is over, can the story of inevitable secularization survive it? The suspicion that it cannot generates the rhetoric of the postsecular.

(In some respects the idea of the postsecular stands in manifest tension with the posthuman — but not in all. The idea that the posthuman experience can be in some sense a religious one thrives in science fiction and in discursive books such as Erik Davis’s TechGnosis [1998] and Ray Kurzweil’s The Age of Spiritual Machines [1999] — the “spiritual” for Kurzweil being “a feeling of transcending one’s everyday physical and mortal bounds to sense a deeper reality.”)

What must be noted about all of these master concepts is that they were articulated, developed, and promulgated primarily by scholars in the humanities, employing the traditional methods of humanistic learning. (Even Kurzweil, with his pronounced scientistic bent, borrows the language of his aspirations — especially the language of “transcendence” — from humanistic study.) The notion that any of these developments renders humanistic study obsolete is therefore odd if not absurd — as though the humanities exist only to erase themselves, like a purely intellectual version of Claude Shannon’s Ultimate Machine, whose only function is, once it's turned on, to turn itself off.

But there is another and better way to tell this story.

It is noteworthy that, according to the standard narrative of the emergence of modernity, the idea of Man was made possible by the employment of a sophisticated set of philological tools in a passionate quest to understand the alien and recover the lost. The early humanists read the classical writers not as people exactly like them — indeed, what made the classical writers different was precisely what made them appealing as guides and models — but nevertheless as people, people from whom we can learn because there is a common human lifeworld and a set of shared experiences. The tools and methods of the humanities, and more important the very spirit of the humanities, collaborate to reveal Burke’s “unending conversation”: the materials of my own drama arise only through my dialogical encounter with others, those from the past whose voices I can discover and those from the future whose voices I imagine. Discovery and imagination are, then, the twin engines of humanistic learning, humanistic aspiration. It was in just this spirit that, near the end of his long life, the Russian polymath Mikhail Bakhtin wrote in a notebook,

There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future).... At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue’s subsequent development along the way they are recalled and invigorated in new form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival.

The idea that underlies Bakhtin’s hopefulness, that makes discovery and imagination essential to the work of the humanities, is, in brief, Terence’s famous statement, clichéd though it may have become: Homo sum, humani nihil a me alienum puto. To say that nothing human is alien to me is not to say that everything human is fully accessible to me, fully comprehensible; it is not to erase or even to minimize cultural, racial, or sexual difference; but it is to say that nothing human stands wholly outside my ability to comprehend — if I am willing to work, in a disciplined and informed way, at the comprehending. Terence’s sentence is best taken not as a claim of achievement but as an essential aspiration; and it is the distinctive gift of the humanities to make that aspiration possible.

It is in this spirit that those claims which, as we have noted, emerged from humanistic learning must be evaluated: that our age is postmodern, posthuman, postsecular. All the resources and practices of the humanities — reflective and critical, inquiring and skeptical, methodologically patient and inexplicably intuitive — should be brought to bear on these claims, and not with ironic detachment, but with the earnest conviction that our answers matter: they are, like those master concepts themselves, both diagnostic and prescriptive: they matter equally for our understanding of the past and our anticipating of the future.

Tuesday, July 19, 2016

The World Beyond Kant's Head

For a project I’m working on, and will be able to say something about later, I re-read Matthew Crawford’s The World Beyond Your Head, and I have to say: It’s a really superb book. I read it when it first came out, but I was knee-deep in writing at the time and I don’t think I absorbed it as fully as I should have. I quote Crawford in support of several of the key points I make in my theses on technology, but his development of those points is deeply thoughtful and provocative, even more than I had realized. If you haven’t read it, you should.

But there’s something about the book I want to question. It concerns philosophy, and the history of philosophy.

In relation to the kinds of cultural issues Crawford deals with here -- issues related to technology, economics, social practices, and selfhood -- there are two ways to make use of the philosophy of the past. The first involves illumination: one argues that reading Kant and Hegel (Crawford’s two key philosophers) clarifies our situation, provides alternative ways of conceptualizing and responding to it, and so on. The other way involves causation: one argues that we’re where we are today because of the triumphal dissemination of, for instance, Kantian ideas throughout our culture.

Crawford does some of both, but in many respects the chief argument of his book is based on a major causal assumption: that much of what’s wrong with our culture, and with our models of selfhood, arises from the success of certain of Kant’s ideas. I say “assumption” because I don’t think that Crawford ever actually argues the point, and I think he doesn’t argue the point because he doesn’t clearly distinguish between illumination and causation. That is, if I’ve read him rightly, he shows that a study of Kant makes sense of many contemporary phenomena and implicitly concludes that Kant’s ideas therefore are likely to have played a causal role in the rise of those phenomena.

I just don’t buy it, any more than I buy the structurally identical claim that modern individualism and atomization all derive from the late-medieval nominalists. I don’t buy those claims because I have never seen any evidence for them. I am not saying that those claims are wrong; I just want to know how it happens: how you get from extremely complex and arcane philosophical texts that only a handful of people in history have ever been able to read to world-shaping power. I don’t see how it’s even possible.

One of Auden’s most famous lines is: “Poetry makes nothing happen.” He was repeatedly insistent on this point. In several articles and interviews he commented that the social and political history of Europe would be precisely the same if Dante, Shakespeare, and Mozart had never lived. I suspect that this is true, and that it’s also true of philosophy. I think that we would have the techno-capitalist society we have if Duns Scotus, William of Ockham, Immanuel Kant, and G.W.F. Hegel had never lived. If you disagree with me, please show me the path which those philosophical ideas followed to become so world-shapingly dominant. I am not too old to learn.

Sunday, July 17, 2016

some friendly advice about online writing and reading

Dennis Cooper, a writer and artist, is a pretty unsavory character, so in an ideal world I wouldn't choose him as a poster boy for the point I want to make, but ... recently Google deleted his account, and along with it, 14 years of blog posts. And they are quite within their rights to do so.

People, if you blog, no matter on what platform, do not write in the online CMS that your platform provides. Instead, write in a text editor or, if you absolutely must, a word processing app, save it to your very own hard drive, and then copy and paste into the CMS. Yes, it’s an extra step. It’s also absolutely worth it, because it means you always have a plain-text backup of your blog posts.

You should of course then back up your hard drive in at least two different ways (I have an external drive and Dropbox).

Why write in a text editor instead of a word processing app? Because when you copy from the latter, especially MS Word, you tend to pick up a lot of unnecessary formatting cruft that can make your blog post look different than you want it to. I write in BBEdit using Markdown, and converting from Markdown to HTML yields exceptionally clean copy. If you’d like to try it without installing scripts, you can write a little Markdown and convert it to HTML by using this web dingus — there are several others like it.
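To see why the conversion yields such clean copy, it helps to remember that Markdown-to-HTML is a simple, deterministic text transformation. The toy converter below is my own illustrative sketch in Python (standard library only), handling just ATX headings, bold, and italics; a real converter, like the one behind that web dingus, also handles links, lists, code blocks, and HTML escaping.

```python
import re

def md_to_html(text: str) -> str:
    """Convert a tiny subset of Markdown (ATX headings, bold, italics)
    to HTML. A toy sketch, not a replacement for a real converter."""
    parts = []
    # Blank lines separate blocks (paragraphs or headings).
    for block in re.split(r"\n\s*\n", text.strip()):
        block = block.strip()
        heading = re.match(r"(#{1,6})\s+(.*)", block)
        if heading:
            level = len(heading.group(1))
            parts.append(f"<h{level}>{heading.group(2)}</h{level}>")
            continue
        # Run **bold** before *italic* so the double asterisks win.
        block = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", block)
        block = re.sub(r"\*(.+?)\*", r"<em>\1</em>", block)
        parts.append(f"<p>{block}</p>")
    return "\n".join(parts)

print(md_to_html("## A post\n\nSome **clean** copy."))
```

Note that the output contains nothing but the tags you asked for, which is exactly the contrast with pasting from a word processor: no font declarations, no style attributes, no cruft.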

While I’m giving advice about writing on the web, why not some about reading as well? Too many people rely on social-media sites like Facebook and Twitter to get their news, which means that what they get is unpredictably variable, depending on what other people link to and how Facebook happens to be tweaking its algorithms on any given day. Apple News is similarly uncertain. And I fundamentally dislike the idea of reading what other people, especially other people who work for mega-corporations, want me to see.

Try using an RSS reader instead. RSS remains the foundation of the open web, and the overwhelming majority of useful websites have RSS feeds. There are several web-based RSS readers out there — I think the best are Feedly and Newsblur — and when you build up a roster of sites you profit from reading, you can export that roster as an OPML file and use it with a different service. And if you don't like those web interfaces you can get a feed-reading app that works with those (and other) services: I’m a big fan of Reeder, though my introduction to RSS was NetNewsWire, which I started using when it was little more than a gleam in Brent Simmons’s eye.
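Part of what makes RSS so durable is that a feed is nothing more than a small XML document, which any language can pick apart with standard tools. Here is a minimal sketch in Python (standard library only) that pulls the headlines out of an RSS 2.0 feed; the feed content is invented for illustration, and real readers of course do much more (fetching, deduplication, read-state tracking).

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed, for illustration only.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Text Patterns</title>
    <item><title>on expertise</title><link>https://example.com/expertise</link></item>
    <item><title>Green Earth</title><link>https://example.com/green-earth</link></item>
  </channel>
</rss>"""

def headlines(feed_xml: str) -> list[str]:
    """Return the item titles from an RSS 2.0 feed document."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(headlines(SAMPLE_FEED))  # ['on expertise', 'Green Earth']
```

The same openness is what makes OPML export possible: your subscription roster is itself just an XML file, so no service can hold it hostage.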

So, the upshot: in online writing and reading alike, look for independence and sustainability. Your life will be better for it.

Monday, July 11, 2016

Green Earth

Another Kim Stanley Robinson novel, and another set of profoundly mixed feelings. Green Earth, which was published last year, is a condensation into a single volume of three novels that appeared in the middle of the last decade and are generally known as the Science in the Capital trilogy.

Robinson is an extraordinarily intelligent writer with a wide-ranging mind, and no one writes about either scientific thinking or the technological implementation of science with the clarity and energy that he evidences. Moreover, he has an especially well thought-out, coherent and consistent understanding of the world — what some people (not me) call a worldview, what used to be called "a philosophy." But that philosophy is also what gets him into trouble, because he has overmuch trust in people who share it and insufficient understanding of, or even curiosity about, people who don’t.

Robinson is a technocratic liberal universalist humanitarian (TLUH), and though Green Earth is in many ways a fascinating novel, an exceptionally well-told story, it is also to a somewhat comical degree a TLUH wish-fulfillment fantasy. I can illustrate this through a brief description of one of the novel's characters: Phil Chase, a senator from Robinson's native California whose internationalist bent is so strong that his many fans call him the World's Senator, who rises to become President, whose integrity is absolute, who owes nothing to any special-interest groups, who listens to distinguished scientists and acts on their recommendations, who even marries a distinguished scientist and — this is the cherry on the sundae — has his marriage blessed by the Dalai Lama. TLUH to the max.

In Green Earth Robinson's scientists tend to be quite literally technocrats, in that they work for, or have close ties to, government agencies, which they influence for good. Only one of them does anything wrong in the course of the book, and that — steering a National Science Foundation panel away from supporting a proposal that only some of them like anyway — is scarcely more than a peccadillo. And that character spends the rest of the book being so uniformly and exceptionally virtuous that, it seems to me, Robinson encourages us to forget that fault.

Robinson's scientists are invariably excellent at what they do, honest, absolutely and invariably committed to the integrity of scientific procedure, kind to and supportive of one another, hospitable to strangers, deeply concerned about climate change and the environment generally. They eat healthily, get plenty of exercise, and drink alcohol on festive occasions but not habitually to excess. They are also all Democrats.

Meanwhile, we see nothing of the inner lives of Republicans, but we learn that they are rigid, without compassion, owned by the big oil companies, practiced in the blackest arts of espionage against law-abiding citizens, and associated in not-minutely-specified ways with weirdo fundamentalist Christian groups who believe in the Rapture.

Green Earth is really good when Robinson describes the effects of accelerated climate change and the various means by which it might be addressed. And I liked his little gang of Virtuous Hero Scientists and wanted good things to happen to them. But the politically Manichaean character of the book gets really tiresome when extended over several hundred pages. Robinson is just so relentless in his flattery of his likely readers' presuppositions — and his own. (The Mars Trilogy is so successful in part because all its characters are scientists, and if they were as uniformly virtuous as the scientists in Green Earth there would be no story.)

It's fascinating to me that Robinson is so extreme in his caricatures, because in some cases he's quite aware of their dangers. Given what I've just reported, it wouldn't be surprising if Robinson were attracted to Neil deGrasse Tyson's imaginary republic of Rationalia, but he's too smart for that. At one point the wonderfully virtuous scientist I mentioned earlier hears a lecture by a Tibetan Buddhist who says, "An excess of reason is itself a form of madness" — a quote from an ancient sage, and an echo of G.K. Chesterton to boot, though Robinson may not know that. Our scientist instantly rejects this idea — but almost immediately thereafter starts thinking about it and can’t let the notion go; it sets him reading, and eventually he comes across the work of Antonio Damasio, who has shown pretty convincingly that people who operate solely on the basis of "reason" (as usually defined) make poorer decisions than those whose emotions are in good working order and play a part in decision-making.

So Robinson is able to give a subtle and nuanced account of how people think — how scientists think, because one of the subtler themes of the book is the way that scientists think best when their emotions are engaged, especially the emotion of love. Those who love well think well. (But St. Augustine told us that long ago, didn't he?)

Given Robinson's proper emphasis on the role of the whole person in scientific thinking, you'd expect him to have a stronger awareness of the dangers of thinking according to the logic of ingroups and outgroups. But no: in this novel the ingroups are all the way in and the outgroups all the way out. Thus my frustration.

Still, don't take these complaints as reasons not to read Green Earth or anything else by Robinson. I still have more of his work to read and I'm looking forward to it. He always tells a good story and I always learn a lot from reading his books. And I can't say that about very many writers. Anyway, Adam Roberts convinced me that I had under-read one of Robinson's other books, so maybe that'll happen again....