Text Patterns - by Alan Jacobs

Monday, October 31, 2016

social media, emotion, and politics

I’m not going to enter this contest — I should leave it to people who need the money and the publicity more than I do — but if I were to answer the contest question, Are digital technologies making politics impossible?, I would say something along these lines:

No, digital technologies are not making politics impossible, but they have already radically changed politics in ways that none of the existing political structures have so far been able to adapt to. Digital technologies — more particularly, digital social media — have had multiple political effects, but the primary one has been to keep the people who use them in a constant state of extreme emotional stimulation. As Bianca Bosker has explained in a recent story in The Atlantic, Tristan Harris wants software engineers to take a kind of Hippocratic Oath that they will cease to take advantage of their users’ susceptibility to emotional manipulation, will cease their “race to the bottom of the brain stem,” but I cannot imagine a more pointless campaign. None of the current social media companies will act in a way that could take eyeballs away from their apps, because eyeballs are what they monetize; and none of their would-be successors will do it either, for the same reason.

Likewise, no more than a tiny percentage of users will develop any self-discipline or alter their habits in any way in response to being manipulated by their social-media apps. The addiction is too strong and too universally shared.

Some of those who feel silenced, marginalized, and powerless — especially those who also believe that they have declined as a cultural force, or have had rightful power snatched from them — embrace the addiction to social media most enthusiastically because they get the most out of it. They come to believe that their voices are loud and powerful, or at least that they can strike a blow against their enemies. Thus they can devote a great deal of otherwise empty time to harassing and abusing anyone who doesn't live up to their expectations — which can mean not just evident political enemies but people who they believe have betrayed their side. Consider the stories that have just appeared by David French and his wife Nancy French, explaining what it’s like to be on the receiving end of such abuse.

We can perhaps understand this behavior better by zeroing in on one particular kind of abuse. The Anti-Defamation League has just released a report which shows a marked upsurge in anti-Semitic social media activity in recent years, but the report also notes that of “2.6 million tweets containing language frequently found in anti-Semitic speech between August 2015 – July 2016,” 68% came from 1,600 Twitter accounts. Twitter has suspended about 20% of these accounts, but their owners can just create as many more as they want.

A look at his Twitter mentions suggests that David French gets some anti-Semitic abuse, even though he is not Jewish. (I do too, simply because Jacobs is a common Jewish name.) But the same people who produce a lot of anti-Semitic tweets are nasty towards many other racial and social groups, as well as those whom they think politically naïve or treacherous. They are not specialists in harassment, but rather generalists; and as the numbers above indicate, they are very, very active.
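For a rough sense of what “very, very active” means, here is a back-of-the-envelope sketch using only the figures quoted above; the 365-day window is my approximation of the August 2015 to July 2016 period in the ADL report.

    # A minimal sketch, assuming the figures quoted from the ADL report above
    # and approximating the August 2015 - July 2016 window as 365 days.
    total_tweets = 2_600_000   # tweets with language frequently found in anti-Semitic speech
    share_from_core = 0.68     # share attributed to the small cluster of accounts
    core_accounts = 1_600
    days = 365

    tweets_from_core = total_tweets * share_from_core    # ~1,768,000 tweets
    per_account = tweets_from_core / core_accounts       # ~1,105 per account
    per_day = per_account / days                         # ~3 per day

    print(f"{tweets_from_core:,.0f} tweets from {core_accounts:,} accounts")
    print(f"about {per_account:,.0f} per account over the year, roughly {per_day:.1f} per day")

On those numbers, each of the accounts in that cluster averaged something like a thousand such tweets over the year, roughly three a day, every day.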

On websites where comments sections are still provided, such abusers will probably dominate the space; but precisely because of their behavior, comments have been disappearing from major websites for some years now. Facebook’s code architecture — its real-name policy, its reciprocity (I friend you, you friend me), and a few other elements — tends to limit, though certainly not eliminate, abuse there.

Which leaves Twitter.

In early 2015 Twitter CEO Dick Costolo wrote, “We suck at dealing with abuse and trolls on the platform and we've sucked at it for years. It's no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day.” Since then nothing has changed. Twitter simply is not seriously interested in protecting its users from abuse, and unless it is sold to a company that does care, it will remain an environment perfectly calibrated for incubating hatred.

It’s also true, though, that for people who want to communicate with an audience beyond their family and friends, Twitter remains by far the best option. So people like the Frenches won't want to leave it altogether until a better alternative emerges, and it’s hard to imagine that happening. We’d need everyone to move to a rival at once, or in very short order. In social media, incumbency has great power.

So, to return to the question with which we began, what are the political implications of this situation? Primarily this: that a very small number of angry and hate-filled people are empowered by Twitter’s code architecture and membership policies to spread their message of anger and hatred to people who have no reliable means of ignoring it, which leaves the entire Twitter-using world in a more-or-less constant state of emotional agitation. That the emotions generated are usually negative doesn't turn people away: as I have commented before, “It is impossible to understand social media without grasping that, as Craig Raine has said, ‘All emotion is pleasurable.’” And the pleasure of such emotion will inevitably outweigh any desire to have rational conversations. There’s a reason why Donald Trump and his most enthusiastic supporters like Twitter so much. As Richard Spencer, a pro-Trump alt-right white nationalist, recently said, "It's not so much about policy – it's more about the emotions that [Trump] evokes. And emotions are more important than facts."

To Twitter's power to stimulate emotion may be added the more familiar one of intermittent reinforcement, with this result: The emotional extremity of Twitter will continue to draw people like moths to a flame, even when that extremity makes it almost impossible for people to engage rationally and patiently with one another. And the communicative habits strengthened and intensified by Twitter will continue to bleed into the larger world of political discourse, as we have already seen in the Presidential debates of this season. (Just think about how fluttered everyone became when one person, Ken Bone, asked a question that presumed thoughtful and informed consideration of a complex political issue rather than posturing and grandstanding. It was like a message in a bottle, from another time and place.)

Unless Twitter makes significant changes to its code and policies — and as I’ve suggested, I think that highly unlikely — the “race to the bottom of the brain stem” will continue, and our political culture will become more purely emotional. Indeed, that process has gone far enough that even when Twitter gives way to a different platform, that platform will almost certainly intensify emotions more effectively than Twitter does. This is the communicative environment in which politics will, for the foreseeable future, be performed.

As I said at the outset of this post, “digital technologies ... have already radically changed politics in ways that none of the existing political structures have so far been able to adapt to.” In the coming years the existing political structures will certainly take adaptive steps. None of them are likely to console those who think facts ought to be more important than emotions, but that does not mean that we are wholly without hope. In the media/politics environment that is on its way, and to a considerable extent already here, the people who will achieve the greatest public good will be those who learn to encourage the better, the nobler emotions. All emotion is pleasurable, but there are only some emotions that we do well to take pleasure in. And those who learn to feel as they should can eventually be brought to understand the value of thought.

a technological tale for Reformation Day

What I have been calling the technological history of modernity is in part a story about the power of recognizing how certain technologies work — and the penalties imposed on those who fail to grasp their logic.

In his early book Renaissance Self-Fashioning, Stephen Greenblatt tells a story:

In 1531 a lawyer named James Bainham, son of a Gloucestershire knight, was accused of heresy, arrested, and taken from the Middle Temple to Lord Chancellor More’s house in Chelsea, where he was detained while More tried to persuade him to abjure his Protestant beliefs. The failure of this attempt called forth sterner measures until, after torture and the threat of execution, Bainham finally did abjure, paying a £20 fine to the king and standing as a penitent before the priest during the Sunday sermon at Paul’s Cross. But scarcely a month after his release, according to John Foxe, Bainham regretted his abjuration “and was never quiet in mind and conscience until the time he had uttered his fall to all his acquaintance, and asked God and all the world forgiveness, before the congregation in those days, in a warehouse in Bow lane.” On the following Sunday, Bainham came openly to Saint Austin’s church, stood up “with the New Testament in his hand in English and the Obedience of a Christian Man [by Tyndale] in his bosom,” and, weeping, declared to the congregants that he had denied God. He prayed the people to forgive him, exhorted them to beware of his own weakness and to die rather than to do as he had done, “for he would not feel such a hell again as he did feel, for all the world’s good.” He was, of course, signing his own death warrant, which he sealed with letters to the bishop of London and others. He was promptly arrested and, after reexamination, burned at the stake as a relapsed heretic.

When Bainham was first interrogated by More, he told the Lord Chancellor that “The truth of holy Scripture was never, these eight hundred years past, so plainly and expressly declared unto the people, as it hath been within these six years” — the six years since the printing of Tyndale’s New Testament in 1525.

The very presence of this book was, to ecclesial traditionalists, clearly the essential problem. So back in 1529 Thomas More and his friend Cuthbert Tunstall, then Bishop of London, had crossed the English Channel to Antwerp, where Tyndale’s translation was printed. (Its printing and sale were of course forbidden in England.) More and Tunstall searched high and low, bought every copy of the translation they could find, and burned them all in a great bonfire.

Tyndale gladly received this as a boon: he had already come to recognize that his first version of the New Testament had many errors, and he used the money received from More and Tunstall to hasten his work on completing and publishing a revision, which duly appeared in 1534.

Friday, October 28, 2016

looking for climate-change fiction in all the wrong places

Amitav Ghosh asks, “Where Is the Fiction About Climate Change?”

When I try to think of writers whose imaginative work has communicated a more specific sense of the accelerating changes in our environment, I find myself at a loss; of literary novelists writing in English only a handful of names come to mind: Margaret Atwood, Kurt Vonnegut Jr, Barbara Kingsolver, Doris Lessing, Cormac McCarthy, Ian McEwan and T Coraghessan Boyle. No doubt many other names could be added to this list, but even if it were to be expanded to 100, or more, it would remain true, I think, that the literary mainstream, even as it has become more engagé on many fronts, remains just as unaware of the crisis on our doorstep as the population at large.

Perhaps there is a real lack of engagement here — but there may also be a lack of imagination on Ghosh’s part. Note how careful he is to specify that he is talking about “literary novelists,” and the “literary mainstream.” Presumably that rules out science fiction writers like Kim Stanley Robinson, whose Science in the Capital series explores the causes and possible consequences of climate change with great depth and intelligence. And Robinson is just one among many writing what some call cli-fi.

Ghosh has worked himself into an unnecessary bind. Since the consequences of climate change are not yet as dramatic as they are almost certain to become, those matters need to be explored by writers who produce speculative fiction. If you want that to happen, and yet you ignore fiction that you deem outside the “literary mainstream,” then sure, you’ll find a gap in imaginative coverage of the issue. But expand your sense of the “literary” just a bit and the picture will look quite different.

Wednesday, October 26, 2016

KSR's Mars: a stab at a course description

Posting continues to be light and rare around here because I’m still slaving away at two books — one and two — but I am not a machine, so I spend some time each day reading for fun. And the other day I was possessed by an unexpected, sudden, and irresistible urge to re-read Kim Stanley Robinson’s Mars trilogy.

I’m about halfway through Red Mars now and it is just thrilling to be back in this fictional world again. Does KSR put a foot wrong in the whole 2000 pages of the trilogy? I think not. It’s simply a masterpiece.

But in addition to the pure enjoyment of it, I find myself mulling over a possibility: What about teaching an interdisciplinary course built around these books? It would be a way to explore, among other things,

  • the distinctive social value of SF
  • environmental politics
  • the economics and politics of colonialism
  • the future prospects of internationalism
  • the nature of science and the Oppenheimer Principle
  • aesthetics and human perceptions of value
  • geology and areology
  • robotics and automation in manufacturing
  • designing politics from Square One (or what looks to some like Square One)

And that’s just a short list! So, friends, I have three questions.

First, does that sound like a useful and/or fun course?

Second, have I neglected any key themes in the trilogy?

And third, what might be some ancillary texts to assign? For instance, to help us think about the ways that SF enables political thought, I might want at least some students to read Ursula K. Le Guin’s The Dispossessed; on the possibilities of Martian colonization I might suggest Robert Zubrin’s The Case for Mars; on the questions of what science is and what its ultimate values are, I might assign Ursula Franklin’s The Real World of Technology.

But I’m not sure what I might assign on the hard-science side. I’d love to find a book on robotics that is technically detailed but has some of the panache of Neal Stephenson’s famous report on undersea cables and international communication, “Mother Earth Mother Board,” but that might be too much to ask for. I’d love to find an introduction to geology that had some of the clarity and power of John McPhee’s Annals of the Former World but at one-tenth the length. Any help would be much appreciated.

on Social Justice U

Jonathan Haidt explains “Why Universities Must Choose One Telos: Truth or Social Justice.” When my friend Chad Wellmon (on Twitter) questioned Haidt’s dichotomy, I agreed that there is a problem. After all, people who are promoting social justice in the university think that their beliefs are true!

But I also think Haidt has a point — it just needs to be rephrased. The social-justice faction in the university believes that the most fundamental questions about what justice is have already been answered, and require no further reflection or investigation. (And from this follows the belief that questioning The Answers, and still worse suggesting other answers, is, as Haidt says, a kind of blasphemy: At Social Justice University, “there are many blasphemy laws – there are ideas, theories, facts, and authors that one cannot use. This makes it difficult to do good social science about politically valenced topics. Social science is hard enough as it is, with big complicated problems resulting from many interacting causal forces. But at SJU, many of the most powerful tools are simply banned.”)

What needs to happen, then, I believe, is for “SJU” to be honest about its own intellectual constitution, to say openly, In this university, we are not concerned to follow the model of many academic enterprises and inquire into the nature and forms of justice. We believe we already know what those are. Therefore our questions will involve how best to implement the understanding we have all already agreed to before beginning our work.

And you know, if SJU is a private institution, I don't think they would be simply wrong to do this. After all, I have spent my teaching career in Christian institutions, where there are also certain foundational assumptions at work — which, indeed, is true even at Haidt’s Truth U. If Haidt really thinks that there is no blasphemy at Truth U he is sorely mistaken. (Thought experiment: a professor grades her students by seeking the wisdom of the I Ching.) Every educational institution either implicitly or explicitly sets certain boundaries to its pursuits, that is, agrees to set certain questions aside in order to focus on others. And what has long made American higher education so distinctive is its willingness to let a thousand institutional flowers, of very different species, bloom.

The question I would have for proponents of SJU is: Do you embrace the ideological diversity that has been a hallmark of the American system? Are you willing to allow SJU to do its work alongside Truth U and Christian U, and argue for all of those institutional types to be treated equally under the law? Or, rather, do you want every college and university to be dedicated to social justice as you understand it — for there to be no institutions where the very definition of justice is open to question and debate?

Friday, October 21, 2016

Kathleen Fitzpatrick and "generous thinking"

As I’ve mentioned before, I have been working with colleagues for some time now on a document about the future of the humanities, both within and without the university — more about that in due course. And some of my recent work has been devoted to this constellation of issues: see this review-essay in Books and Culture and this longish reflection in National Affairs.

So in light of all that I’m delighted to see that the estimable Kathleen Fitzpatrick is engaged in a new project on “Generous Thinking” in the university: see the first two installments here and here. I am really excited about the direction Kathleen is taking here and I hope to be a useful interlocutor for her — if I can get these dang books finished.

Wednesday, October 19, 2016

a hidden musical culture

If you don't subscribe to Robin Sloan's P R I M E S newsletter, you should. In the most recent edition he talks about this video:



Robin says this video is "wonderful for its evolving sound and also for its inscrutability. I mean, how is he making those noises? How has he learned to play that monstrous instrument?? It's amazing."

Leave that video playing in a tab. It's really nice. It's also quite strange, because it exists. I get the sense, observing this hobby from afar, that most of these slow-building basement performances are ephemeral or, if recorded, never shared. This is a music culture almost totally orthogonal to iTunes and Spotify and even SoundCloud.

I love the vision of a modular synthesizer enthusiast -- ideally 51 years old, a tax accountant with two children -- padding down into the basement where the music machine waits to spend an hour pulling cables and twirling knobs, listening and tweaking, building an analog soundscape, thick and warm like a blanket, and, in that moment, theirs alone.

except for all the others

Farhad Manjoo thinks the Clinton campaign email scandal proves that email in general needs to be ditched:

Email sometimes tricks us into feeling efficient, but it rarely is. Because it’s asynchronous, and because there are no limits on space and time, it often leads to endless, pointless ruminations. If they had ditched email and just held a 15-minute meeting, members of the campaign could have hashed out the foreign-agent decision more quickly in private.

In other words, limits often help. Get on the phone, make a decision, ditch your inbox. The world will be better off for it.

Sounds great — but what if “members of the campaign” weren’t all in the same place? I guess then Manjoo would say “get on the phone” — but have you ever tried arranging a conference call? If more than three people are involved it’s next to impossible. Talk about “inefficient”!

Also, when people hold a conference call to make a significant decision, it’s typically recorded so there will be a record of what they’ve decided, which is necessary in order to avoid the “that’s not how I remember it” problem — but that means that you have something that can be stolen later by nefarious parties.

Manjoo recommends Slack or HipChat, which can work, but only when the conversation is among people wholly within a given organization.

Email drives me crazy the way it drives everyone else crazy, but I can set aside certain times of the day in which to use it. If I had to have my work interrupted eleven times a day for phone conferences, at someone else’s convenience, or had to have a Slack window open and pinging merrily away all day long, I’d never get anything done. Churchill’s famous comment about democracy — “the worst form of government, except for all those others that have been tried” — might be adapted here: email is the worst form of business communication, except for all the others that Manjoo recommends.

physicians, patients, and intellectual triage

Please, please read this fascinating essay by Maria Bustillos about her daughter’s diagnosis of MS — and how doctors can become blind to some highly promising forms of treatment. The problem? The belief, drilled into doctors and scientists at every stage of their education, that double-blind randomized tests are not just the gold standard for scientific evidence but the only evidence worth consulting. One of the consequences of that belief: that diet-based treatments never get serious consideration, because they can’t be tested blindly. People always know what they’re eating.

See this passage, which refers to Carmen’s doctor as “Dr. F.”:

In any case, the question of absolute “proof” is of no interest to me. We are in no position to wait for absolute anything. We need help now. And incontrovertibly, there is evidence — not proof, but real evidence, published in a score of leading academic journals — that animal fat makes MS patients worse. It is very clearly something to avoid. In my view, which is the view of a highly motivated layperson whose livelihood is, coincidentally, based in doing careful research, there is not the remotest question that impaired lipid metabolism plays a significant role in the progression of MS. Nobody understands exactly how it works, just yet, but if I were a neurologist myself, I would certainly be telling my patients, listen, you! — just in case, now. Please stick to a vegan plus fish diet, given that the cost-benefit ratio is so incredibly lopsided in your favor. There’s no risk to you. The potential benefit is that you stay well.

But Dr. F, who is a scientist, and moreover one charged with looking after people with MS, is advising not only against dieting, but is literally telling someone (Carmen!) who has MS, yes, if you like butter, you should “enjoy” it, even though there is real live evidence that it might permanently harm you, but not proof, you know.

In this way, Dr. F. illustrates exactly what has gone wrong with so much of American medicine, and indeed with American society in general. I know that sounds ridiculous, like hyperbole, but I mean it quite literally. Dr. F. made no attempt to learn about or explain how, if saturated fat is not harmful, Swank, and now Jelinek, could have arrived at their conclusions, though she cannot prove that saturated fat isn’t harmful to someone with MS. The deficiency in Dr. F.’s reasoning is not scientific: it’s more like a rhetorical deficiency, of trading a degraded notion of “proof” for meaning, with potentially catastrophic results. Dr. F. may be a good scientist, but she is a terrible logician.

I might say, rather than “terrible logician,” Dr. F. is someone who is a poor reasoner — who has made herself a poor reasoner by dividing the world into things that are proven and all other things, and then assuming that there’s no way to distinguish among all those “other things.”

You can see how this happens: the field of medicine is moving so quickly, with new papers coming out every day (and being retracted every other day), that Dr. F. is just doing intellectual triage. The firehose of information becomes manageable if you just stick to things that are proven. But as Bustillos says, people like Carmen don't have that luxury.

What an odd situation. We have never had such powerful medicine; and yet it has never been more necessary for sick people to learn to manage their own treatment.

Monday, October 17, 2016

John Gray and the human future

So many things I wish I could write about; so little time to do anything but work on those darn books. But at least I can call your attention to a few of those provocations. I’ll do one today, others later this week.

John Gray’s review of Yuval Noah Harari’s Homo Deus makes some vital distinctions that enthusiastic futurists like Harari almost never make. See this key passage:

Harari is right in thinking of human development as a process that no one could have planned or intended. He fails to see that the same is true of the post-human future. If such new species appear, they will be created by governments and powerful corporations, and used by any group that can get its hands on them – criminal cartels, terrorist networks, religious cults, and so on. Over time, these new species will be modified and redesigned, first by their human controllers, then by the new species themselves. It won’t be too long ­before some of them slip free from their human creators. One type may come out on top, at least for a while, but there is nothing to suggest this process will end in a godlike being that is supreme over all the rest. Like the evolution of human beings, post-human evolution will be a process of drift, with no direction or endpoint.

It is interesting how closely Gray’s argument tracks with the one C. S. Lewis made in The Abolition of Man, especially the third chapter, on the Conditioners and the rest of us. And of course — of course — there’s no question which group Harari identifies with, because futurists always assume the position of power — they always think of themselves as sitting comfortably among the Conditioners: “Forget economic growth, social reforms and political revolutions: in order to raise global happiness levels, we need to manipulate human biochemistry.” To which John Gray rightly replies,

Yet who are “we”, exactly? An elite of benevolent scientists, perhaps? If choice is an illusion, however, those who do the manipulating will be no freer than those who are being manipulated.

That point about free will is also Lewis’s. But Gray raises a possibility that Lewis didn't explore in Abolition, though he hints at it in the fictional counterpart to that book, That Hideous Strength: What if the Conditioners don't agree with one another? “If it ever comes about, a post-human world won’t be one in which the human species has deified itself. More like the cosmos as imagined by the Greeks, it will be ruled by a warring pantheon of gods.” And so John Gray’s recommendation to those of us who want to understand what such a world would be like? “Read Homer.”

Friday, October 14, 2016

the future of the codex Bible

Catching up on a topic dear to my heart: here’s a fine essay in Comment by J. Mark Bertrand on printed and digital Bibles. A key passage:

Pastors and scholars rely heavily on software like BibleWorks and Accordance, and laypeople in church are more likely to open Bible apps on their phones than to carry printed editions. The days are coming and may already be upon us when parishioners look askance at sermons not preached from an iPad. ("But aren't you missional?")

And yet, the printed Bible is not under threat. If anything the advent of e-books has ushered in a renaissance of sorts for the physical form of the Good Book. The fulfillment of the hypertext dream by digital Bibles has cleared the way for printed Bibles to pursue other ends. The most exciting reinvention of the printed Scriptures is the so-called reader's Bible, a print edition designed from the ground up not as a reference work but as a book for deep, immersive reading.

Please read it all. And then turn to Bertrand’s Bible Design Blog, where he has recently reflected further on the same issues, and written a few detailed posts — one and two and three — on the new Crossway Reader’s Bible, in six beautifully printed and bound volumes. I got my copies the other day, and they really do constitute a remarkable feat of workmanship and design. You can read, and view, more about the project here.



I would love to say more about all this — and other matters dear to the heart of this ol’ blog — but I am still devoting most of my time to work on two books, one on Christian intellectual life in World War II and one called How to Think: A Guide for the Perplexed. Those will be keeping my mind occupied for the next few months. When I am able to post here, the posts will likely do little more than point to interesting things elsewhere.

Tuesday, October 11, 2016

thoughts on the processing of words

This review was commissioned by John Wilson and meant for Books and Culture. Alas, it will not be published there.



“Each of us remembers our own first time,” Matthew Kirschenbaum writes near the beginning of his literary history of word processing — but he rightly adds, “at least ... those of us of a certain age.” If, like me, you grew up writing things by hand and then at some point acquired a typewriter, then yes, your first writing on a computer may well have felt like a pretty big deal.

The heart of the matter was mistakes. When typing on a typewriter, you made mistakes, and then had to decide what, if anything, to do about them; and woe be unto you if you didn't notice a mistyped word until after you had removed the sheet of paper from the machine. If you caught it immediately after typing, or even a few lines later, then you could roll the platen back to the proper spot and use correcting material — Wite-Out and Liquid Paper were the two dominant brands, though fancy typewriters had their own built-in correction tape — to cover the offending marks and replace them with the right ones. But if you had already removed the paper, then you had to re-insert it and try, by making minute adjustments with the roller or the paper itself, to get everything set just so — but perfect success was rare. You’d often end up with the new letters or words slightly out of alignment with the rest of the page. Sometimes the results would look so bad that you’d rip the paper out of the machine in frustration and type the whole page again, but by that time you’d be tired and more likely to make further mistakes....

Moreover, if you were writing under any kind of time pressure — and I primarily used a typewriter to compose my research papers in college and graduate school, so time pressure was the norm — you were faced with a different sort of problem. Scanning a page for correctable mistakes, you were also likely to notice that you had phrased a point awkwardly, or left out an important piece of information. What to do? Fix it, or let it be? Often the answer depended on where in the paper the deficiencies appeared, because if they were to be found on, say, the second page of the paper, then any additions would force the retyping not just of that page but of every subsequent page — something not even to be contemplated when you were doing your final bleary-eyed 2 AM inspection of a paper that had to be turned in when you walked into your 9 AM class. You’d look at your lamentably imprecise or incomplete or just plain fuddled work and think, Ah, forget it. Good enough for government work — and fall into bed and turn out the light.

The advent of “word processing” — what an odd phrase — electronic writing, writing on a computer, whatever you call it, meant a sudden and complete end to these endless deliberations and tests of your fine motor skills. You could change anything! anywhere! right up to the point of printing the thing out — and if you had the financial wherewithal or institutional permissions that allowed you to ignore the cost of paper and ink, you could even print out a document, edit it, and then print it out again. A brave new world indeed. Thus, as the novelist Anne Rice once commented, when you’re using a word processor “There’s really no excuse for not writing the perfect book.”

But there’s the rub, isn't there? For some few writers the advent of word processing was a pure blessing: Stanley Elkin, for instance, whose multiple sclerosis made it impossible for him to hold a pen properly or press a typewriter’s keys with sufficient force, said that the arrival of his first word-processing machine was “the most important day of my literary life.” But for most professional writers — and let’s remember that Track Changes is a literary history of word processing, not meant to cover the full range of its cultural significance — the blessing was mixed. As Rice says, now that endless revision is available to you, as a writer you have no excuse for failing to produce “the perfect book” — or rather, no excuse save the limitations of your own talent.

As a result, the many writers’ comments on word processors that Kirschenbaum cites here tend to be curiously ambivalent: it’s often difficult to tell whether they’re praising or damning the machines. So the poet Louis Simpson says that writing on a word processor “tells you your writing is not final,” which sounds like a good thing, but then he continues: “It enables you to think you are writing when you are not, when you are only making notes or the outline of a poem you may write at a later time.” Which sounds ... not so good? It’s hard to tell, though if you look at Simpson’s whole essay, which appeared in the New York Times Book Review in 1988, you’ll see that he meant to warn writers against using those dangerous machines. (Simpson’s article received a quick and sharp rebuttal from William F. Buckley, Jr., an early user of and advocate for word processors.)

Similarly, the philosopher Jacques Derrida, whom Kirschenbaum quotes on the same page:

Previously, after a certain number of versions, everything came to a halt — that was enough. Not that you thought the text was perfect, but after a certain period of metamorphosis the process was interrupted. With the computer, everything is rapid and so easy; you get to thinking you can go on revising forever.

Yes, “you get to thinking” that — but it’s not true, is it? At a certain point revision is arrested by publishers’ deadlines or by the ultimate deadline, death itself. The prospect of indefinite revision is illusory.

But however ambivalent writers might be about the powers of the word processor, they are almost unanimous in insisting that they take full advantage of those powers. As Hannah Sullivan writes in her book The Work of Revision, which I reviewed in these pages, John Milton, centuries ago, claimed that his “celestial patroness ... inspires easy my unpremeditated verse,” but writers today will tell you how much they revise until you’re sick of hearing about it. This habit predates the invention of the word processor, but has since become universal. Writers today do not aspire, as Italian Renaissance courtiers did, to the virtue called sprezzatura: a cultivated nonchalance, doing the enormously difficult as though it were easy as pie. Just the opposite: they want us to understand that their technological equipment does not make their work easier but far, far harder. And in many ways it does.



Matthew Kirschenbaum worked on Track Changes for quite some time: pieces of the book started appearing in print, or in public pixels, at least five years ago. Some of the key stories in the book have therefore been circulating in public, and the most widely-discussed of them have focused on a single question: What was the first book to be written on a word processor? This turns out to be a very difficult question to answer, not least because of the ambiguities inherent in the terms “written” and “word processor.” For instance, when John Hersey was working on his novel My Petition for More Space, he wrote a complete draft by hand and then edited it on a mainframe computer at Yale University (where he then taught). Unless I have missed something, Kirschenbaum does not say how the handwritten text got into digital form, but I assume someone entered the data for Hersey, who wanted to do things this way largely because he was interested in his book’s typesetting and the program called the Yale Editor or just E gave him some control over that process. So in a strict sense Hersey did not write the book on the machine; nor was the machine a “word processor” as such.

But in any case, Hersey, who used the Yale Editor in 1973, wouldn't have beaten Kirschenbaum’s candidate for First Word-Processed Literary Book: Len Deighton’s Bomber, a World War II thriller published in 1970. Deighton, an English novelist who had already published several very successful thrillers, most famously The IPCRESS File in 1962, had the wherewithal to drop $10,000 — well over $50,000 in today’s money — on IBM’s Frankensteinian hybrid of a Selectric typewriter and a tape-based computing machine, the MT/ST. IBM had designed this machine for heavy office use, never imagining that any individual writer would purchase one, so minimizing the size hadn’t been a focus of the design: as a result, Deighton could only get the thing into his flat by having a window removed, which allowed it to be swung into his study by a crane.

Moreover, Deighton rarely typed on the machine himself: that task was left to his secretary, Ellenor Handley, who also took care to print sections of the book told from different points of view on appropriately color-coded paper. (This enabled Deighton to see almost at a glance whether some perspectives were over-represented in his story.) So even if Bomber is indeed the first word-processed book, the unique circumstances of its composition set it well apart from what we now think of as the digital writing life. Therefore, Kirschenbaum also wonders “who was the first author to sit down in front of a digital computer’s keyboard and compose a published work of fiction or poetry directly on the screen.”

Quite possibly it was Jerry Pournelle, or maybe it was David Gerrold or even Michael Crichton or Richard Condon; or someone else entirely whom I have overlooked. It probably happened in the year 1977 or 1978 at the latest, and it was almost certainly a popular (as opposed to highbrow) author.

After he completed Track Changes, Kirschenbaum learned that Gay Courter’s 1981 bestselling novel The Midwife was written completely on an IBM System 6 word processor that she bought when it first appeared on the market in 1977 — thus confirming his suspicion that mass-market authors were quicker to embrace this technology than self-consciously “literary” ones, and reminding us of what he says repeatedly in the book: that his account is a kind of first report from a field that we’ll continue to learn more about.

In any case, the who-was-first questions are not as interesting or as valuable as Kirschenbaum’s meticulous record of how various writers — Anne Rice, Stephen King, John Updike, David Foster Wallace — made, or did not quite make, the transition from handwritten or typewritten drafts to a full reliance on the personal computer as the site for literary writing. Wallace, for instance, always wrote in longhand and transcribed his drafts to the computer at some relatively late stage in the process. Also, when he had significantly altered a passage, he deleted earlier versions from his hard drive so he would not be tempted to revert to them.

The encounters of writers with their machines are enormously various and fun to read about. Kirschenbaum quotes a funny passage in which Jonathan Franzen described how his first word processor kept making distracting sounds that he could only silence by wedging a pencil in the guts of the machine. Franzen elsewhere describes using a laptop with no wireless access whose Ethernet port he glued shut so he could not get online — a problem not intrinsic to electronic writing but rather to internet-capable machines, and one that George R. R. Martin solves by writing on a computer that can’t connect to the internet, using the venerable word-processing program WordStar. Similarly, my friend Edward Mendelson continues to insist that WordPerfect for MS-DOS is the best word-processing program, and John McPhee writes using a computer program that a computer-scientist friend coded for him back in 1984. (I don't use a word-processing program at all, but rather a programmer’s text editor.) If it ain’t broke, don't fix it. And if it is broke, wedge a pencil in it.



Kirschenbaum believes that this transition to digital writing is “an event of the highest significance in the history of writing.” And yet he confesses, near the end of his book, that he’s not sure what that significance is. “Every impulse that I had to generalize about word processing — that it made books longer, that it made sentences shorter, that it made sentences longer, that it made authors more prolific — was seemingly countered by some equally compelling exemplar suggesting otherwise.” Some reviewers of Track Changes have wondered whether Kirschenbaum isn’t making too big a deal of the whole phenomenon. In the Guardian of London, Brian Dillon wrote, “This review is being drafted with a German fountain pen of 1960s design – but does it matter? Give me this A4 pad, my MacBook Air or a sharp stick and a stretch of wet sand, and I will give you a thousand words a day, no more and likely no different. Writing, it turns out, happens in the head after all.”

Maybe. But we can’t be sure, because we can’t rewind history and make Dillon write the review on his laptop, and then rewind it again, take him to the beach, and hand him a stick. I wrote this review on my laptop, but I sometimes write by speaking, using the Mac OS’s built-in dictation software, and I draft all of my books and long essays by hand, using a Pilot fountain pen and a Leuchtturm notebook. I cannot be certain, but I feel that each environment changes my writing, though probably in relatively subtle ways. For instance, I’m convinced that when I dictate my sentences are longer and employ more commas; and I think my word choice is more precise and less predictable when I am writing by hand, which is why I try to use that older technology whenever I have time. (Because writing by hand is slower, I have time to reconsider word choices before I get them on the page. But then I not only write more slowly, I have to transcribe the text later. If only Books and Culture and my book publishers would accept handwritten work!)

We typically think of the invention of printing as a massively consequential event, but Thomas Hobbes says in Leviathan (1651) that in comparison with the invention of literacy itself printing is perhaps “ingenious” but fundamentally “no great matter.” Which I suppose is true. This reminds us that assessing the importance of any technological change requires comparative judgment. The transition to word processing seemed like a very big deal at the time, because, as Hannah Sullivan puts it, it lowered the cost of revision to nearly zero. No longer did we have to go through the agonies I describe at the outset of this review. But I am now inclined to think that it was not nearly as important as the transition from stand-alone PCs to internet-enabled devices. The machine that holds a writer’s favored word-processing or text-editing application will now, barring interventions along the lines of Jonathan Franzen’s disabled Ethernet port, be connected to the endless stream of opinionating, bloviating, and hate-mongering that flows from our social-media services. And that seems to me an even more consequential change for the writer, or would-be writer, than the digitizing of writing was. Which is why I, as soon as I’ve emailed this review to John Wilson, will be closing this laptop and picking up my notebook and pen.

Friday, October 7, 2016

"oddkin" and really odd kin

I’ve been reading Donna Haraway’s Staying with the Trouble: Making Kin in the Chthulucene, and like all Haraway’s work it’s a strange combination of the deeply unconventional and the deeply conventional. Conventional in that formally it’s a standard academic monograph, complete with all the usual apparatus, including not just proper citations and endnotes but also extensive thanks to all the high-class venues around the world where noted academics get to visit to give draft versions of their book chapters. Unconventional in that Haraway has some peculiar ideas and a peculiar (but often delightful, to me anyway) prose style. I find myself wishing that the form was as ambitious and unpredictable as the weirder of the ideas; the rigors of standard academic discursive practice serve as a kind of straitjacket for those ideas, it seems to me.

Here’s a passage that will give you a pretty good sense of what Haraway is up to in this book:

The book and the idea of “staying with the trouble” are especially impatient with two responses that I hear all too frequently to the horrors of the Anthropocene and the Capitalocene. The first is easy to describe and, I think, dismiss, namely, a comic faith in technofixes, whether secular or religious: technology will somehow come to the rescue of its naughty but very clever children, or what amounts to the same thing, God will come to the rescue of his disobedient but ever hopeful children. In the face of such touching silliness about technofixes (or techno-apocalypses), sometimes it is hard to remember that it remains important to embrace situated technical projects and their people. They are not the enemy; they can do many important things for staying with the trouble and for making generative oddkin.

“Making generative oddkin”? Yes. Seeking to become kin with all sorts of creatures and things — pigeons, for instance. There’s a brilliant early chapter here on human interaction with pigeons. Of course, that interaction has been conducted largely on human terms, and Haraway wants to create two-way streets where in the past they ran only from humans to everything else. “Staying with the trouble requires making oddkin; that is, we require each other in unexpected collaborations and combinations, in hot compost piles. We become-with each other or not at all.”

But here’s the thing: Haraway’s human kin are “antiracist, anticolonial, anticapitalist, proqueer feminists of every color and from every people,” and people who share her commitment to “Make Kin Not Babies”: “Pronatalism in all its powerful guises ought to be in question almost everywhere.”

It’s easy to talk about the need to “become-with each other,” but based on many years of experience, I suspect that — to borrow a tripartite distinction from Scott Alexander — most people who use that kind of language are fine with their ingroup (“antiracist, anticolonial, anticapitalist, proqueer feminists of every color and from every people”) and fine with the fargroup (pigeons), but the outgroup? The outgroup that lives in your city and votes in the same elections you do? Not so much.

So here’s my question for Professor Haraway: Does the project of “making kin” extend to that couple down the street from you who have five kids, attend a big-box evangelical church, and plan to vote for Trump? Fair warning: They’re a little more likely to talk back than the pigeons are.

Wednesday, October 5, 2016

pronoun trouble


“Hah! That’s it! Hold it right there!” And then, knowledgeably, confidentially, to the audience: “Pronoun trouble.”

Yes, we’re having lots of pronoun trouble these days — for instance, at the University of Toronto. That story quotes “A. W. Peet, a physics professor who identifies as non-binary and uses the pronoun ‘they.’” This is a topic on which plenty of people have plenty of intemperate things to say, which means that a good many of the important underlying issues typically haven’t been explored very thoughtfully. Let me try to identify a couple of them.

The first thing to note is that, so far anyway, the debates haven't been about all pronouns: they have focused only on third person singular pronouns, or what might substitute for the third person singular pronoun — as in the case of Professor Peet, above. That is, the gendered pronouns in English. (As a number of commenters have pointed out, these debates would be almost impossible to have in most of the other European languages, into which gender distinctions are woven so much more densely — and in which, therefore, paradoxically enough, they don't seem to carry as much identity-bearing weight.)

For the Ontario Human Rights Commission, gender identity is “each person’s internal and individual experience of gender. It is their sense of being a woman, a man, both, neither, or anywhere along the gender spectrum,” and everyone has a right to declare where they are on that spectrum — but, more important, also to have that declaration accepted and acknowledged by others.

I have some questions and thoughts.

1) How would I feel if I had a boss — my Dean, say, or Provost (hi Tom, hi Greg) — who persistently referred to me as “she”? And ignored me when I said “That should be ‘he’”? I’d be pretty pissed off. But I’m not sure the best way to describe such language is as a violation of my human rights. Nor do I think it should be seen as a criminal act. Might there be some less extreme language to describe it?

2) Would the experience for someone whose “individual experience of gender” is less traditional than my own be morally and legally different? If so, why?

3) Some people — for instance, here’s the story of Paige Abendroth — claim to experience a “flip” of their “individual experience of gender” from time to time, unpredictably. How responsible should Paige’s coworkers be for keeping up with the flipping? How much tolerance would, or should, the Ontario Human Rights Commission have for any co-workers of Paige who struggled to get it right? Conversely, does Paige have the responsibility to inform everyone in the workplace that such flipping occurs, and to announce to them when it has occurred so that they can start using the proper pronouns?

4) Why should gender be the only relevant consideration here? Suppose I come to experience, as some people do, a complete detachment from myself — a kind of alienation powerful enough that it feels wrong to speak of myself as “I,” and deeply uncomfortable to be addressed as “you” — an existential, not just a rhetorical, illeism. Would my human rights be violated if my co-workers continued to employ the pronoun “you” when addressing me directly, if I wished them not to do so? Surely someone will object that if I were to have this experience it would be a sign of psychological disorder, but why? By what reasoning can we say that that kind of experience is disordered but the experience of Paige Abendroth isn’t? This is not a rhetorical question: I’d really like to know what the argument would look like.

5) We’ve been here before: just ask the theologians. The way many of them have addressed the “pronoun trouble” posed when they talk about God is, I think, a harbinger of the future. Many theologians say things like “How God experiences Godself is an unfathomable mystery” and “We must not think of God as utterly independent of God’s creation” — which is to say, they avoid pronouns altogether, gendered ones anyway. As pronoun preference comes to be more and more frequently enshrined within the discourse of human rights, people will become more and more fearful of the consequences of getting pronouns wrong; and the best way to avoid getting the pronouns wrong is to stop using them altogether.

I bet that’s where we’re headed. And I don't even think it will be all that hard to manage. If someone says "Paige needs to do what's best for Paige" instead of "Paige needs to do what's best for her," no one would even notice. When I first started thinking about how all this might apply to me, I realized how rarely, in a classroom setting, I have cause to use third-person singular pronouns. If Alison makes an interesting comment and I want to get a response, I say "What do y'all think about Alison's argument?" not "What do y'all think about her argument?" — the latter would seem rude, I think. Moreover, the good people at Language Log have spent years rehabilitating the singular "they." I can easily imagine the use of third-person singular pronouns gradually all but disappearing from our everyday language — though it will be easier to achieve in speech than in writing.

And then the world will move on to the next gross violation of human rights.

Mary Midgley on cooperative thinking

Mary Midgley is one of my favorite philosophers. Her The Myths We Live By plays a significant role in a forthcoming book of mine and her essay “On Trying Out One’s New Sword” eviscerates cultural relativism, or what she calls “moral isolationism,” more briefly and elegantly than one would have thought possible.

Midgley studied philosophy at Oxford during World War II, along with several other women who would become major philosophers: Elizabeth Anscombe, Philippa Foot, Mary Warnock, Iris Murdoch. People have often wondered how this happened — how, in a field so traditionally inhospitable to women, a number of brilliant ones happened to emerge at the same time and in the same place. Three years ago, in a letter to the Guardian, Midgley offered a fascinating sociological explanation:

As a survivor from the wartime group, I can only say: sorry, but the reason was indeed that there were fewer men about then. The trouble is not, of course, men as such – men have done good enough philosophy in the past. What is wrong is a particular style of philosophising that results from encouraging a lot of clever young men to compete in winning arguments. These people then quickly build up a set of games out of simple oppositions and elaborate them until, in the end, nobody else can see what they are talking about. All this can go on until somebody from outside the circle finally explodes it by moving the conversation on to a quite different topic, after which the games are forgotten. Hobbes did this in the 1640s. Moore and Russell did it in the 1890s. And actually I think the time is about ripe for somebody to do it today. By contrast, in those wartime classes – which were small – men (conscientious objectors etc) were present as well as women, but they weren't keen on arguing.

It was clear that we were all more interested in understanding this deeply puzzling world than in putting each other down. That was how Elizabeth Anscombe, Philippa Foot, Iris Murdoch, Mary Warnock and I, in our various ways, all came to think out alternatives to the brash, unreal style of philosophising – based essentially on logical positivism – that was current at the time. And these were the ideas that we later expressed in our own writings.

Given that so many people think of philosophy simply as arguing, and therefore as an intrinsically competitive activity, it might be rather surprising to hear Midgley claim that interesting and innovative philosophical thought emerged from her environment at Oxford because of the presence of a critical mass of people who “weren't keen on arguing” but were “more interested in understanding this deeply puzzling world” (emphasis mine).

In a recent follow-up to and expansion of that letter, Midgley quotes Colin McGinn describing his own philosophical education at Oxford, thirty years later, especially in classes with Gareth Evans: “Evans was a fierce debater, impatient and uncompromising; as I remarked, he skewered fools gladly (perhaps too gladly). The atmosphere in his class was intimidating and thrilling at the same time. As I was to learn later, this is fairly characteristic of philosophical debate. Philosophy and ego are never very far apart. Philosophical discussion can be ... a clashing of analytically honed intellects, with pulsing egos attached to them ... a kind of intellectual blood-sport, in which egos get bruised and buckled, even impaled.” To which Midgley replies, with her characteristic deceptively mild ironic tone:

Well, yes, so it can, but does it always have to? We can see that at wartime Oxford things turned out rather differently, because even bloodier tournaments and competitions elsewhere had made the normal attention to these games impossible. So, by some kind of chance, life had made a temporary break in the constant obsession with picking small faults in other people’s arguments – the continuing neglect of what were meant to be central issues – that had become habitual with the local philosophers. It had interrupted those distracting feuds which were then reigning, as in any competitive atmosphere feuds always do reign, preventing serious attempts at discussion, unless somebody deliberately controls them.

And Midgley doesn't shy away from stating bluntly what she thinks about the intellectual habits that Gareth Evans was teaching young Colin McGinn and others: “Such habits, while they prevail, simply stop people doing any real philosophy.”

So Midgley suggests that other habits be taught: “Co-operative rather than competitive thinking always needs to be widely taught. Feuds need to be put in the background, because all students equally have to learn a way of working that will be helpful to everybody rather than just promoting their own glory.” Of course, promoting your own glory is the usual path to academic success, and if that’s what you want, then your way is clear. But Midgley wants people who choose that path to know that if they don't learn co-operative thinking, “they can’t really do effective philosophy at all.” They won't make progress “in understanding this deeply puzzling world.”

I can't imagine any academic endeavor that wouldn't be improved, intellectually and morally, if its participants heeded Midgley’s counsel.

Monday, October 3, 2016

books on The Good Book

The Wall Street Journal commissioned this review but in the end didn't find space for it. Which is cool, because they paid me for it anyway. I offer it here gratis, for your reading pleasure. 



One of the first attempts to account for literature in terms of evolutionary psychology was provided by Steven Pinker, in his 1997 book How the Mind Works. There he suggested that “Fictional narratives supply us with a mental catalogue of the fatal conundrums we might face someday and the outcomes of strategies we could deploy in them.” Take Hamlet for example: “What are the options if I were to suspect that my uncle killed my father, took his position, and married my mother?”

This was perhaps a rather wooden and literal-minded example, and Pinker has received some hearty ribbing for perpetrating it, so one might expect that more recent entries in the genre have grown more sophisticated. But not so much.

The difficulties start with what ev-psych critics think a story is. They think a book is a kind of machine for solving problems of survival or flourishing, sort of like a wheel or a hammer except made with words rather than wood or rock. Thus Carel van Schaik and Kai Michel (hereafter S&M) in The Good Book of Human Nature: An Evolutionary Reading of the Bible: “We know how humans evolved over the last 2 million years and how and to what degree the prehistoric environment shaped the human psyche.... We can therefore reconstruct the problems the Bible was trying to solve.” Leaving aside the rather significant question of how much “we” actually do know about human prehistory and its role in forming our brains, one might still ask whether the Bible is a problem-solving device. But this is one of the governing assumptions of S&M’s book, and no alternatives to it are ever considered.

The Good Book of Human Nature is governed by a few other assumptions too. One is that the turning point in human development was what Jared Diamond called “the worst mistake in the history of the human race”: trading in a hunter-gatherer life for a sedentary agricultural life. Another is that humans possess three “natures” that are related to this transition: first, “innate feelings, reactions, and preferences” that predate the transition; second, a cultural nature, based on strategies for dealing with the problems that arose from assuming a sedentary life; and third, “our rational side,” which is based on consciously held beliefs.

These assumptions in turn generate a theory of religion, which is basically that religion is a complex strategy for keeping the three natures in some degree of non-disabling relation to one another. And when, equipped with these assumptions and this theory, S&M turn their attention to the Bible — again, conceived as a problem-solving device — it turns out that the Bible confirms their theory at every point. Previous interpreters of the Bible, S&M note, have never come to any agreement about what it means, but they have discovered what it’s “really about,” what its “actual subject really is”: “the adoption of a sedentary way of life.” They do not say whether they expect to put an end to interpretative disagreement. Perhaps modesty forbade.

Thus armed, S&M get to work. The patriarchal narratives illustrate and teach responses to “the problems created by patriarchal families,” and formulate an “expansion strategy” in relation to said problems. The portions of Scripture known in Judaism as the Writings — Ketuvim, including the Psalms, Proverbs, Job and so on — collectively embody an IAR (immunization against refutation) strategy. The prophets, including the New Testament’s accounts of the life of Jesus? All about CREDs (credibility-enhancing displays).

If you like this sort of thing, this is the sort of thing you’ll like. To me, a little of it goes a very long way — and this Good Book offers 450 pages of it, which is like a two-finger piano exercise that lasts seven hours. My complaint is the opposite of that put forth by the Emperor in Amadeus: Too few notes, I say. Played too many times.

Is it really likely that this enormously divergent collection of writings we call the Bible has a single “subject”? That the heartfelt outpourings of the Psalms and the lamentations of Job amount to a “strategy”? Moreover, given that the conditions of production that S&M think relevant — the shift from hunter-gatherers to agriculturalists — happened all over the world, the account they give here should be the same were they working on any surviving writings from the same era. Which means that their book on Homer and Hesiod and Sappho would say mostly the same things this book says.

This is what happens when you confine your reading to a few highly general principles of “human history” and “human social development”: all the particularity, and therefore all the interest, drains from the world. S&M may have encountered some interesting residual phenomena from the sedentarization of Homo sapiens. What they have not encountered is the Bible.



After all this, I turned with some relief to A. N. Wilson’s The Book of the People, not because I expected to agree with it, but because I expected it to involve something clearly recognizable to me as reading. But I did not get quite what I thought I would.

The material of Wilson’s book arises largely from conversations with a person known only by the single initial “L.” Wilson unaccountably extends this peculiar naming convention to everyone else in the book, including his wife and daughters and an English journalist (“H.”) living in Washington who once wrote for a number of London periodicals, smoked and drank a lot, and ultimately died of throat cancer. (Couldn't we at least call him Hitch?) But in the case of L. there seems to be good reason for this limited form of identification.

Wilson met L. when he was an undergraduate and she a graduate student at Oxford. Wilson very gradually discloses details about her over the course of the book: that she was very tall and wore thick glasses; that she was a Presbyterian; that she was a disciple of the great Canadian literary scholar Northrop Frye; that she had a lifelong history of mental illness, which may have contributed to an irregular work history and a preference for moving frequently; and, above all, that she planned to write a book about the Bible.

Wilson studied theology at one point, and considered entering the priesthood, but later became thoroughly disillusioned by Christianity and by religion in general, going so far as to write a pamphlet called Against Religion (1991). But almost as soon as he had written it he began to have reservations — “I am in fact one of life’s wishy-washies,” he confesses at one point — and eventually returned to belief, as L. had prophesied he would. L. told him that he could only come to the truth about God and the Bible after rejecting falsehoods about it, chief among those falsehoods being the two varieties of fundamentalism: theistic and atheistic.

As Wilson travels through life — and travels around the world: much of this book involves descriptions of apparently delightful journeys to romantic or historic places — he keeps thinking about the Bible, and when he does he also thinks of L. They correspond; they meet from time to time. Typically she has moved to another place and has added to her notes on her Bible book, though she never gets around to writing it. Eventually we learn that she has died. Wilson manages to get to her funeral, at an Anglo-Catholic convent in Wiltshire, and receives from the nuns there a packet containing her jottings. “It is from these notes that the present book is constructed. This is L.’s book as much as mine.”

So what does Wilson learn from L. about the Bible? It is hard to say. To give one example of his method: at one point he muses that L. must have in some sense patterned herself on Simone Weil, the great French mystic who died in 1943, which reminds him that Weil had been brought to Christian faith largely by her encounter with the poetry of the 17th-century Anglican George Herbert. This leads him to quote some of Herbert’s poems, and to note their debt to the Psalms, which in turn leads him to think about how the Psalms are used in the Gospels, which, in the last link of this particular literary chain, leads him to wonder whether the story of the Crucifixion is but poetry, a “literary construct.” A question which he does not answer: instead he turns to an account of L.’s funeral.

That’s how this book goes: it consists of a series of looping anecdotal flights that occasionally touch down and look at the Bible for a moment, before being spooked by something and lifting off again. There is at least as much about traveling to Ghent to see Van Eyck’s great altarpiece, and reading Gibbon’s Decline and Fall in Istanbul with Hagia Sophia looming portentously in the background, and meeting L. in coffeeshops, as about the Bible itself.

If there is any definitive lesson Wilson wishes us to learn from all this, it is the aforementioned folly of fundamentalism. At several points he recalls his own forays into the “historical Jesus” quests and dismisses them as pointless: none of the rock-hard evidence believers seek will ever be found, nor will unbelievers be able to find conclusive reason to dismiss the accounts the Gospels give of this peculiar and extraordinary figure.

At this point we should reflect on that literary device of using initials rather than names. More than once Wilson calls to our attention the view widely held among biblical scholars that the texts we have are composites of earlier and unknown texts: thus the “Documentary Hypothesis” about the Pentateuch, with its four authors (J, E, D, and P), and the posited source (in German Quelle) for the synoptic Gospels, Q. In light of all this we cannot be surprised when, late in the book, Wilson confesses that L. is herself a “composite figure,” one he “felt free to mythologize.”

Is he simply saying that we’re all just storytellers, that it’s mythologizing all the way down, no firm floor of fact to be discovered? If so, then while The Book of the People may in some sense live up to its subtitle — How to Read the Bible — it certainly does not tell us, any more than S&M did, why we should bother with this strange and often infuriating book.

I find it hard not to see both The Good Book of Human Nature and The Book of the People as complicated attempts to avoid encountering the Bible on its own terms, in light of its own claims for itself and for its God. I keep thinking that what Kierkegaard said about “Christian scholarship” is relevant to these contemporary versions of reading: “We would be sunk if it were not for Christian scholarship! Praise be to everyone who works to consolidate the reputation of Christian scholarship, which helps to restrain the New Testament, this confounded book which would one, two, three, run us all down if it got loose.”