Text Patterns - by Alan Jacobs

Saturday, November 30, 2013

"Cheat the Prophet" revisited



I was having some fun on Twitter this morning with this piece of prophetic silliness — silly even for the a-scientist-predicts-the-future genre, which is saying a lot. Computers will disappear! — because they will be ubiquitous, and I’m sure there’s no need even to wonder if ubiquitous computing could be useful to ubiquitous governments, because we’re told later in the piece that technology is bad for dictators. Capitalism will be perfected! — which means that there will no longer be any possibility of sales resistance, of saying No to the capitalists. That silly “digital divide” people used to talk about never happened! — which I know is the object of constant gratitude for all those kids in Bangladesh and Mozambique with their iPads. And since there’s no mention of global warming in the piece, or the provision of electricity to places and people that don’t have it, or the availability of clean water to places and people who currently don’t even have that, I’m sure all those little glitches in the March of Progress will have been straightened out by 2050, probably with a few lines of elegant code.

You know the kind of thing. So here I just want to make one comment: whenever I read such pieces I find myself recalling the first chapter of Chesterton’s The Napoleon of Notting Hill, from 1904, which begins with these still-utterly-relevant words:

The human race, to which so many of my readers belong, has been playing at children’s games from the beginning, and will probably do it till the end, which is a nuisance for the few people who grow up. And one of the games to which it is most attached is called “Keep to-morrow dark,” and which is also named (by the rustics in Shropshire, I have no doubt) “Cheat the Prophet.” The players listen very carefully and respectfully to all that the clever men have to say about what is to happen in the next generation. The players then wait until all the clever men are dead, and bury them nicely. They then go and do something else. That is all. For a race of simple tastes, however, it is great fun.

Friday, November 29, 2013

a broken spell

Recently I was reading a lovely autobiographical essay by Zadie Smith about — well, in the way of the true essay, it’s about several things: gardens, civility, grief, memory. Much of it concerns her travels with her late father, and those scenes are beautifully rendered.

And then I came to her description of the Borghese Gardens in Rome, and read this:

For our two years in Rome, the Borghese Gardens became a semiregular haunt, the place most likely to drag us from our Monti stupor. And I always left the park reluctantly; it was not an easy transition to move from its pleasant chaos to the sometimes pedantic conventionality of the city. No, you can’t have cheese on your vongole; no, this isn’t the time for a cappuccino; yes, you can eat pizza on these steps but not near that fountain; in December we all go to India; in February we all ski in France; in September of course we go to New York. Everything Romans do is perfect and delightful, but it is sometimes annoying that they should insist on all doing the same things at exactly the same time. I think their argument is: given that all our habits are perfect and delightful, why would anyone stray from them?

And in an instant all my interest and sympathy evaporated. Those are the things that “Romans” do, yes? Travel to India and New York City, ski in France? These are the habits of “Romans”? But of course Smith means the tiny, tiny fraction of Romans who have the extravagant wealth to do these things — the .01 percent, the absolute elite. These are the “Romans” she knows.

To this one might reply, well, Smith herself was not born into privilege: a biracial woman from London who grew up in straitened circumstances if not absolute poverty, she knows what it’s like to struggle. Exactly: all the more reason for her not to take privilege — extraordinary privilege — as the norm. “Romans” indeed.

I am willing, seriously willing, to consider that this response may well be a failure of charity on my part, so I record it not as a confident judgment but as a snapshot of readerly experience. Whether I was right or wrong to respond as I did, I think it noteworthy that with that paragraph my involvement in the essay — which until that point had been complete, I had been absorbed — ended. I listlessly cast my eyes over the last few paragraphs. The voice that had so delighted me a few moments before now seemed to me almost precious in its complacency. A lovely little spell had broken and could not be brought back. Whether it was Smith or I who broke it I leave as an exercise for you, my readers.

Wednesday, November 27, 2013

the rich are different

In his great autobiographical essay “Such, Such Were the Joys,” George Orwell remembers his schooldays:

There never was, I suppose, in the history of the world a time when the sheer vulgar fatness of wealth, without any kind of aristocratic elegance to redeem it, was so obtrusive as in those years before 1914. It was the age when crazy millionaires in curly top-hats and lavender waistcoats gave champagne parties in rococo house-boats on the Thames, the age of diabolo and hobble skirts, the age of the ‘knut’ in his grey bowler and cut-away coat, the age of The Merry Widow, Saki's novels, Peter Pan and Where the Rainbow Ends, the age when people talked about chocs and cigs and ripping and topping and heavenly, when they went for divvy week-ends at Brighton and had scrumptious teas at the Troc. From the whole decade before 1914 there seems to breathe forth a smell of the more vulgar, un-grown-up kind of luxury, a smell of brilliantine and crème-de-menthe and soft-centred chocolates — an atmosphere, as it were, of eating everlasting strawberry ices on green lawns to the tune of the Eton Boating Song. The extraordinary thing was the way in which everyone took it for granted that this oozing, bulging wealth of the English upper and upper-middle classes would last for ever, and was part of the order of things. After 1918 it was never quite the same again. Snobbishness and expensive habits came back, certainly, but they were self-conscious and on the defensive. Before the war the worship of money was entirely unreflecting and untroubled by any pang of conscience. The goodness of money was as unmistakable as the goodness of health or beauty, and a glittering car, a title or a horde of servants was mixed up in people's minds with the idea of actual moral virtue.

What follows is purely subjective and impressionistic, but: I think in America in 2013 we’re back to that point, back, that is, to an environment in which “the worship of money [is] entirely unreflecting and untroubled by any pang of conscience.”

We have plenty of evidence that the very rich are deficient in generosity and lacking in basic human empathy, and yet there seems to be a general confidence in the very rich — a widespread belief that those who have amassed great wealth, by whatever means, can be trusted to fix even the most intractable social problems.

Consider in this light, and as just one example, the widespread enthusiasm for the rise of the MOOC. The New York Times called 2012 The Year of the MOOC in an article composed almost wholly of MOOC-makers’ talking points, and even when the most prominent advocate of MOOCs abandons them as a lost cause, he still gets reverential puff pieces. Some people can do no wrong. They just have to have enough money — and to have gotten it in the right way.

I think this “entirely unreflecting” “worship of money” is sustained by one thing above all: wealth-acquisition in America today, in comparison to wealth-acquisition in the Victorian age or across the Pacific in China, feels clean. Pixel-based and sootless. No sweatshops in sight — those are well-hidden in other parts of the world. We may happen to find out that Amazon’s warehouses aren’t that different from sweatshops, but that knowledge doesn't seem to change much, in large part because our own dealings with Amazon are so frictionless and, again, clean: no handing over of cash, not even credit cards after you enter your number that first time, just pointing and clicking and waiting for the package to show up on your porch. Oh look, there it is. Not only are the actual conditions of production hidden, but even the nature of the transaction is invisible, de-materialized. (I could be talking about MOOCs here as well: they work the same way.)

It’s almost impossible to think of Jeff Bezos or Steve Jobs or Sebastian Thrun as robber baron industrialists or even as captains of industry, even if the occasional article appears identifying them as such, because what they do doesn't fit our imaginative picture of “industry.” They seem more like the economic version of the Mr. Fusion Home Energy Reactor in Doc’s DeLorean: you just throw any old crap in and pure hi-res digital money comes out.

Cue Donald Fagen:



A just machine to make big decisions
Programmed by fellows with compassion and vision
We’ll be clean when their work is done
We’ll be eternally free, yes, and eternally young

Happy Thanksgiving, everybody.

Monday, November 25, 2013

on reading and flux


Please read this lovely reflection by Frank Chimero on “what screens want” — a gloss on Kevin Kelly’s What Technology Wants — though Chimero makes this important and (to my mind) necessary pivot near the end: “Let me leave you with this: the point of my writing was to ask what screens want. I think that’s a great question, but it is a secondary concern. What screens want needs to match up with what we want.”

It’s a rich and subtle essay that covers several key topics, and thinks in appropriately large terms; I’ll be returning to it. But just for now I want to zero in on an especially intriguing part of the essay in which Chimero meditates on Eadweard Muybridge’s early moving pictures of a running horse.

Muybridge’s sequential photographs of a running horse

Of these images Chimero writes,

And you know, these little animations look awfully similar to animated GIFs. Seems that any time screens appear, some kind of short, looping animated imagery of animals shows up, as if they were a natural consequence of screens. 

Muybridge’s crazy horse experiment eventually led us to the familiar glow of the screen. If you’re like me, and consider Muybridge’s work as one of the main inroads to the creation of screens, it becomes apparent that web and interaction design are just as much children of filmmaking as they are of graphic design. Maybe even more so. After all, we both work on screens, and manage time, movement, and most importantly, change. 

So what does all of this mean? I think the grain of screens has been there since the beginning. It’s not tied to an aesthetic. Screens don’t care what the horses look like. They just want them to move. They want the horses to change. 

Designing for screens is managing that change. To put a finer head on it, the grain of screens is something I call flux.

He then goes on to define high, medium, and low flux, and to describe some situations in which one or the other might be called for.

All this has me thinking about the degree of flux appropriate to different reading experiences. This seems to me highly variable according to genre and purpose. For instance, the New Republic’s iPad app is designed to offer higher flux than other magazine apps I’ve seen, which are minimally interactive: here you have poems that you slide into view with a finger, taps that open deeper levels of content, and so on. Sometimes it’s too much, and at other times it takes too long to figure out how a given story works — they vary more than they ought to — but in general I like it. A good deal of thought has gone into the design, and more often than not the interactions are appropriate to the particular story and help me to engage more fully with it.

But I would never want to read Anna Karenina this way. The kind of concentration demanded by a long, complex, serious novel cannot bear much, if any, flux. And unnecessary flux can readily be avoided by reading it in a codex — hooray for that! But if people do gradually shift more and more towards reading on some kind of screen or other, and screens become increasingly capable of variable degrees of flux (as e-ink screens currently are not), then we readers will be ever more dependent on designers who possess a deep sensitivity to context and purpose — pixel-based designers who are, as a matter of basic professional competence, as flexible and nuanced in their design languages as the best print-based designers are today. Or, at the very least, they’ll need to build in the possibility of opting out of their fluxier interfaces. As someone who’s headed for a more screen-based reading future, I’m a little nervous about all this.

disrupting journalism!

(Nah, not really. Just wanted to try out that language for size.)

But: I was talking with some people on Twitter this morning about my frustrations with what has now become a very familiar set of experiences: the whole merry-go-round of publicity that accompanies the appearance of a book.

Before I go any further, I should note that my adventures on this merry-go-round amount to nothing in comparison with what people-who-make-their-living-by-writing go through. Only once in my career have I written a book that generated perceptible media attention, and doing the publicity for that absolutely exhausted me — which probably accounts for my dyspeptic attitude towards even small bouts of book-promoting exercises today. I can't even begin to imagine what it must be like to be Neil Gaiman: "I’m currently dealing with how to go back to being a writer. Rather than whatever it is that I am. A traveller, a signer, a promoter, a talker, a lecturer."

So here's how it goes: a journalist writes or calls to ask for an interview, and wants to do the interview by phone. If I agree — in violation of my profound dislike of the telephone — then commences the awkward dance of trying to find a time when we can both talk, and, when that's finally worked out, I am permitted to try to improvise on-the-spot answers to questions that I have already answered, with considerably greater care, in the book itself. Then I just have to hope — though the years have almost cured me of hoping — that the journalist transcribes what I say accurately and in its proper context. And, for dessert, I get to be annoyed by the way I put things and wish I could go back and express myself more clearly.

(By the way, no belief is more sacrosanct among journalists than the belief that it would be profoundly unethical to let me rewrite my comment about, say, nineteenth-century controversies over the Ornaments Rubric — even though I've yet to find anyone who can explain to me why that would be so. They always invoke politicians and political controversy, without explaining why the same rules should apply to interviewing politicians and interviewing scholars or other writers.)

Perhaps you can tell that I'm not thrilled about this way of doing things? So my common practice now is to decline phone interviews and ask to do things by email instead. Sometimes I am told that this is not permissible, in which case, Oh well. (When I've been given a reason, that reason has always been "because in email you don't get the give-and-take," which always makes me wonder whether there are email clients without Reply buttons.) But when people agree, then I sit down to answer the questions and realize, wait a minute, I'm writing the article! I'm going to do all the work and they're going to get the byline and the paycheck! Well, it was my choice, after all....

I'm supposed to be willing to do all this because it gets my book "exposure," it has "publicity value," and I suppose that once may have been true, but I wonder to what extent it still is. Certainly publishers believe in it, and promote the model; but I have my doubts that a model formed by a kind of handshake agreement between publishers (who want to get the word out about their books) and journalists (who need ever-new "content") is all that it needs to be when we all have the internet and its social media at our fingertips.

I'm just wondering — genuinely wondering — whether there might be models of doing ... this kind of thing ... don't know what to call it ... that might be more flexible and generous and less taxing to everyone concerned. Especially, of course, The Author, but I've been on both sides of this fence: I have interviewed people for articles — almost always by email, though once I bought lunch for a well-known musician for an Oxford American piece that never saw the light of day — and I've written for dailies, weeklies, bimonthlies, monthlies, quarterlies, the whole show, so I know those challenges as well. There's drudgery for journalists in the usual way of doing business, and maybe it could be made more fun for them as well.

Even small adjustments could help: Alex Massie suggested to me the value of IM interviews, and that made me remember the few times I've done those — I really enjoyed them. They have the spontaneity of conversation but also allow you to take a moment to get your thought into shape before committing to the Enter key. In another exchange that happened almost simultaneously — I like that about Twitter — Erin Kissane emphasized just this value of conversation, and I suppose that's one reason why I have always enjoyed talking with Ken Myers for his wonderful Mars Hill Audio Journal: the dialogue gradually and naturally unfolds, and while Ken always edits with care and skill to make me sound smarter than I am, he never eliminates that conversational tone. If doing publicity were always like that....

Anyway, I'd love to hear some good — disruptive! innovative! — ideas in the comments, especially from journalists. And thanks to those of you who, over the years, have helped to put my ideas before the public.

And by the way: if you don't subscribe to the Mars Hill Audio Journal, you should consider it. It's great.

Shady Characters

Well, the end of the semester is almost at our throats, so I don't have much spare time at the moment, but I've been spending some of the time I do have with Keith Houston's delightful book Shady Characters: The Secret Life of Punctuation, Symbols, & Other Typographical Marks. Nice job getting the ampersand into the title, Keith!

Follow that last link and you'll be taken to an entry on the ampersand in the Shady Characters blog, where much of the content of the book may be found — not that I'm discouraging anyone from buying the book, just the reverse, but, you know, the spirit of full disclosure and all that. And it's a wonderfully well-designed blog that happens to be chock-full of delightful information. Fair warning: if you're at all interested in printing and typography you could get lost there for hours.

Sunday, November 24, 2013

in which I try and fail to make sense of an essay on the future of the Bible

Thomas Larson writes,

Here at the end of the four-century reign of books in our culture, which is to say in the digital age, I’m curious about what happens to the Bible, publishing’s crown jewel.

Kind of an odd way to talk about the Bible, but okay. Still, are we “at the end of the four-century reign of books in our culture”? That’s the sort of claim that needs to be established, does it not? We might ask this question before we jump to a conclusion: When was the apogee? When was the point at which the highest percentage of persons in “our culture” read books? It could be embarrassing if the answer turned out to be Now.


If it’s true that the digital era is iconoclastic, muting the sacredness of religion-spawning texts, then can we still say that this “holiest” of Western books is still “holy?”

Is the digital era iconoclastic? One might more plausibly argue that it’s prolific of icons and iconography. Does it “mute” sacredness? If so, how? And what does that actually mean?

By “holy,” I mean first that the Bible is supposedly decreed by God and so inerrant; and second that its long veneration as a literary masterpiece has earned it unimpeachable value. Both of these lend it an aerie of its own. The “divinely inspired” Christian canonical book, Old testaments and New, codified in Greek in the late 4th century, translated into Latin in the 5th century and English in the 17th, sells some 25 million copies each year. Would Christianity be possible without the Bible?

Pretty sure the answer to that one is “No.” But how is that related (or not) to its status as a “literary masterpiece”? Moreover, I don't know what Larson means when he talks about the Bible being “codified,” but the Old Testament is written in Hebrew, not Greek; the canon was almost fully established long before the 4th century; there were Latin translations of both testaments before the 5th century; and translations of the Bible into English began no later than the seventh century and have been done in every century since. (Good grief.)

I’m unable to drop the quotes around “holy” since I think the idea of this particular book, a thought that extends to other revered documents like the Quran, the Vedas, and the Torah, contains a paradox: its assertion as the infallible, inalterable laws and teachings of God exists in cultures whose kings and republics have declared moral claims beyond, and in disagreement with, the Bible. In the West, neither states nor religions govern by the Bible anymore. And yet a majority of Christians still avow that the Bible’s laws apply to all human conduct — or should. After centuries of unremitting proselytization, both oral and written, the Bible continues to spread its influence across literature, government, politics, and education. That spread has made it the most sociable text in our language, in ways other books and their claims to truth only wish they could appropriate.

So it “continues to spread its influence” while “neither states nor religions govern by the Bible anymore” — but doesn't that second point suggest that its influence is waning rather than spreading?

The Old English poem Beowulf, for example, is a mighty tale; it is heroic, fiercely dramatic, mythic, first oral, then written (its finest hair-raising translation is by Seamus Heaney). But, unlike the Bible, Beowulf has not been copied, preached, interpreted, and sung via synods of revisers and popularizers over the past 1500 years. The Bible has always been spoken from pulpit and pew, in church basements and in Congress. It is spoken of and for vastly more than its printed self is read in silence. (As of 1850 only ten percent of the world could read, and during the era of the Bible’s development it was a tenth of that.) The Bible is spoken, hence: The greatest story ever told. It is, therefore, “true” because people speak it on — tributaries to a continent-crossing river.

It’s unclear to me what Beowulf is doing in this paragraph, but setting that aside, this seems an odd definition of “truth”: a book is true because “people speak it on”?

In a sense, this is the definition of a “holy book.” A book whose claims and identity are recast and testified to as true and false by every generation — the greatest story ever told and sold — for two-and-a-half millennia. As such, those who debate or believe or deny its origins participate in, indeed drive, its collective prevalence, what we might call its social authorship.

So from the definition of truth to the definition of a “holy book”: one “whose claims and identity are recast and testified to as true and false by every generation ... for two-and-a-half millennia.” So, not the Qur’an, then? (Yes, I understand that Larson can only mean that the long history of this book makes it a holy book, but that’s not what he writes. Prose this sloppy testifies to a similar lack of rigor in thinking.)

The written Bible carries its oral tradition in its musicality. As Charles McGrath writes in the New York Times Book Review, even though the King James Bible of 1611 is “deliberately archaic” in its “grammar and phraseology,” preachers have trumpeted its dactylic prose: “God giveth and taketh away.”

Where to begin? First, that’s not a quote from any translation of the Bible. The closest approximation is Job 1:21 — “the Lord gave, and the Lord hath taken away” — or, as it is commonly paraphrased in proverbial form, “the Lord giveth, and the Lord taketh away.” But neither the paraphrase nor the actual quote from the KJV nor Larson’s half-remembered version is dactylic. Not even close. And even if they were, what would it mean to praise prose for being dactylic? What if it were iambic or anapestic? Would those be good or bad things in prose?

Let’s skip forward a bit.

So how’s the Bible doing in our device-ridden time? It seems that if it’s seldom read, and not being handled as a book, it’s less likely to be believed.

“It seems”? Does it now? Don’t we have a good bit of historical evidence to demonstrate that belief in the authority of the Bible declined a long time ago among the most educated — that is, among people who more than others in our society read and handle books?

Which is one message of literary critics and outspoken atheists.

Wait: “Literary critics and outspoken atheists” have the same message, which is ... what, precisely?

It may be part of the drying up of deep reading and scholarship, of college majors in religious studies and the humanities.

“It”?

People need to train for good-paying jobs; they have no time to engage books, even “holy” ones. And yet the millennial purveyors of the Bible seem not to lament this loss. They simply recast their message, as they’ve always done. For a century, from Cecil B. DeMille and the Jeffrey-Hunter Jesus, to Martin Scorsese and Mel Gibson movies, to the VHS tape and the CD-ROM, the Bible’s multimedia reach has exploded, re-tribalizing itself in multifarious electronic forms.

What would have been the better option for “millennial purveyors of the Bible”? (Also, what are “millennial purveyors”?)

The leap from Holy Book to Holy Multimedia has already been made. The 1979 “Jesus” film, produced by Warner Brothers, has been translated into 1000 languages; it’s exported primarily to people who cannot read and write. Mark Burnett’s 2013 ten-part miniseries, The Bible, won this year’s largest TV audience: 100 million views. There are a dizzying number of Bible apps. Among the most popular is YouVersion, for cellphone readers, which in July reached 100 million downloads. The company that produced it, lifechurch.tv, describes its products as “digital missions.” The app’s church services and worship videos are easily accessed as well. CDs of the Bible are far easier than books to get into countries (read Muslim theocracies) where Bibles are not allowed. Let’s not forget marketing to children—the most abundant font of unclaimed souls—with The Super Heroes Bible, ages six to nine, which alleges its characters “are not make-believe. These super heroes really lived.”

I guess we’re supposed to find all this appalling. (Children’s Bibles have been around since the 18th century, by the way.)

Larson goes on to quote from the website of an organization called Faith Comes By Hearing — he didn't give the link, but I looked it up:

Jesus taught with stories, parables, and dialogue. Then, as now, most people in the world communicated orally, processing and remembering information only when it’s clothed in narratives, poems, songs, and similar formats. Modern research confirms that people who don’t read or write well (or at all) learn the way Jesus taught....

The 43 percent of adult Americans who test at or below basic literacy levels are clearly oral communicators. Surprisingly, so are increasing numbers of readers who would simply rather not use a literate communication style. These ‘secondary oral learners’ prefer instead to receive information through film, TV, and other electronic media. The bottom line: neither group will learn the life-giving truths of the Bible by reading it.

Larson:

I realize this is PR flap but these claims — readers who would simply rather not use a literate communication style and neither group will learn…by reading — feel ominous. They are asserting, rightly so, that the preferred mode of learning is moving from literate standards to “oral communication.” This may pander to an a-literate religious base: people who can read but don’t. A cynic might conclude that oral/visual learners are more susceptible to being swindled (or saved) than literate learners. But it hardly matters. Those who create and adapt Biblical fare for mass audiences, now in thrall to the megapolies in entertainment, media, and publishing, are uninterested in literacy and its putative civilizing benefits.

Earlier we were in the realm of what “seems”; now it is how something “feels” that matters. We are in such amorphous territory here that Larson will not even say flatly that literacy yields benefits: those benefits, however widely shared, remain merely “putative.”

Can the Bible, in its new multimedia forms, still feel sacred? Do religions need sacred texts to underpin their truth claims? What happens when the Bible is another app, another PowerPoint presentation, another Showtime movie, with Brad Pitt as the Man of Galilee? Doesn’t digitization erode the slowly burnished patina from the sacred object?

Well, does it? If so, how?

And yet, if the book as book fades, and the telling remains, will it remain “holy?” I’m not sure.

Was it holy before it was in a book? (I’m not sure what Larson would say a “book” is.) Let’s move ahead to the peroration:

Evangelicals have used social media for centuries — if by social media we mean the technological tools of a culture that ring the young around a fire to hear a theocratic worldview. The read-aloud text of the Bible is the foot in the door. Listen to others intone it and you’ll hear the truth. Internalizing it does little. The Bible is a book that has to be shared to be believed. That sharing occurs in the spoken realm — where authors are socialized — a realm acoustic, dramatic, non-reflective, in the moment. (A lot like television.) Any text that will remain true requires social authors — proselytizing showmen, unembarrassed testifiers, indefatigable repeaters, digitizing replicants.

So for much of this essay the “holiness” — in Larson’s special sense of the word — of the Bible was sustained and guaranteed by its being in book form: “If the book as book fades ... will it remain ‘holy’?” And: “It seems that if it’s seldom read, and not being handled as a book, it’s less likely to be believed.” But here, at the end of the essay, the problem seems to be that if the Bible re-enters the oral realm it will become more powerful, more sacred, more instrumental to a “theocratic worldview,” because it would belong to “a realm acoustic, dramatic, non-reflective, in the moment.” People would be hearing and responding without thinking. (Note, by the way, that this would apply only to audio or audio-visual versions, not to digitized text.)

So maybe earlier the “It seems” described other people’s views, not Larson’s? And maybe, then, his actual position is something like this: O for the good old days when the Bible was just a book and therefore of little power; now that it’s achieving new forms of digital life, a kind of scriptural Singularity, its power will be radically amplified and we’ll all eventually be ruled by tyrannical evangelicals.

Or maybe not. I can’t figure it out. I don’t know when I’ve read a more incoherent essay. If any of y’all can make more sense of Larson’s writing than I can, please let me know in the comments.

Friday, November 22, 2013

on philosophical religion

I haven't read the book Peter Gordon reviews here, but the conceptual frame of the review interests me. (This is sort of off-topic for this blog, by the way.)

Here's a key passage:

The grand tradition of philosophical religion thus aims at a symphônia of religion and philosophy. This term has a purely technical meaning, of course, but its cognate use in music captures the basic thought that we can harmonize the two voices. The guiding thought of Fraenkel’s study is that what may strike us as an unforgivably elitist distinction, between philosophers and non-philosophers, actually went along with a universalistic acknowledgment that diverse religious traditions share a common core. For it is precisely the social distinction between philosophers and non-philosophers that permitted philosophers to claim that, despite variations in literal content, religion bears an invariant allegorical truth—the insight that God and Reason are one. Plato, for example, believed that the laws of Crete and the laws of Sparta were essentially the same: variations in appearances could be explained by the philosopher as due to the influence of historical and cultural context. It was therefore possible for Plato, in Fraenkel’s assessment, to endorse both contextual pluralism (about variations in religious representations and practices) and universalism (about the inner meaning of religion itself).

Gordon's review is essentially a detailed précis of Carlos Fraenkel's new history of philosophical religions, and Gordon seems to share the view quite common among intellectuals that philosophical religion is a big improvement over ordinary religion because, so the argument goes, by placing religion within the sphere of civilized intellectual disputation you take the violent edge off the thing.

Following Fraenkel, Gordon speculates on why philosophical religion has declined — he simply assumes that it has, though perhaps Fraenkel provides evidence to support this claim — and what is noteworthy to me about those speculations is that they, like philosophical religion itself, operate strictly within the intellectual realm. It seems not to occur to Gordon that philosophical religion's fortunes might alter for reasons that are not strictly intellectual themselves.

Perhaps philosophical religion has declined (if it has) and has never been very popular (which is certainly true) because religion is not simply a matter of holding to a set of metaphysical propositions. Now, metaphysical propositions are intrinsic to most of what we call religions, but history would suggest that those propositions cannot be sustained without a strong framework of ritual and practice. This is something that all anthropologists and sociologists of religion know, and it seems like something that anyone writing about the fate of religion in a given society ought also to be aware of. Philosophical religion has never existed and never will exist in a vacuum: it always finds its place within a much larger set of beliefs, acts, and habits. A purely philosophical religion has never been sustainable because a purely philosophical religion isn't a religion at all.

Thursday, November 21, 2013

but then there's reading on an iPad

So that's why I don't like writing with my iPad. But reading — that's a different story.

Last night I picked up Robert Bringhurst's classic book on typography, The Elements of Typographic Style, and started reading. Or rather, I tried: after just a couple of minutes I realized I was struggling to see the text clearly. I moved the book a little farther away from my face; I moved it a little closer; I got off the sofa and sat in a chair where the light was better, which helped a bit. I could see the main text with little effort now, but the marginal notes, which are set in smaller type and are also quite interesting and informative (and therefore not the kind of thing I want to ignore), I couldn't read at all. I traded out the glasses I was wearing for a different pair which seem to be a little better for reading, and while that helped, again, a bit, it didn't help enough for me to be able to focus on what I was reading. I took off my glasses — I am very nearsighted — and while that enabled me to see the text perfectly clearly, it also meant that my eyes had to travel so far across the page that they quickly grew tired of the effort.

As dearly as I love the art and craft, the appearance and feel, of the codex, my future as a reader clearly lies with digital forms of text. All I can do is hope that the often painfully bad typography of digital texts will get better in the future, and that maybe, just maybe, we will see e-ink screens — i.e., non-backlit ones, with less glare and in devices devoted largely if not exclusively to reading — with the sharpness I now enjoy on my iPad's retina display. On my iPad I can read in whatever light I happen to have available, even if that means no light at all, and with whatever glasses I happen to be wearing.

But books that don't exist in digital form — whether, as in the case of Bringhurst’s typographical treatise, for obvious and necessary reasons or just because of the luck of the draw — I guess I just won't be reading. Which makes me sad.

By the way, I wrote this post on my iPad and it was an absolute pain in the ass. So why did I do it? Because it was there.

Wednesday, November 20, 2013

why writing on the iPad remains a lousy experience

Go to a search engine and type in the words “iPad consumption creation.” You’ll be introduced to a debate that has been going on since the first iPad appeared in 2010: is the iPad — and by extension are tablets more generally — built just for consuming media, or is it a device one can make on as well?

If we’re going to get serious about this, we need to ask, “Creation of what?” Maybe tablets are better for some kinds of things than others. Not long after the iPad came out, videos like this one started showing up on YouTube to demonstrate how you can make real music — well, sort of real — with GarageBand; and the talented folks at 53 have created a tumblelog to showcase the artwork people have made with their justly-celebrated app Paper.

But what about writing? Well, there are advocates for the iPad as a writing environment, most eloquent of them being Federico Viticci, who makes a great case for using the iPad with the writing app Editorial. And I too think Editorial is a genuinely innovative, brilliantly designed app that offers the best writing experience you can get on a tablet.

However: I hate writing on my iPad. Why? Let me count the ways.

First, and most fundamentally, some of the most basic and frequently-used text-manipulation actions remain very difficult to perform on iOS — indeed, have not discernibly improved since the first iPhone appeared in 2007. Trying to select just the text I need to select is often enough to make the sweat break out on my brow: No, I wanted ALL of that word, not just part of it — oh crap, the damned thing has decided that I want the whole paragraph! I just want the last four sentences! But it won't let me choose the last four sentences! Okay, well, I’ll have to use the delete key to X out the unwanted stuff — once I get it pasted. So let me try to get my finger in the exact place where I need — no, damn it, not there! My finger must have slipped at the last instant! Okay, where’s the undo? How do I undo that? CRAP.

It’s like that all the time.

But I can already hear you saying, “Oh, you foolish boy, why aren’t you using a physical keyboard?” Yeah, well, I do use a physical keyboard, but the keyboard shortcuts and arrow keys that are so fundamental to my text-manipulations on a laptop or desktop computer work inconsistently or not at all on the iPad, so it’s still not possible to avoid altogether the finger-accuracy issues I describe above. But a keyboard helps in some ways, for sure. Now, what kind of keyboard should I get?

I’ve used one of these keyboard/cover hybrids. The good: highly portable. The bad: somewhat flimsy, and difficult to balance on my lap, which is a problem if I want to continue my long-standing practice of writing while seated in an easy chair. (Basically, I need to add a lap-desk to make it work smoothly.) And then the keyboard is smaller than standard, which leads to a lot of mistyping. All in all, a pretty frustrating experience.

So let’s try Apple’s Bluetooth keyboard — a lovely piece of engineering, I must admit, and a pleasure to type on ... once I find a way to stand up my iPad so I can see what I’m typing, that is. So I can buy a stand — but then easy-chair typing is seriously compromised, unless I get something like this workstation which gives me a somewhat shaky platform to type on and creates a situation in which I am regularly assembling and disassembling my typing environment — in which case the portability of the iPad, one of its key features, is significantly diminished.

If this post weren’t too long already, I’d go on another rant about the severe limits of iOS application switching — but you get the point. I’m typing this post on my MacBook Air, and it’s a real pleasure. It’s lightweight and fits in my lap nicely. It was trivially easy for me to insert all those links into this post, and it’ll also be trivially easy for me to upload what I've written to Blogger. When I made mistakes in typing it was simple to correct them. Unless I were compelled by economic or other necessity to use an iPad to write, why would I ever do so?

Monday, November 18, 2013

books by design

I’ve been really taken by the images in this blog post on the recent resurrection of the cover-design style of the old Pelican Specials. What strikes me, as I expect it will strike you when you click through, is that the attention given to reproducing the old cover art is, shall we say, not quite matched by the attention to typography.

The old Penguin/Pelican cover styles have been fetish objects for some time now. I am especially fond of M. S. Corley’s redesigns of the Harry Potter covers:

M. S. Corley’s Penguin-style Harry Potter covers

And this application of that classic style to bands and TV shows is pretty cool:



But when you’re making a cover for an actual book it would be nice if the fidelity to that lovely tradition were carried through to the text itself. The dissonance between the quality of the cover of that Jacqueline Kent book and the unimaginative flatness of its text is troubling.

Now, this is not to say that we want books whose typographic style reproduces too slavishly the aesthetic of another era: when that happens the result is, as my friend Edward Mendelson has noted, “typographic kitsch.” (American Typewriter, anyone?) But in this post-TeX world there’s really no justification for the amount of typographic blandness or incompetence that we see today.

Not incidentally, this is one reason it has been such a pleasure for me to work — three times now — with Princeton University Press. Their attention to all the details of design is really admirable. I especially commend the books in the Lives of the Great Religious Books series to which I have contributed: they are lovely to look at, delightful to read, and a real pleasure just to hold. You can see a few of their admirable features just by clicking that link, but you really need to take one in your hand to get the full effect.

Carr on automation

If you haven't done so, you should read Nick Carr’s new essay in the Atlantic on the costs of automation. I’ve been mulling it over and am not sure quite what I think.

After describing two air crashes that happened in large part because pilots accustomed to automated flying were unprepared to take proper control of their planes during emergencies, Carr comes to his key point:

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.

And late in the essay he writes,

In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.

Carr isn’t arguing here that the automating of tasks is always, or even usually, bad, but rather that the default assumption of engineers — and then, by extension, most of the rest of us — is that when we can automate we should automate, in order to eliminate that pesky thing called “human error.”

Carr’s argument for reclaiming a larger sphere of action for ourselves, for taking back some of the responsibilities we have offloaded to machines, seems to be twofold:

1) It’s safer. If we continue to teach people to do the work that we typically delegate to machines, and do what we can to keep those people in practice, then when the machines go wrong we’ll have a pretty reliable fail-safe mechanism: us.

2) It contributes to human flourishing. When we understand and can work within our physical environments, we have better lives. Especially in his account of Inuit communities that have abandoned traditional knowledge of their geographical surroundings in favor of GPS devices, Carr seems to be sketching out — he can’t do more in an essay of this length — an account of the deep value of “knowledge about reality” that Albert Borgmann develops at length in his great book Holding on to Reality.

But I could imagine people making some not-obviously-wrong counterarguments — for instance, that the best way to ensure safety, especially in potentially highly dangerous situations like air travel, is not to keep human beings in training but rather to improve our machines. Maybe the problem in that first anecdote Carr tells is setting up the software so that in certain kinds of situations responsibility is kicked back to human pilots; maybe machines are just better at flying planes than people are, and our focus should be on making them better still. It’s a matter of properly calculating risks and rewards.

Carr’s second point seems to me more compelling but also more complicated. Consider this: if the Inuit lose something when they use GPS instead of traditional and highly specific knowledge of their environment, what would I lose if I had a self-driving car take me to work instead of driving myself? I’ve just moved to Waco, Texas, and I’m still trying to figure out the best route to take to work each day. In trying out different routes, I’m learning a good bit about the town, which is nice — but what if I had a Google self-driving car and could just tell it the address and let it decide how to get there (perhaps varying its own route based on traffic information)? Would I learn less about my environment? Maybe I would learn more, if instead of answering email on the way to work I looked out the window and paid attention to the neighborhoods I pass through. (Of course, in that case I would learn still more by riding a bike or walking.) Or what if I spent the whole trip in contemplative prayer, and that helped me to be a better teacher and colleague in the day ahead? I would be pursuing a very different kind of flourishing than that which comes from knowing my physical environment, but I could make a pretty strong case for its value.

I guess what I’m saying is this: I don't know how to evaluate the loss of “knowledge about reality” that comes from automation unless I also know what I am going to be doing with the freedom that automation grants me. This is the primary reason why I’m still mulling over Carr’s essay. In any case, it’s very much worth reading.

Friday, November 15, 2013

to live and die in the Anthropocene

I'm not quite sure what to do with this essay by Roy Scranton on learning to live — or rather, learning to die — in the Anthropocene. The heart of the essay may be found in its concluding paragraphs:

The biggest problem climate change poses isn’t how the Department of Defense should plan for resource wars, or how we should put up sea walls to protect Alphabet City, or when we should evacuate Hoboken. It won’t be addressed by buying a Prius, signing a treaty, or turning off the air-conditioning. The biggest problem we face is a philosophical one: understanding that this civilization is already dead. The sooner we confront this problem, and the sooner we realize there’s nothing we can do to save ourselves, the sooner we can get down to the hard work of adapting, with mortal humility, to our new reality.

The choice is a clear one. We can continue acting as if tomorrow will be just like yesterday, growing less and less prepared for each new disaster as it comes, and more and more desperately invested in a life we can’t sustain. Or we can learn to see each day as the death of what came before, freeing ourselves to deal with whatever problems the present offers without attachment or fear.

If we want to learn to live in the Anthropocene, we must first learn how to die.

Scranton, who is a doctoral candidate in English at Princeton, is here making an innovative argument for the value of the humanities: humanistic learning, or rather the deep reflection historically associated with it, is all the more necessary now as we are forced to grapple with the inevitability of cultural collapse. Much in me resonates with this argument, but I think the way Scranton develops it is deeply problematic.

A good deal of the essay links the coming civilizational collapse with Scranton's own experiences as a soldier in Iraq. But it seems to me that that is precisely the problem: Scranton assumes that the death of a civilization is effectively the same as the death of a human being, that the two deaths can be readily and straightforwardly analogized. Just as I had to learn to die, so too must our culture. But no.

The problem lies in the necessarily loose, metaphorical character of the claim that a civilization is “dead.” Scranton writes,

Now, when I look into our future — into the Anthropocene — I see water rising up to wash out lower Manhattan. I see food riots, hurricanes, and climate refugees. I see 82nd Airborne soldiers shooting looters. I see grid failure, wrecked harbors, Fukushima waste, and plagues. I see Baghdad. I see the Rockaways. I see a strange, precarious world.

Well, maybe. But maybe the end of our civilization, even should it be as certain as Scranton believes, won't look like this; maybe it will be a long slow economic and social decline in which massive violence is evaded but slow inexorable rot cannot be. (Scranton is rather too assured in the detail of his prophecies.) But in any case, whatever happens to our civilization will not be “death” in anything like the same sense that a soldier dies on the battlefield. When that soldier dies, his heart stops, his brain circuitry ceases to function, his story in this world is over. But even this catastrophically afflicted culture described by Scranton is still in some sense alive, still functioning, in however compromised a way. And this will be the case as long as human beings remain on the earth: they will have some kind of social order, which will always be in need of healing, restoration, growth in flourishing.

Which means, I think, that the absolutely necessary lessons in how to die that every one of us should learn — because our lives are really no more secure than a soldier's, though for our peace of mind we pretend otherwise — are not really the ones needed in order to deal with the coming of the Anthropocene. Scranton's dismissal of practical considerations involving the social and economic order in favor of philosophical reflection might even be a counsel of despair; he does seem, to me at least, to be saying that nothing in the material order can possibly be rescued, so the only thing left to do is reconcile ourselves to death. I believe that anthropogenic global warming is happening, and I believe that its consequences for many people will be severe, but I do not accept that nothing meaningful can be done to mitigate those consequences. In short, I do not believe, and do not think I am permitted to believe, that our civilization is already dead.

But for me and for you, the necessity of facing death remains, and indeed is not any different now and for us than it was in the past or for any of our ancestors. For the individual facing death, the Anthropocene changes nothing. This was the point of C.S. Lewis's great sermon “Learning in Wartime”:

What does war do to death? It certainly does not make it more frequent; 100 per cent of us die, and the percentage cannot be increased. It puts several deaths earlier; but I hardly suppose that that is what we fear. Certainly when the moment comes, it will make little difference how many years we have behind us. Does it increase our chance of a painful death? I doubt it. As far as I can find out, what we call natural death is usually preceded by suffering; and a battlefield is one of the very few places where one has a reasonable prospect of dying with no pain at all. Does it decrease our chances of dying at peace with God? I cannot believe it. If active service does not persuade a man to prepare for death, what conceivable concatenation of circumstance would? Yet war does do something to death. It forces us to remember it. The only reason why the cancer at sixty or the paralysis at seventy-five do not bother us is that we forget them.

Just as wars must sometimes be fought, so the consequences of the Anthropocene must be confronted. Or so I believe. But whether or not I'm right about that, I know this: Death is coming for us all. And if Montaigne is right that “to philosophize is to learn to die,” then the humanities, in so far as they help us to be genuinely philosophical, are no more relevant in the Anthropocene than they ever have been — nor any less so.

the supernova (concluded)

See part one here

Thirty years after that supernova made its remarkable appearance in Earth’s skies, the Danish astronomer Tycho Brahe would recall his first sight of it:

Amazed, and as if astonished and stupefied, I stood still with my eyes fixed intently upon it. When I had satisfied myself that no star of that kind had ever shone forth before, I was led into such perplexity by the unbelievability of the thing that I began to doubt my own eyes.

Like John Dee and Francis Bacon in England, Tycho knew that, according to the Ptolemaic system that had been firmly in place for hundreds of years, the real problem was “that [the supernova] was in the celestial, not the Elementary Region” — that is, that it was not within the cycles of the planets, which were known to move and change (the word “planet” means “wanderer”) but in the more distant realm of the so-called “fixed stars,” the supposedly unchanging backdrop to the celestial machinery. Whether or not the exploding star was a God-sent sign to King Charles of France, it was a powerful blow to the Ptolemaic system.

In a lucid essay on this event, the noted astronomer Owen Gingerich writes that “Tycho had, first of all, the imagination to formulate an interesting research strategy, secondly, the ingenuity to devise the instruments to carry out the research, and thirdly, the ability to draw significant conclusions from his results.” John Dee may have understood the general import of the event but only Tycho went about exploring it in a serious way. Gingerich is interested primarily in the technical challenges that Tycho faced, and triumphantly met, but he notes in passing that the Cassiopeia nova “was by no means the end of Aristotelian cosmology, but it was the beginning of the end.”

This is perhaps an understatement. C. S. Lewis comments in The Discarded Image that “the great Nova in Cassiopeia of November 1572 was a most important event for the history of thought.” Lewis points to F. R. Johnson’s 1937 book Astronomical Thought in Renaissance England: A Study of the English Scientific Writing from 1500 to 1645 — which is still worth reading, by the way — for evidence that the community of natural philosophers in England at least, and presumably elsewhere, was deeply shaken by the nova’s appearance.

It’s a really fascinating moment in intellectual history. The Ptolemaic theory was already being challenged and would in any case have eventually fallen, but this single event did more rapid and serious harm to it than any articulated theory could have. A whole system of belief was effectively brought to its knees by a few incontrovertible astronomical observations.

Thursday, November 14, 2013

the supernova (1)

a superbright supernova


Historians have long debated the role that King Charles IX played in the great and terrible St. Bartholomew’s Day Massacre of French Protestants in 1572. It has been common to give the primary responsibility to his mother, Catherine de Medici, and to see the King as meekly complying with her wishes — but one old tradition says that Charles said “Kill them all,” thus warranting utter extermination of the Huguenots.

It was widely believed at the time that Charles had ordered the massacre. Theodore Beza, John Calvin’s successor in Geneva, who would end up taking in many refugees from the persecution, believed that Charles had openly confessed to this role — or so says Francis Bacon in his journals.

According to Bacon, Beza believed that God had sent a sign of judgment upon Charles: a stella nova, a surprising new star that appeared in the constellation of Cassiopeia soon after the massacre — a star so bright that it could even sometimes be seen in the daytime.

Theodor Beza wittily applied it to that star which shone at the birth of Christ, and to the murdering of the infants under Herod, and warned Charles the Ninth, King of France, who had confessed him self to be the author of the Massacre of Paris, to beware, in this verse: Tu vero Herodis sanguinolente, time — “And look thou bloody Herod to thy self”; and certainly he was not altogether deceived in his belief, for the fifth month after the vanishing of this star, the said Charles, after long and grievous pains, died of excessive bleeding.

We do not know whether Charles took Beza’s warning seriously — he may have been too busy dying — but the star was not visible only in France, and at least one other great prince of the age was concerned about what it might mean. England’s Queen Elizabeth I called in her great advisor on matters scientific, astronomical, astrological, and occult, John Dee, and Dee — again according to Bacon — was able to demonstrate “that it was in the celestial, not the Elementary Region; and they are of opinion that it vanished by little and little in ascending. Certainly after the eighth month all men perceived it to grow less and less.”

Dee’s discovery was more important than we can readily perceive now.

To be continued...

Wednesday, November 13, 2013

the motives for revision

draft manuscript of T. S. Eliot’s The Waste Land

I’ve been reading a fascinating new book by a young scholar named Hannah Sullivan on The Work of Revision: it’s an account of how modernist poets and novelists incorporated revision into their writerly work. Sullivan notes that for the Romantics spontaneity was essential to true art: as Keats wrote, “If Poetry comes not as naturally as leaves to a tree it had better not come at all.” But today — Sullivan illustrates this point with copious quotation — writers go on and on about how essential revising is, how constantly they are at it, how good writing cannot be achieved without a steadfast commitment to “revise, revise, revise.” How did this shift happen?

Sullivan argues that the transformation occurred during the modernist era:

The aims of modernist revision might have been largely aesthetic – a feeling toward new forms and styles – but the practice was significantly enabled by technological improvements in the publishing process, including cheaper typesetting and storing and the invention of the personal typewriter, and by a culture of patronage that allowed for multiple sendings of proof and a relative lack of concern for economic profit.... On the one hand it became much easier to mark out and transmit the desire for revision: writers who owned typewriters could make and circulate neat copies of their work quickly in carbon copy, and publishers using typesetting machines were more willing to issue proofs of entire novels. On the other hand, revision still had a substantial cost. Unlike in digital environments, where a new file can be uploaded to Amazon for free, pulping a first edition to make way for a second or rewriting a novel in proof required a significant commitment of time and money. As a result, writers found themselves inhabiting a situation where revision was both tantalizingly possible and off-puttingly expensive.

So what may have begun as a need to (in Ezra Pound’s famous formulation) “Make it New,” and therefore to explore and innovate in both form and content, inevitably making mistakes along the way, was encouraged by changing technologies of print. And today, when the external costs of revision have been so greatly reduced, of course revision is especially prized.

It’s worth noting that this is not the first time such a development has occurred. Five hundred years ago the great humanist scholar Erasmus actually moved from northern Europe to Venice so he could work closely with his publisher, the great Aldus Manutius. And seventy or so years later, Michel de Montaigne, the inventor of the modern essay, took a copy of the first edition of his Essays and started making corrections and additions that became the foundation of a second edition, and then a third. The process stopped only with his death. Then, too, technological and generic innovation led to a culture of revision.

Michel de Montaigne's revisions and additions to his own book

Friday, November 8, 2013

in which I try to figure out what Lee Siegel is saying about fiction

Siegel:

It’s safe to say that, like life itself, fiction’s properties are countless and unquantifiable.

Well … okay. Can’t really disagree with that, though I’m not sure what it means.

If art is made ex nihilo — out of nothing —

But art isn’t made out of nothing, is it? It’s made out of pre-existing ideas, experiences, and materials. A painting is made from what the painter has seen and thought about and imagined. Also from paint and canvas. Even poems are made out of language, which pre-exists the poem and indeed the poet (a point to which I will return).

then reading is done in nihilo, or into nothing.

Okay, I have absolutely no idea what that means. I’m not sure what the preposition “into” does when you attach it to “reading,” but it seems to me that reading is always interactive with the text being read, probably with the (imagined) author, and often with other readers, including teachers, students, friends, online discussion groups. If I were forced to use Siegel’s strange syntax, I’d have to say that reading is done into many things.

Fiction unfolds through your imagination in interconnected layers of meaning that lift the heavy weight of unyielding facts from your shoulders.

Does it? Are all facts burdensome? Do no facts enter into fiction? Doesn’t the imagined, in fact, interact in powerful ways with the already-factual?

It speaks its own private language of endless nuance and inflection.

On the contrary, it speaks a public language, which others (we) can read and respond to. That’s really essential, isn’t it?

A tale is a reassuringly mortalized, if you will, piece of the oceanic infinity out of which we came, and back into which we will go.

“Mortalized”? Was it immortal before it got mortalized? And what precisely is the reassurance intrinsic to the mortalization of the previously immortal, or whatever it previously was? What did that tale look like when it was a “piece of oceanic infinity”? Also, does anything you’re saying here make any sense whatsoever?

That is freedom, and that is joy —

Wait, what is freedom and joy?

and then it is back to the quotidian challenge, to the daily grind, and to the necessity of attaching a specific meaning to what people are thinking and feeling, and to the urgency of trying, for the sake of love or money, to profit from it.

“Attaching a specific meaning” sounds pretty good right now.

Thursday, November 7, 2013

On Camus

Albert Camus is much in the literary news right now, as the world commemorates the 100th anniversary of his birth. Here is an essay on him that I published in Books and Culture in 1996, with a few small edits and links.


When I think of Albert Camus, two photographic images come to mind. The first is of that face, both thoughtful and tough, a cigarette drooping from the lips, the collar of a trench coat showing. The second is of the crushed automobile in which he died early in January 1960. These images are not important just to me; they may be said to define the dominant impression many readers had (and perhaps still have) of Camus. If Hollywood had invented an existentialist writer, the homely, scholarly Jean-Paul Sartre, with his squat body and thick spectacles, would not have made the cut. No, it would be Camus: he looked like Humphrey Bogart and died like James Dean.

What is ironic about all this is the simple fact that Camus came closest to existentialism at the beginning of his career in his first published novel, The Stranger, and in his first book of philosophy, The Myth of Sisyphus, both of which were published in 1942 -- and Camus even claimed that the latter book was written as a conscious repudiation of existentialism. By the end of his life he had become completely alienated not just from existentialism as a philosophy but also from the whole French intellectual culture within which existentialism was then the dominant force. Perhaps if Camus had remained in lockstep with Sartre and Simone de Beauvoir he would be more popular today. Instead, he remains perhaps the most neglected major author of the second half of this century -- one of the few, along with W. H. Auden, Czeslaw Milosz, and a handful of others, who represent the nearly forgotten virtues of wisdom and courage.

Whatever we Christians aver about God’s sovereignty over our allotted span, like everyone else we regret it when it seems to us that lives are cut short, and we imagine what their possessors might have done with a few more years in which to work. It is impossible not to speculate about what Keats might have achieved had he been given more than a decade in which to write; it is hard to believe that Mozart would not have profited by living at least into his forties, or the sculptor Henri Gaudier-Brzeska by surviving the Great War and making it at least to thirty.

Camus died at 46, and the recent publication of The First Man, the novel he was working on when he died, suggests that he would have made very good use of another five years. The First Man, as we have it, is but a draft fragment, a direct and unedited transcription from Camus’s final notebook -- a notebook found, inside a briefcase, in the car in which he died. In the new Knopf edition it comes to over three hundred pages (albeit rather small ones), but the appended notes and outlines make it clear that this constitutes perhaps only a third of the book as Camus planned it. Beyond question, it would have been the most ambitious project of Camus’s life. One could even say that it would have been the first product of his full maturity as a writer and thinker, for, though he had won the Nobel Prize for literature in 1957 (when he was only 43), his political, philosophical, and literary vision was just beginning to achieve something like coherence. It is impossible for anyone who appreciates Camus’s work to read The First Man without a sharp pang of regret at what never came to be.

Though Sartre and Camus are often linked in the public mind, they are dramatically different figures. There was a brief period when they seemed on the verge of forming a real friendship: each had reviewed the other’s work positively, and when they met (in 1943), they discovered a mutual interest in the theater. Indeed, Sartre asked Camus to direct and act in a play he had just written, one that would prove to be his most famous: No Exit. Throughout the war, the two writers found themselves involved in the common cause of the Resistance. But their temperamental differences made a lasting friendship impossible. Sartre distrusted, and perhaps envied, Camus’s toughness and flamboyance, what one might call his Bogartisme; Camus distrusted, and perhaps envied, Sartre’s analytical and philosophical mind.

The breaking point in their tenuous relationship occurred in 1952, after Les Temps Modernes, the intellectual journal largely run by Sartre, published a hostile review by Francis Jeanson of Camus’s recent meditation on political philosophy, The Rebel. Camus directed his reply to Sartre (who he thought should at least have done the criticism himself): “I’m getting tired of seeing myself, and particularly seeing old militants who have known all the fights of their times, endlessly chastised by censors who have always tackled history from their armchairs.” Sartre retorted by saying that Camus was arrogant -- “Tell me, Camus, what is the mystery that prevents people from discussing your books without robbing mankind of its reasons to live?” -- and philosophically incompetent: “But I don’t dare advise you to consult Being and Nothingness. Reading it would seem needlessly arduous to you: you detest the difficulties of thought.”

Annie Cohen-Solal, Sartre’s biographer, is right to see ideological differences at the roots of this dispute: Sartre’s attempt to soft-pedal or even evade recognizing the evils of the Stalinist Soviet Union in hopes of sustaining the socialist vision, set against Camus’s belief that Soviet Communism and fascism were morally equivalent. On this view, Sartre’s philosophical condemnation of The Rebel masks his anger at Camus’s total repudiation of violence as a means to achieve any political cause, however noble. As Cohen-Solal admits, Sartre’s tendency was to be “pragmatic” on such issues.

Pragmatic about means, perhaps, but absolutist about causes. Sartre believed, for instance, that the French in Algeria should all get out; if they did not, Algerian terrorists were justified in killing them. It was this issue -- not the disagreement over Stalinism, about which Sartre eventually admitted he had been wrong (in 1956, after the Soviet invasion of Hungary) -- that ensured lasting enmity between Sartre and Camus. And it is this issue that proves central to Camus’s plans for The First Man.

Politically speaking, one could say that Sartre never overcame the Manichaean dichotomies that were arguably appropriate during the war against the Nazis. That the Soviets had stood against fascism placed them firmly on the side of the angels. (Best not to reflect, at least publicly, on the uncomfortable fact that Stalin had signed a nonaggression pact with Hitler, and that Hitler was the one who broke it.) For this reason, Sartre could forgive, or at least avert his eyes from, the purges of the 1930s and the continuing hell of the Gulag.

In Sartre’s political world there were only oppressors and oppressed: fascism stood for the former, communism for the latter. Likewise, in Algeria, since the native Algerians were by definition the oppressed, they were incapable of sin; conversely, the pieds noirs, the French colonists, were reprobate and irredeemable. Thus Sartre endorsed the decision of the Algerian FLN (Front de Liberation Nationale) to kill any and all French men, women, and children in Algeria whenever possible, a position he was still taking in 1961 when he wrote a famous and lengthy introduction to The Wretched of the Earth, the major work by one of this century’s greatest theorists of terrorism, Frantz Fanon.

Camus, on the other hand, was himself a pied noir; his family’s roots in Algeria went back a century and a half. Members of his family, including his mother, still lived in Algeria and were endangered daily by the FLN’s random shootings and bombings. Yet Camus was not, nor had he ever been, indifferent to the abuses the French had inflicted on the Arabs of Algeria. Indeed, in the 1930s, at the beginning of his career as a writer, Camus had striven ceaselessly to call attention to these abuses, but he was generally ignored -- by the French Left no less than the Right.

So he was not pleased to have a difficult and morally complex political situation reduced to an opportunity for French intellectuals to strike noble poses: to those who would “point to the French in Algeria as scapegoats (‘Go ahead and die; that’s what we deserve!’),” Camus retorted, “it seems to me revolting to beat one’s mea culpa, as our judge-penitents do, on someone else’s breast.” Those who are really so guilt-stricken at the French presence in Algeria should “offer up themselves in expiation.”

Camus boldly affirmed that his family, “being poor and free of hatred” -- and Camus really was raised in abject poverty -- “never exploited or oppressed anyone. But three quarters of the French in Algeria resemble them and, if only they are provided reasons rather than insults, will be ready to admit the necessity of a juster and freer order.” It should, then, be possible to give the proper rights and freedoms to Algerian Arabs without condemning and destroying the pieds noirs indiscriminately, or forcing them out of the only country they had ever known.

But such subtleties were lost on almost everyone involved in this conflict. When Camus received the Nobel Prize in 1957 and gave a press conference in Stockholm, he was bitterly condemned by an Arab student for failing to endorse the FLN. His reply was simple, direct, and forceful: “I have always condemned the use of terror. I must also condemn a terror which is practiced blindly on the Algiers streets and which may any day strike down my mother or my family. I believe in justice but I will defend my mother before justice.”

Michael Walzer is almost unique among Camus’s commentators in seeing the significance of this stand: he identifies Camus as an example of the “connected social critic,” that is, the critic who does not stand above the political fray and judge with Olympian disinterest, objectivity, and abstraction. That was the way of Sartre: absolutist, universalizing, committed to a single overriding binary opposition, that between the oppressors and the oppressed. But for Camus, the universal could not so easily displace the local; commitment to “Justice” in the abstract could not simply trump his love for and responsibility to his family. “I believe in justice but I will defend my mother before justice.”

Walzer points out, with regret, that Camus ceased to write about Algeria after 1958: “the silence of the connected social critic is a grim sign -- a sign of defeat, a sign of endings. Though he may not be wrong to be silent, we long to hear his voice.” But the draft of The First Man suggests that Camus was not prepared to remain silent; instead, he was seeking a new way to speak about a complex social reality with which the common political discourse of the French intelligentsia could not cope. A fragmentary note makes this clear:

The two Algerian nationalisms. Algeria 39 and 54 (rebellion). What becomes of French values in an Algerian sensibility, that of the first man. The account of the two generations explains the present tragedy.

Jacques Cormery, Camus’s alter ego, is “the first man,” a kind of Adam in that he represents a new breed of human being: a pied noir, yes, a person of “French values in an Algerian sensibility,” but one who has been forced to acknowledge the claims of the native Algerians to equality as persons and under the law. In this sense he must support the nationalism of the 1930s, which sought just that, equality; but what can he say to the later nationalism of Ahmed Ben Bella, a leader of the FLN, whose slogan was “Algeria for the Algerians” and who was ready to kill any pied noir, however supportive of Algerian independence, who would not leave the country? And what can he say to François Mitterrand, then France’s Interior Minister, who in 1954 said that with the Algerian rebels “the only possible negotiation is war”? Ben Bella and Mitterrand, for all their mutual hatred, share a conception of the political sphere that cannot comprehend the moral imperative to love and defend one’s mother.

When Camus died, Sartre responded with a handsome eulogy, which reveals that, despite all their enmity, he understood the fundamental character of Camus’s work: “Camus represented in this century, and against History, the present heir of that long line of moralists whose works perhaps constitute what is most original in French letters. His stubborn humanism, narrow and pure, austere and sensual, waged a dubious battle against events of these times. . . . He reaffirmed the existence of moral fact . . . against the Machiavellians.”

I cannot allow that last comment to pass without noting that Sartre was one of the Machiavellians against whom Camus contended. But it is indeed the moralistic tradition, the tradition of Montaigne and La Rochefoucauld, to which Camus belonged, and it is worth noting that this tradition has always had an ambivalent relationship to Christianity.

In a lecture called “The Unbeliever and Christians,” which Camus gave in 1948 at a Dominican monastery in France, he spoke in terms that eerily prefigure the Algerian crisis of the next decade: “Between the forces of terror and the forces of dialogue, a great unequal battle has begun. . . . The program for the future is either a permanent dialogue or the solemn and significant putting to death of any who have experienced dialogue.” (The primary targets of FLN terrorism, at least at first, were neither pieds noirs nor French soldiers but rather Arab and Muslim moderates, that is, would-be compromisers and dialoguers.)

And the question that Camus puts to his Christian audience is, Which side will you be on? He is not sure of the answer; he fears that the Roman Catholic Church in particular will choose terror, if only terror by means of the papal encyclical, and argues that if that happens, “Christians will live and Christianity will die.”

In Camus’s first two novels, moral questions occupy the foreground, while Christianity occasionally flickers at the margins of the reader’s attention. In The Stranger, Camus’s first and most popular novel, the protagonist, Meursault, seems to be everything an existentialist antihero should be. He is alienated and confused. He commits a murder that appears to illustrate the existentialist theme of the acte gratuit, the gratuitous or utterly unconditioned act that is supposed to indicate the terrible freedom with which we humans are burdened. He is amoral, in the sense of being unable even to understand what others, especially the priest who visits his prison cell, call morality. Camus’s later (“admittedly paradoxical”) comments on Meursault did not help those who would like to know how we should evaluate this young man. What did Camus mean when he said that Meursault was condemned because he would not lie, would not “play the game”? Still more puzzling was his claim that Meursault is “the only Christ we deserve.” And when he suggested that those unfamiliar with the Algerian culture in which the book is set were likely to misunderstand Meursault, he was simply ignored.

Rieux, the protagonist of The Plague, Camus’s allegory of fascism and the resistance to it, is a clearly and profoundly moral man -- perhaps because (not in spite) of his inability to explain and unwillingness even to think about the sources of his morality. Here religious questions are rigorously suppressed by Rieux’s own character, since he is the narrator of the story, though this is not revealed until the end of the book.

The narrator and protagonist of Camus’s last completed novel, The Fall, is almost as enigmatic as Meursault. But far from being amoral or unreflective about morality, the ex-lawyer Jean-Baptiste Clamence tells a story that concerns little other than his forced confrontation with his own moral failings. Camus’s lifelong interest in and reflection upon Christianity seems here on the verge of becoming something more serious: Clamence’s “confession” follows traditionally Christian patterns of penitence. One sees this even in the setting of the book, since Clamence, a man who always loved and craved the heights, has exiled himself to the low-lying city of Amsterdam -- a city whose concentric circles of canals he compares to the circles of Dante’s Hell. Indeed, he describes himself as no longer a legal advocate but a “judge-penitent,” who confesses his sins to those whom he thinks might profit by his tale of woe. (As noted above, Camus used the phrase “judge-penitent” in reference to the critics of The Rebel; but their penitence was on behalf of others rather than themselves).

Christian readers, therefore, might be forgiven for hoping that The First Man would mark yet further development of Camus’s interest in Christianity. But such hopes, it appears, are misplaced. The moral and spiritual introspection, the penitential self-awareness, of Clamence are absent here -- or rather, transposed into the key of filial affection, the relationship between a son and his mother. And it is the juxtaposition of this familial theme with the historical crises of modern Algeria that makes The First Man a distinctive and potentially powerful work.

This is the most historically and culturally rich of all Camus’s books. Unlike his earlier protagonists, Jacques Cormery is fully situated in a social, and more particularly a familial, world. The news of Meursault’s mother’s death comes in the first line of The Stranger; in The Plague, Rieux is separated from his wife by a quarantine, and eventually he hears of her death in a sanatorium; the judge-penitent Clamence never married and lives alone in his exile. In some respects, Cormery is like these men: the ordinary social world seems absurd to him, his friendships are few and awkward, and he constantly seeks a self-understanding that he vaguely feels has been denied him by his father’s death when he was only an infant. But it is quite clear that his story is ultimately one of connectedness, emplacement, rootedness.

In the main text, one sees this in the lush romanticism of Camus’s descriptions of Cormery’s childhood: his play with friends, especially on the football field, his life with his family, his experiences at school where instruction and religion are mixed, and so on. This romantic language, whose long sentences seem to derive from Camus’s late-blooming fascination with Faulkner, contrasts rather dramatically with Camus’s typical narrative austerity. The First Man is so autobiographical that Camus sometimes forgets the fictional names he has assigned the characters and uses the real names of his family members. Moreover, in the notes for uncompleted sections of the book we see emerging with striking clarity a plan to depict not only Cormery’s relationship to his mother but his increasing awareness of the centrality of that relationship in his life and of the dignity and strength of his mother’s existence. One sees this plan with particular force and eloquence in this passage from the notes:

I want to write the story of a pair joined by the same blood and every kind of difference. She is similar to the best the world has, and he quietly abominable. He thrown into all the follies of our time; she passing through the same history as if it were that of any time. She silent most of the time, with only a few words at her disposal to express herself; he constantly talking and unable to find in thousands of words what she could say with a single one of her silences . . . Mother and son.

But the apparently timeless intensity of this bond between mother and son is always placed within the context of Algerian history. It appears that Cormery’s recognition of the depth of his love for his mother was to emerge in large part from her constant endangerment by the bombs of terrorists, whose beliefs and purposes she never understands, occupied as she is by the difficulties of living with scarce resources in a harsh world. And this attempt to live in peace and with dignity in the midst of violence dominates her experience long before the rebellion of the fifties, since it was in the Great War that she had lost her husband: “A chapter on the war of 14. Incubator of our era. As seen by the mother? Who knows neither France, nor Europe, nor the world. Who thinks shells explode of their own volition, etc.”

Thus it seems clear that the lyrical nostalgia of the drafts -- their Edenic character, evident in the book’s title, and so reminiscent of the work of Dylan Thomas -- was to be contextualized, though not, I think, discredited or ironized, by an ever-deeper immersion in the violent world of modern history. Or so Jacques Cormery, with his education and his experience of Europe in the second of its great wars, might characterize the narrative movement. Camus’s greatest narrative challenge, it appears, would have been to allow his mother’s experience its full scope: “Alternate chapters would give the mother’s voice. Commenting on the same events but with her vocabulary of 400 words.” Some people, it seems, are in history, however unwittingly or unwillingly; but only Cormery and Camus and readers like us are, strictly speaking, of it. But how can this be portrayed in art?

The late literary critic Northrop Frye once reflected on the curious fact that the nineteenth century found it obvious that Hamlet was Shakespeare’s greatest play, while the twentieth century has, for the most part, bestowed that honor on King Lear. For our predecessors, the problems of Hamlet, which revolve around the nature and stature of the individual human person, were paramount; in our century, we have come to contemplate Lear’s dilemma, which is to find the line (if it exists) that separates the tragic from the absurd. What, Frye mused, will be the essential Shakespearean play of the next century? His admittedly speculative answer was Antony and Cleopatra, because that play represents a situation that more and more people in our world will face: the confrontation of deeply personal desires with world-historical events, or, in other words, the potentially tragic consequences of the creation of a global village.

To get Frye’s point, we need only recall the now-general agreement, which has arisen among warring parties in this century, to disregard old distinctions between combatants and noncombatants, to eliminate the concept of “civilian.” But these movements are economic as well as military. I think of a Guatemalan farmer whom my wife once met: he could not get his crops to market because, suddenly, he could no longer afford the necessary gasoline, gasoline that had risen in price because of the Gulf War. So a man who had never heard of George Bush or Saddam Hussein was in danger, because of their actions, of losing the ability to feed his family. That people may find themselves implicated against their will in historical events is nothing new; but the reach of historical (political, economic) movements has gotten so long so quickly that the connections have become strange, and hard for most of us to accept.

It is precisely this bizarre juxtaposition of the personal and the historical, or this erasure of the line between the two, that Camus was seeking to elaborate in The First Man. This was to have been his answer to his critics, to those who failed to comprehend, or who found inexcusable, his decision to defend his mother before some abstract notion of justice. In recent years, similar concerns have emerged in the fiction of V. S. Naipaul, especially A Bend in the River, and in a very different way in the poetry of Czeslaw Milosz. But I think Camus was the first to see the full implications of this massive change in the nature of historical experience.

Camus never wrote a great book, though each of the three novels he published in his lifetime is nearly perfect. His plays, stories, and essays reveal a similarly high level of technical accomplishment and thematic depth. But clearly he had not found the subject that would enable him to fulfill his promise and exercise his abilities to the full -- until, perhaps, The First Man. Though it would not have been the novel that Christian readers of The Fall might have wished for, it could well have been Camus’s most impressive work. Having had his (fictional) say about Algeria, having explored and portrayed the cultural complexities that the French intelligentsia refused to acknowledge, having paid a proper tribute to the dignity and value of his mother’s life, would he have returned to the spiritual quest that so dominated The Fall? That, alas, we cannot know. But now, at least, we have stronger testimony to Camus’s moral integrity, if not to a movement toward Christian faith.

Edward Said has called Camus “the archetypal trimmer,” one who altered his opinions to gain the approval of others. If this were true, then no one could ever have trimmed more ineptly, since Camus’s simultaneous insistence upon the validity of Algerian complaints and upon the innocence of his family (and others like them) earned him nothing but contempt from both sides. In fact, Said’s statement is a monstrous calumny. Camus was a sinner, like all of us, and can be faulted for many things. But in two ways he is, I think, an exemplary figure. He had the wisdom to see that political justice is never simple and cannot be reduced to simplistic binary oppositions between the oppressors and the oppressed; and he had the courage, in the most stressful of circumstances and in the face of the bitterest opposition, to repudiate the cheap virtue that such oppositions always represent.

Perhaps this is a naive idealization, but I think that Camus’s face, in those later photographs, reveals something of his character: stubborn, as Sartre said, but upright, and willing to acknowledge just how hard it is to know what Truth or Justice is in any given case. After all, when he died he was very near the age at which, as George Orwell said, every man has the face he deserves.