Text Patterns - by Alan Jacobs

Saturday, July 26, 2014

what we can claim for the liberal arts

Please read this wonderful post by Tim Burke on what liberal-arts education can and can’t do — or rather, what we who love it can plausibly claim on its behalf and what we can’t. Excerpt:


No academic (I hope) would say that education is required to achieve wisdom. In fact, it is sometimes the opposite: knowing more about the world can be, in the short-term, an impediment to understanding it. I think all of us have known people who are terrifically wise, who understand other people or the universe or the social world beautifully without ever having studied anything in a formal setting. Some of the wise get that way through experiencing the world, others through deliberate self-guided inquiry.
What I would be prepared to claim is something close to something Wellmon says, that perhaps college “might alert students to an awareness of what is missing, not only in their own colleges but in themselves and the larger society as well”.
But my “might” is a bit different. My might is literally a question of probabilities. A well-designed liberal arts education doesn’t guarantee wisdom (though I think it can guarantee greater concrete knowledge about subject matter and greater skills for expression and inquiry). But it could perhaps be designed so that it consistently improves the odds of a well-considered and well-lived life. Not in the years that the education is on-going, not in the year after graduation, but over the years that follow. Four years of a liberal arts undergraduate experience could be far more likely to produce not just a better quality of life in the economic sense but a better quality of being alive than four years spent doing anything else.
There are several elements to Tim’s argument, the most important of which are:
(a) It does no good to answer simplistic denunciations of liberal-arts education with simplistic cheerleading. Just as there are no books the reading of which will automatically make you a better person — thus the G. C. Lichtenberg line Auden liked to quote: “A book is like a mirror; if an ass looks in, you can’t expect an apostle to look out” — so too there is no form of education that will automatically create better people. But some forms of education, as Tim says, may “improve the odds.” That’s the point at which we need to engage the argument. 
(b) If we who practice and love the liberal arts want to defend them, we also have to be prepared to improve them, to practice them better — and this may well require of us a rethinking of how the liberal arts tradition relates to disciplinarity. As always, Tim is refusing the easy answers here, which are two: first, that the disciplinary structures created in and for the modern university are adequate to liberal education; and second, that we should ditch the disciplines and be fully interdisciplinary. Both answers are naïve. (The problems with the latter, by the way, were precisely identified by Stanley Fish a long time ago.) The academic disciplines — like all limiting structures, including specific vocabularies, as Kenneth Burke pointed out in his still-incisive essay on “terministic screens” — simultaneously close off some options and enable others. We need more careful scrutiny of how our disciplinary procedures do their work on and in and with students.
I’m mainly channeling Tim here, but I would just add that another major element that we need to be thinking about here is desire: What are students drawn to, what do they love? To what extent can we as teachers shape those desires? My colleague Elizabeth Corey has recently published a lovely essay — alas, paywalled — on education as the awakening of desire; and while I wholeheartedly endorse her essay, I have also argued that there are limits to what we can do in that regard. 
In any event, the role of desire in liberal education is a third vital topic for exploration, in addition to the two major points I have extracted from Tim’s post — which, let me remind you, you should read. 

Friday, July 25, 2014

you must remember this

Please forgive me for ignoring the main thrust of this post by William Deresiewicz. I'm just going to comment on one brief but strange passage:

A friend who teaches at a top university once asked her class to memorize 30 lines of the eighteenth-century poet Alexander Pope. Nearly every single kid got every single line correct. It was a thing of wonder, she said, like watching thoroughbreds circle a track.

A “thing of wonder”? Memorizing a mere thirty lines of poetry?

As I've often noted, in any class in which I assign poetry I ask students to memorize at least 50 lines (sometimes 100) and recite them to me. I've been doing that for more than twenty years now, and all the students get all the lines right. If they don't, they come back until they do. It's not a big deal. Yet to Deresiewicz, who taught for years at Yale, and his friend who teaches at a “top university,” the ability to recite thirty lines of Pope — probably the easiest major English poet to memorize, given his exclusive use of rhyming couplets — seems an astonishing mental feat. What would they think of John Basinger, who knows the whole of Paradise Lost by heart? Or even a three-year-old reciting a Billy Collins poem — which is also every bit of 30 lines?

In my school days I had to memorize only a few things: the preamble to the Constitution, the Gettysburg Address, a Shakespeare passage or two. But for previous generations, memorization and recitation were an essential and extensive part of their education. Perhaps only the classical Christian education movement keeps this old tradition alive. The amazement Deresiewicz and his friend feel at a completely trivial achievement indicates just how completely memorization has been abandoned. In another generation we'll swoon at someone who can recite her own phone number.

 

UPDATE: Via my friend at Princeton University Press Jessica Pellien, a book by Catherine Robson called Heart Beats: Everyday Life and the Memorized Poem. Here’s the Introduction in PDF.

the right tools for the job

This talk by Matthew Kirschenbaum provokes much thought, and I might want to come back to some of its theses about software. But for now I'd just like to call attention to his reflections on George R. R. Martin's choice of writing software:

On May 13, in conversation with Conan O’Brien, George R. R. Martin, author of course of the Game of Thrones novels, revealed that he did all of his writing on a DOS-based machine disconnected from the Internet and lovingly maintained solely to run … WordStar. Martin dubbed this his “secret weapon” and suggested the lack of distraction (and isolation from the threat of computer viruses, which he apparently regards as more rapacious than any dragon’s fire) accounts for his long-running productivity.

And thus, as they say, “It is known.” The Conan O’Brien clip went viral, on Gawker, Boing Boing, Twitter, and Facebook. Many commenters immediately if indulgently branded him a “Luddite,” while others opined it was no wonder it was taking him so long to finish the whole Song of Ice and Fire saga (or less charitably, no wonder that it all seemed so interminable). But WordStar is no toy or half-baked bit of code: on the contrary, it was a triumph of both software engineering and what we would nowadays call user-centered design…. WordStar’s real virtues, though, are not captured by its feature list alone. As Ralph Ellison scholar Adam Bradley observes in his work on Ellison’s use of the program, “WordStar’s interface is modelled on the longhand method of composition rather than on the typewriter.” A power user like Ellison or George R. R. Martin who has internalized the keyboard commands would navigate and edit a document as seamlessly as picking up a pencil to mark any part of the page.

There was a time when I wouldn't have understood how Martin could possibly have preferred some ugly old thing like WordStar. I can remember when my thinking about these matters started to change. It happened fifteen years ago, when I read this paragraph by Neal Stephenson:

In the GNU/Linux world there are two major text editing programs: the minimalist vi (known in some implementations as elvis) and the maximalist emacs. I use emacs, which might be thought of as a thermonuclear word processor. It was created by Richard Stallman; enough said. It is written in Lisp, which is the only computer language that is beautiful. It is colossal, and yet it only edits straight ASCII text files, which is to say, no fonts, no boldface, no underlining. In other words, the engineer-hours that, in the case of Microsoft Word, were devoted to features like mail merge, and the ability to embed feature-length motion pictures in corporate memoranda, were, in the case of emacs, focused with maniacal intensity on the deceptively simple-seeming problem of editing text. If you are a professional writer–i.e., if someone else is getting paid to worry about how your words are formatted and printed–emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish. For page layout and printing you can use TeX: a vast corpus of typesetting lore written in C and also available on the Net for free.

The key phrase here, for me, was “the deceptively simple-seeming problem of editing text.” When I read those words I realized that editing text was much of what I needed to do, and that Microsoft Word wasn't very good at that. Stephenson's essay (still a delight to read, by the way, though quite outdated now) set me off on a long quest for the best writing environment that has ended up not with emacs or vi but rather with a three-component system. I have written about these matters before, but people ask me about them all the time, so I thought I would provide a brief summary of my system.

The first component is my preferred text editor, BBEdit, which seems to me to strike a perfect balance between the familiar conventions of Macintosh software and the power typically found only in command-line text editors.

The second component is the set of scripts John Gruber (with help from Aaron Swartz) wrote to create Markdown, a simple and easy-to-use but powerful syntax for indicating structure in plain-text documents.

The third component is John MacFarlane's astonishing pandoc, which allows me to take my Markdown-formatted plain text and turn it into … well, almost anything this side of an ice-cream sundae. If my publisher wants an MS Word document, pandoc will turn my Markdown text into that. If I want to create an e-book, pandoc can transform that same text into EPUB. When I need to make carefully formatted printed documents, for instance a course syllabus, pandoc will make a LaTeX file. I just can't get over how powerful this tool is. Now I almost never have to write in anything except BBEdit and my favorite text editor for the iPad, Editorial.
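For anyone curious what that workflow looks like in practice, here is a minimal sketch. It is not my actual setup, just an illustration that assumes pandoc is installed and on your PATH; the filenames and contents are hypothetical.

```python
import subprocess

# Write a tiny Markdown source file to convert (purely illustrative).
source = """# Course Syllabus

Readings for *week one*:

- C. P. Snow, "The Two Cultures"
- F. R. Leavis's reply
"""
with open("notes.md", "w", encoding="utf-8") as f:
    f.write(source)

# pandoc infers each output format from the extension passed to -o;
# -s asks for a standalone document rather than a fragment.
for target in ["notes.docx", "notes.epub", "notes.tex"]:
    subprocess.run(["pandoc", "notes.md", "-s", "-o", target], check=True)
```

One plain-text source, three outputs: a Word file for the publisher, an EPUB for e-readers, a LaTeX file for print. That is the whole appeal.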

That's it. With a good text editor and some scripts for formatting, a writer can focus all of his or her attention on the deceptively simple-seeming problem of editing text. That makes writing less frustrating and more fun. This is what George R. R. Martin has achieved with WordStar, and he's right to stick with it rather than turn to tools that do the essential job far less well.

Wednesday, July 23, 2014

breaking the spell


I just got back from a brief vacation at Big Bend National Park, and when I was packing I made sure to stick a novel in my backpack. I’m not going to name it, but it is a very recent novel, by a first-time novelist, that has received a great deal of praise. Before my departure I had already read the first chapter and found it quite promising. I was excited.

The next few chapters, I discovered while on my trip, were equally compelling; they carried me some fifty pages into the book. But in the next fifty pages the narrative energy seemed to flag. The act of reading started to feel effortful. And then, about 130 pages in (about halfway through the book), I had a sudden thought: This is just someone making up a story.

And that was it; the spell was broken, my investment in the novel over and done with. I couldn’t read another paragraph. Which is an odd thing, because of course it was just someone making up a story — that’s what novels are, and I knew when I bought the book what it was. But nothing can be more deadly to the experience of reading fiction than the thought that came (quite unbidden) to my mind.

Coleridge famously wrote of literature’s power “to transfer from our inward nature a human interest and a semblance of truth sufficient to procure for these shadows of imagination that willing suspension of disbelief for the moment, which constitutes poetic faith.” (Like most writers before the twentieth century, Coleridge used “poetic” to mean what we now call “literary.”) But really, the requisite suspension of disbelief is willing only in a peculiar anticipatory sense: it has to become unwilling, involuntary, in the actual act of reading, or else all the magic of storytelling is lost.

I have found in the past few years that this has happened to me more and more often as I read fiction, especially recent fiction. There are many possible reasons for this, including an increasing vulnerability to distraction and the return to the reading habits of my youth that I describe in this essay. But I’m inclined to think that neither of those is the problem. Rather, I think that for the last fifty years or more “literary” fiction, and a good deal of “genre” fiction as well, has recycled far too many of the same themes and tropes. Like a number of other readers, I’m inclined to assign much of the blame for this to the capture of so much English-language fiction by university-based creative writing programs, which suffer from the same pressures of conformity that all academic work suffers from. (And yes, the author of the novel I abandoned is a creative-writing-program graduate, though I just now looked that up.)

In other words, I have just been around the same few fictional blocks far too many times. I’m tired of them all, and am only satisfied when I’m surprised.

Maybe that’s not the problem. But I sure feel that it is.

 

P.S. Something that just occurred to me: A long time ago Northrop Frye noted — I can’t at the moment recall where — Ben Jonson's frustration that Shakespeare’s plays were far more inconsistently and incoherently put together than his own but were nevertheless, somehow, more popular, and commented that this was just it: Jonson’s plays were put together, more like “mechanical models of plays” than the real thing, whereas Shakespeare’s plays had all the odd growths and irregular edges of organic life. This is my chief complaint with much fiction of the past fifty years, including much very highly regarded fiction, like that of John Updike: these aren’t novels, they are mechanical models of novels. Precision-engineered down to the last hidden screw, but altogether without the spark of life.

Thursday, July 17, 2014

my course on the "two cultures"

FOTB (Friends Of This Blog), I have a request for you. This fall I’m teaching a first-year seminar for incoming Honors College students, and our topic is the Two Cultures of the sciences and the humanities. We’ll begin by exploring the lecture by C. P. Snow that kicked off the whole debate — or rather, highlighted and intensified a debate that had already been going on for some time — and the key responses Snow generated (F. R. Leavis, Lionel Trilling, Loren Eiseley). We’ll also read the too-neglected book that raised many of the same issues in more forceful ways a few years before Snow: Jacob Bronowski’s Science and Human Values.

Then we’ll go back to try to understand the history of the controversy before moving forward to consider the forms it is taking today. Most of the essays I’ll assign may be found by checking out the “twocultures” tag of my Pinboard bookmarks, but we’ll also be taking a detour into science/religion issues by considering Stephen Jay Gould’s idea of non-overlapping magisteria and some of the responses to it.

What other readings should I consider? I am a bit concerned that I am presenting this whole debate as one conducted entirely by white Western men — are there approaches to these questions by women, or by people from other parts of the world, that might put the issues in a different light? Please make your recommendations in the comments below or on Twitter.

Thanks!

the problems of e-reading, revisited

In light of the conversation we were having the other day, here is some new information:

The shift from print to digital reading may lead to more than changes in speed and physical processing. It may come at a cost to understanding, analyzing, and evaluating a text. Much of Mangen’s research focusses on how the format of reading material may affect not just eye movement or reading strategy but broader processing abilities. One of her main hypotheses is that the physical presence of a book—its heft, its feel, the weight and order of its pages—may have more than a purely emotional or nostalgic significance. People prefer physical books, not out of old-fashioned attachment but because the nature of the object itself has deeper repercussions for reading and comprehension. “Anecdotally, I’ve heard some say it’s like they haven’t read anything properly if they’ve read it on a Kindle. The reading has left more of an ephemeral experience,” she told me. Her hunch is that the physicality of a printed page may matter for those reading experiences when you need a firmer grounding in the material. The text you read on a Kindle or computer simply doesn’t have the same tangibility.

In new research that she and her colleagues will present for the first time at the upcoming conference of the International Society for the Empirical Study of Literature and Media, in Torino, Italy, Mangen is finding that that may indeed be the case. She, along with her frequent collaborator Jean-Luc Velay, Pascal Robinet, and Gerard Olivier, had students read a short story—Elizabeth George’s “Lusting for Jenny, Inverted” (their version, a French translation, was called “Jenny, Mon Amour”)—in one of two formats: a pocket paperback or a Kindle e-book. When Mangen tested the readers’ comprehension, she found that the medium mattered a lot. When readers were asked to place a series of events from the story in chronological order—a simple plot-reconstruction task, not requiring any deep analysis or critical thinking—those who had read the story in print fared significantly better, making fewer mistakes and recreating an over-all more accurate version of the story. The words looked identical—Kindle e-ink is designed to mimic the printed page—but their physical materiality mattered for basic comprehension.

Note that the printed book is being compared here to the Kindle, which means that the distractions of connectivity I talked about in the previous post aren’t relevant here. (I’m assuming that they mean an e-ink Kindle rather than a Kindle Fire, though it would be important to know that for sure.) 

My hunch, for what it’s worth, is that it is indeed “the physicality of the printed page” that makes a significant difference — in a couple of specific senses.

First of all, the stability of the text on a printed page allows us (as most readers know) to have visual memories of where passages are located: we see the page divided, as it were, into quadrants (upper left, lower left, upper right, and lower right). This has mnemonic value.

Second, the three-dimensionality of a book allows us to connect certain passages with places in the book: when we’re near the beginning of a book, we’re getting haptic confirmation of that through the thinness on one side and thickness on the other, and as we progress in our reading the object in our hands is continually providing us with information that supplements what’s happening on the page. 

A codex is then an informationally richer environment than an e-reader. 

There are, I suspect, ways that software design can compensate for some of this informational deficit, though I don’t know how much. It’s going to be interesting to see whether any software engineers interest themselves in this problem. 

As for me, I suspect I’ll continue to do a lot of reading electronically, largely because, as I’ve mentioned before, I’m finding it harder to get eyewear prescriptions that suit my readerly needs. E-readers provide their own lighting and allow me to change the size of the type — those are enormous advantages at this stage of my life. I would love to see the codex flourish, but I don’t know whether it will flourish for me, and I am going to have some really difficult decisions to make as a teacher. Can I strongly insist that my students use codexes while using electronic texts myself? 

Wednesday, July 16, 2014

DH in the Anthropocene

This talk by Bethany Nowviskie is extraordinary. If you have any interest in where the digital humanities — or the humanities more generally — might be headed, I encourage you to read it. 

It’s a very wide-ranging talk that doesn’t articulate a straightforward argument, but that’s intentional, I believe. It’s meant to provoke thought, and does. Nowviskie’s talk originates, it seems to me, in the fact that so much work in the digital humanities revolves around problems of preservation. Can delicate objects in our analog world be properly digitized so as to be protected, at least in some senses, from further deterioration? Can born-digital texts and images and videos be transferred to other formats before we lose the ability to read and view them? So much DH language, therefore, necessarily concerns itself with concepts connecting to and deriving from the master-concept of time: preservation, deterioration, permanence, impermanence, evanescence. 

For Nowviskie, these practical considerations lead to more expansive reflections on how we — not just “we digital humanists” but “we human beings” — understand ourselves to be situated in time. And for her, here, time means geological time, universe-scale time. 

Now, I’m not sure how helpful it is to try to think at that scale. Maybe the Long Now isn’t really “now” at all for us, formed as we are to deal with shorter frames of experience. I think of Richard Wilbur’s great poem “Advice to a Prophet”:

Spare us all word of the weapons, their force and range,
The long numbers that rocket the mind;
Our slow, unreckoning hearts will be left behind,
Unable to fear what is too strange.

Nor shall you scare us with talk of the death of the race.
How should we dream of this place without us? —
The sun mere fire, the leaves untroubled about us,
A stone look on the stone’s face?

Maybe thinking in terms too vast means, for our limited minds, not thinking at all. 

But even as I respond in this somewhat skeptical way to Nowviskie’s framing of the situation, I do so with gratitude, since she has pressed this kind of serious reflection about the biggest questions upon her readers. It’s the kind of thing that the humanities at their best always have done. 

So: more, I hope, at another time on these themes. 

how problematic is e-reading?

Naomi Baron thinks it’s really problematic in academic contexts: 

What’s the problem? Not all reading works well on digital screens.

For the past five years, I’ve been examining the pros and cons of reading on-screen versus in print. The bottom line is that while digital devices may be fine for reading that we don’t intend to muse over or reread, text that requires what’s been called "deep reading" is nearly always better done in print.

Readers themselves have a keen sense of what kind of reading is best suited for which medium. My survey research with university students in the United States, Germany, and Japan reveals that if cost were the same, about 90 percent (at least in my sample) prefer hard copy for schoolwork. If a text is long, 92 percent would choose hard copy. For shorter texts, it’s a toss-up.

Digital reading also encourages distraction and invites multitasking. Among American and Japanese subjects, 92 percent reported it was easiest to concentrate when reading in hard copy. (The figure for Germany was 98 percent.) In this country, 26 percent indicated they were likely to multitask while reading in print, compared with 85 percent when reading on-screen. Imagine wrestling with Finnegans Wake while simultaneously juggling Facebook and booking a vacation flight. You get the point.

And maybe she’s right, but she also seems to be eliding some important distinctions. For instance, when she says that “digital reading ... encourages distraction and invites multitasking,” what she’s really referring to is “reading on a capable internet-connected device” — probably an iPad. A Kindle or Nook or Kobo, with either very limited internet access or none at all, wouldn’t provide such distractions. 

To be sure, digital reading is increasingly dominated by tablets, as their share of the market grows and that of the dedicated e-readers shrinks, but it’s still wrong to blame “digital reading” for a problem that’s all about internet connectivity. 

Also: Baron’s research is with university students, which is to say, people who learned to read on paper and did all their serious reading on paper until quite recently. What we don’t know is how kids who learn to read on digital devices — a still-small category — will feel about these matters by the time they get to university. That is, what Baron is attributing to some intrinsic difference between digital reading and reading on paper might well be a matter of simple familiarity. I don’t think we’ve yet reached the point where we can make that decision. 

I say all this as a lover of books and a believer that reading on paper has many great advantages that our digital devices have yet to replicate, much less to exceed. But, to judge only from this excerpt of a larger project, I doubt that Baron has an adequate experimental design. 

Friday, July 11, 2014

different strokes

Here’s a typically smart and provocative reflection by Andrew Piper. But I also have a question about it. Consider this passage: 

Wieseltier’s campaign is just the more robust clarion call of subtler and ongoing assumptions one comes across all the time, whether in the op-eds of major newspapers, blogs of cultural reviews, or the halls of academe. Nicholas Kristof’s charge that academic writing is irrelevant because it relies on quantification is one of the more high-profile cases. The recent reception of Franco Moretti’s National Book Critics Award for Distant Reading is another good case in point. What’s so valuable about Moretti’s work on quantifying literary history, according to the New Yorker’s books blog, is that we can ignore it. “I feel grateful for Moretti,” writes Joshua Rothman. “As readers, we now find ourselves benefitting from a division of critical labor. We can continue to read the old-fashioned way. Moretti, from afar, will tell us what he learns.”

We can continue doing things the way we’ve always done them. We don’t have to change. The saddest part about this line of thought is this is not just the voice of journalism. You hear this thing inside academia all the time. It (meaning the computer or sometimes just numbers) can’t tell you what I already know. Indeed, the “we already knew that” meme is one of the most powerful ways of dismissing any attempt at trying to bring together quantitative and qualitative approaches to thinking about the history of ideas.

As an inevitable backlash to its seeming ubiquity in everyday life, quantification today is tarnished with a host of evils. It is seen as a source of intellectual isolation (when academics use numbers they are alienating themselves from the public); a moral danger (when academics use numbers to understand things that shouldn’t be quantified they threaten to undo what matters most); and finally, quantification is just irrelevant. We already know all there is to know about culture, so don’t even bother.

Regarding that last sentence: the idea that “we already know all there is to know about culture, so don’t even bother” is a pathetic one — but that’s not what Rothman says. Rather, he writes of a “division of labor,” in which it’s perfectly fine for Moretti to do what he does, but it’s also perfectly fine for Rothman to do what he does. What I hear Rothman saying is not “we know all there is to know” but rather something like “I prefer to keep reading in more traditional and familiar ways and I hope the current excitement over people like Moretti won’t prevent me from doing that.” 

In fact, Rothman, as opposed to the thoroughly contemptuous Wieseltier, has many words of commendation for Moretti. For instance: 

The grandeur of this expanded scale gives Moretti’s work aesthetic power. (It plays a larger role in his appeal, I suspect, than most Morettians would like to admit.) And Moretti’s approach has a certain moral force, too. One of the pleasures of “Distant Reading” is that it assembles many essays, published over a long period of time, into a kind of intellectual biography; this has the effect of emphasizing Moretti’s Marxist roots. Moretti’s impulses are inclusive and utopian. He wants critics to acknowledge all the books that they don’t study; he admires the collaborative practicality of scientific work. Viewed from Moretti’s statistical mountaintop, traditional literary criticism, with its idiosyncratic, personal focus on individual works, can seem self-indulgent, even frivolous. What’s the point, his graphs seem to ask, of continuing to interpret individual books—especially books that have already been interpreted over and over? Interpreters, Moretti writes, “have already said what they had to.” Better to focus on “the laws of literary history”—on explanation, rather than interpretation.

All this sounds austere and self-serious. It isn’t. “Distant Reading” is a pleasure to read. Moretti is a witty and welcoming writer, and, if his ideas sometimes feel rough, they’re rarely smooth from overuse. I have my objections, of course. I’m skeptical, for example, about the idea that there are “laws of literary history”; for all his techno-futurism, Moretti can seem old-fashioned in his eagerness to uncover hidden patterns and structures within culture. But Moretti is no upstart. He is patient, experienced, and open-minded. It’s obvious that he intends to keep gathering data, and, where it’s possible, to replace his speculations with answers. In some ways, the book’s receiving an award reflects the role that Moretti has played in securing a permanent seat at the table for a new critical paradigm—something that happens only rarely.

This all seems eminently fair-minded to me, even generous. But what Moretti does is not Rothman’s thing. And isn’t that okay? Indeed, hasn’t that been the case for a long time in literary study: that we acknowledge the value in what other scholars with different theoretical orientations do, without choosing to imitate them ourselves? It mystifies me that Piper sees this as a Wieseltier-level dismissal. 

Monday, July 7, 2014

worse and worse

Another candidate for Worst Defense of Facebook, this one from Duncan Watts of Microsoft Research:

Yes, the arrival of new ways to understand the world can be unsettling. But as social science starts going through the kind of revolution that astronomy and chemistry went through 200 years ago, we should resist the urge to attack the pursuit of knowledge for knowledge's sake.

Just as in the Romantic era, advances in technology are now allowing us to measure the previously unmeasurable – then distant galaxies, now networks of millions of people. Just as then, the scientific method is being promoted as an improvement over traditional practices based on intuition and personal experience. And just as then, defenders of the status quo object that data and experiments are inherently untrustworthy, or are simply incapable of capturing what really matters.

We need to have these debates, and let reasonable people disagree. But it's unreasonable to insist that the behavior of humans and societies is somehow an illegitimate subject for the scientific method. Now that the choice between ignorance and understanding is within our power to make, we should follow the lead of the Romantics and choose understanding.

Get that? If you are opposed to the Facebook experiment, you are “attack[ing] the pursuit of knowledge for knowledge’s sake” — because, as we know, the people who work at Facebook care nothing for filthy lucre: they are perfectly disinterested apostles of Knowledge! So why do you hate knowledge?

Moreover, why do you think “data and experiments are inherently untrustworthy”? — yes, all data, all experiments, because clearly it is impossible to criticize Facebook without criticizing “data and experiments” tout court. If you criticize the Facebook experiment, you thereby “insist that the behavior of humans and societies is somehow an illegitimate subject for the scientific method.”

There’s more of this garbage — far more:

Remember: the initial trigger for the outrage over the Facebook study was that it manipulated the emotions of users. But we are being manipulated without our knowledge or consent all the time – by advertisers, marketers, politicians – and we all just accept that as a part of life. The only difference between the Facebook study and everyday life is that the researchers were trying to understand the effect of that manipulation.

Of course. No one has ever complained about being manipulated or lied to by politicians or marketers. And note once more the purity of Facebook’s motives: they’re just trying to “understand,” that’s all. Why do you hate understanding? (Later on Watts talks about “the decisions they're already making on our behalf”: on our behalf. Facebook may be a publicly-traded, for-profit corporation, but all they really care about is helping their users. Why are you so ungrateful?)

If that still sounds creepy, ask yourself this: Would you prefer a world in which we are having our emotions manipulated, but where the manipulators ignore the consequences of their own actions? What about if the manipulators know exactly what they're doing ... but don't tell anyone about it? Is that really a world you want to live in?

As I suggested in a comment on an earlier post, if you live by A/B thinking, you end up dying (intellectually) by A/B thinking. Watts is trying pretty desperately here to tell us that we can choose only between a world in which we’re manipulated without knowing it and one in which we are knowingly manipulated. The one thing he doesn't want any of his readers to think is that it’s possible to try to reduce the manipulation.

At the end of this absurd screed Watts writes,

Yes, all research needs to be conducted ethically, and social scientists have an obligation to earn and keep the public trust. But unless the public truly prefers a world in which nobody knows anything, more and better science is the best answer we have.

Why do you prefer a world in which nobody knows anything? But wait — there’s a little glimmer of light here ... hard to see, but ... here it is: “social scientists have an obligation to earn and keep the public trust.” Right. And the ones from Facebook haven’t. And they’re not going to get it back by accusing everyone who’s unhappy with them of seeking darkness and ignorance.

Saturday, July 5, 2014

designing the Word


Bibliotheca is a remarkably successful new Kickstarter project for designing and printing a Bible made to be read, in multiple volumes and with bespoke type design. Here is the Kickstarter page; here is part one of an interview with Adam Lewis Greene, the designer; and here is the second part of that interview.

Lots and lots of things to interest me here. At the moment I’m just going to mention one, an exchange from the second interview: 

J. MARK BERTRAND: Your decision not to justify the text column threw me at first. Now I think I understand, but since I’m a stickler for Bibles looking like books meant to be read, and novels are universally justified, could you explain what’s at stake in the choice to leave the right margin ragged?

ADAM LEWIS GREENE: This goes back, again, to the idea of hand-writing as the basis for legible text. When we write, we don’t measure each word and then expand or contract the space between those words so each line is the same length. When we run out of room, we simply start a new line, and though we have a ragged right edge, we have consistent spacing. The same is true of the earliest manuscripts of biblical literature, which were truly formatted to be read. I’m thinking of the Isaiah scroll, which I was privileged to see in Israel last year and is the primary model for my typesetting….

Unjustified text was revived by the likes of Gill and Tschichold early in the last century, and it continues to gain steam, especially in Europe. We are starting to see unjustified text much more frequently in every-day life, especially in digital form, and I would argue we are slowly becoming more accustomed to evenly spaced words than to uniform line-length. To me, justified type is really a Procrustean Bed. Too many times while reading have I leapt a great distance from one word to the next, only to be stunted by the lack of space between words on the very next line. I admit, I think justified text looks clean and orderly when done well, but it doesn’t do a single thing in the way of legibility. It is simply what we have been accustomed to for a long time, and since this project is partially about breaking down notions of how things “ought to be,” I ultimately decided to go with what I believe is the most legible approach; not to mention its suitability for ancient hand-written literature.

I couldn’t agree more with Greene’s decision here. I have long believed that we pay too high a price in legibility to get the perfect rectangles of fully justified text. In my experience, the single greatest source of distraction coming from a text (as opposed to the distractions that arrive from the outside) is variable spacing imposed by the demands of justification. 

When my book The Pleasures of Reading in an Age of Distraction was being typeset for publication, I made two requests of the designer. First, I wanted it set in Eric Gill’s Perpetua; and second, I wanted the text set ragged right rather than justified. To my astonishment, both of my requests were granted.

Friday, July 4, 2014

The Righteous Mind and the Inner Ring

In his recent and absolutely essential book The Righteous Mind, Jonathan Haidt tries to understand why we disagree with one another — especially, but not only, about politics and religion — and, more important, why it is so hard for people to see those who disagree with them as equally intelligent, equally decent human beings. (See an excerpt from the book here.)

Central to his argument is this point: “Intuitions come first, strategic reasoning second. Moral intuitions arise automatically and almost instantaneously, long before moral reasoning has a chance to get started, and those first intuitions tend to drive our later reasoning.” Our “moral arguments” are therefore “mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives.”

Haidt talks a lot about how our moral intuitions accomplish two things: they bind and they blind. “People bind themselves into political teams that share moral narratives. Once they accept a particular narrative, they become blind to alternative moral worlds.” “Moral matrices bind people together and blind them to the coherence, or even existence, of other matrices.” The incoherent anti-religious rant by Peter Conn that I critiqued yesterday is a great example of how the “righteous mind” works — as are conservative denunciations of universities filled with malicious tenured radicals.

So far so vital. I can't imagine anyone who couldn’t profit from reading Haidt’s book, though it’s a challenge — as Haidt predicts — for any of us to understand our own thinking in these terms. Certainly it’s hard for me, though I’m trying. But there’s a question that Haidt doesn’t directly answer: How do we acquire these initial moral intuitions? — Or maybe not the initial ones, but the ones that prove decisive for our moral lives? I make that distinction because, as we all know, people often end up dissenting, sometimes in the strongest possible terms, from the moral frameworks within which they were raised.

So the question is: What triggers the formation of a “moral matrix” that becomes for a given person the narrative according to which everything and everyone else is judged?

I think that C. S. Lewis answered that question a long time ago. (Some of what follows is adapted from my book The Narnian: The Life and Imagination of C. S. Lewis.) In December of 1944, he gave the Commemoration Oration at King’s College in London, a public lecture largely attended by students, and Lewis took the opportunity of this “Oration” to produce something like a commencement address. He called his audience’s attention to the presence, in schools and businesses and governments and armies and indeed in every other human institution, of a “second or unwritten system” that stands next to the formal organization.

You discover gradually, in almost indefinable ways, that it exists and that you are outside it, and then later, perhaps, that you are inside it.... It is not easy, even at a given moment, to say who is inside and who is outside.... People think they are in it after they have in fact been pushed out of it, or before they have been allowed in; this provides great amusement for those who are really inside.

Lewis does not think that any of his audience will be surprised to hear of this phenomenon of the Inner Ring; but he thinks that some may be surprised when he goes on to argue, in a point so important that I’m going to put it in bold type, “I believe that in all men’s lives at certain periods, and in many men’s lives at all periods between infancy and extreme old age, one of the most dominant elements is the desire to be inside the local Ring and the terror of being left outside.” And it is important for young people to know of the force of this desire because “of all passions the passion for the Inner Ring is most skillful in making a man who is not yet a very bad man do very bad things.”

The draw of the Inner Ring has such profound corrupting power because it never announces itself as evil — indeed, it never announces itself at all. On these grounds Lewis makes a “prophecy” to his audience at King’s College: “To nine out of ten of you the choice which could lead to scoundrelism will come, when it does come, in no very dramatic colours.... Over a drink or a cup of coffee, disguised as a triviality and sandwiched between two jokes ... the hint will come.” And when it does come, “you will be drawn in, if you are drawn in, not by desire for gain or ease, but simply because at that moment, when the cup was so near your lips, you cannot bear to be thrust back again into the cold outer world.”

It is by these subtle means that people who are “not yet very bad” can be drawn to “do very bad things” – by which actions they become, in the end, very bad. That “hint” over drinks or coffee points to such a small thing, such an insignificant alteration in our principles, or what we thought were our principles: but “next week it will be something a little further from the rules, and next year something further still, but all in the jolliest, friendliest spirit. It may end in a crash, a scandal, and penal servitude; it may end in millions, a peerage, and giving the prizes at your old school. But you will be a scoundrel.”

This, I think, is how our “moral matrices,” as Haidt calls them, are formed: we respond to the irresistible draw of belonging to a group of people whom we happen to encounter and happen to find immensely attractive. The element of sheer contingency here is, or ought to be, terrifying: had we encountered a group of equally attractive and interesting people who held very different views, then we too would hold very different views.

And, once we’re part of the Inner Ring, we maintain our status in part by coming up with those post hoc rationalizations that confirm our group identity and, equally important, confirm the nastiness of those who are Outside, who are Not Us. And it’s worth noting, as Avery Pennarun has recently observed, that one of the things that makes smart people smart is their skill at such rationalization: “Smart people have a problem, especially (although not only) when you put them in large groups. That problem is an ability to convincingly rationalize nearly anything.”

In “The Inner Ring” Lewis portrays this group affiliation in the darkest of terms. That’s because he’s warning people about its dangers, which is important. But of course it is by a similar logic that people can be drawn into good communities, genuine fellowship — that they can become “members of a Body,” as he puts it in the great companion piece to “The Inner Ring,” a talk called “Membership.” (Both are included in his collection The Weight of Glory.) This distinction is what his novel That Hideous Strength is primarily about: we see the consequences for Mark Studdock as he is drawn deeper and deeper into an Inner Ring, and the consequences for Mark’s wife Jane as she is drawn deeper and deeper into a genuine community. I can’t think of a better guide to distinguishing between the false and true forms of membership than that novel.

And that novel offers something else: hope. Hope that we need not be bound forever by an inclination we followed years or even decades ago. Hope that we can, with great discipline and committed energy, transcend the group affiliations that lead us to celebrate members of our own group (even when they don't deserve celebration) and demonize or mock those Outside. We need not be bound by the simplistic and uncharitable binaries of the Righteous Mind. Unless, of course, we want to be.

Thursday, July 3, 2014

an academic farce

Peter Conn is right about one thing: college accreditation is a mess. But his comments about religious colleges are thoughtless, uninformed, and bigoted.

Conn is appalled — appalled — that religious colleges can receive accreditation. Why does this appall him? Well, because they have communal statements of faith, and this proves that in them “the primacy of reason has been abandoned.” The idea that religious faith and reason are incompatible can only be put forth by someone utterly ignorant of the centuries of philosophical debate on this subject, which continues to this day; and if it’s the primacy of reason that Conn is particularly concerned with, perhaps he might take a look at the recent (and not-so-recent) history of his own discipline, which is also mine. Could anyone affirm with a straight face that English studies in America has for the past quarter-century or more been governed by “the primacy of reason”? I seriously doubt that Conn even knows what he means by “reason.” Any stick to beat a dog.

Conn is, if possible, even farther off-base when he writes of “the manifest disconnect between the bedrock principle of academic freedom and the governing regulations that corrupt academic freedom at Wheaton.” I taught at Wheaton for twenty-nine years, and when people asked me why I stayed there for so long my answer was always the same: I was there for the academic freedom. My interests were in the intersection of theology, religious practice, and literature — a very rich field, but one that in most secular universities I would have been strongly discouraged from pursuing except in a corrosively skeptical way. Certainly in such an environment I would never have dared to write a book on the theology of reading — and yet what I learned in writing that book has been foundational for the rest of my career.

Conn — in keeping with the simplistic dichotomies that he evidently prefers — is perhaps incapable of understanding that academic freedom is a concept relative to the beliefs of the academics involved. I have a sneaking suspicion that he is even naïve enough to believe that the University of Pennsylvania, where he teaches, is, unlike Wheaton, a value-neutral institution. But as Stanley Fish pointed out years ago, “What, after all, is the difference between a sectarian school which disallows challenges to the divinity of Christ and a so-called nonideological school which disallows discussion of the same question? In both contexts something goes without saying and something else cannot be said (Christ is not God or he is). There is of course a difference, not however between a closed environment and an open one but between environments that are differently closed.” Wheaton is differently closed than Penn; and for the people who teach there and study there, that difference is typically liberating rather than confining. It certainly was for me.

It would take me another ten thousand words to exhaustively detail Conn’s errors of commission and omission — I could have fun with his apparent belief that Christian colleges generally support “creation science” — but in conclusion let me just zero in on this: “Providing accreditation to colleges like Wheaton makes a mockery of whatever academic and intellectual standards the process of accreditation is supposed to uphold.”

How do accreditation agencies “uphold” “academic and intellectual standards”? They look at such factors as class size, test scores of incoming students, percentage of faculty with terminal degrees, and the like. When they look really closely they might note the quality of the institutions from which the faculty received their terminal degrees, and the percentage of graduates who go on for further education.

These are the measures that, when the accreditation agencies come calling, schools like Wheaton are judged by — that is, the same measures that all other colleges and universities in America are judged by. Wheaton faculty in the humanities — I’ll confine my comments to that field — have recently published books with the university presses of Cambridge, Harvard, Oxford, and Princeton, among others. Former students of mine — to speak even more narrowly — have gone on to get degrees from the finest institutions in the world, and are now professors (some of them tenured) at first-rate universities here and abroad. The factual record speaks for itself, for those who, unlike Conn, are willing to look into it. And I am not even mentioning non-academic achievements.

Some of Wheaton’s most famous alumni have strayed pretty far from its theological commitments, though I think Wes Craven has done a pretty good job of illustrating the consequences of original sin. But even those who have turned aside from evangelicalism, or Christianity altogether, often pay tribute to Wheaton for providing them the intellectual tools they have used to forge their own path — see, for instance, Bart Ehrman in the early pages of Misquoting Jesus. The likelihood of producing such graduates is a chance Wheaton is willing to take. Why? Because it believes in liberal education, as opposed to indoctrination.

In this respect, the institutional attitude of Wheaton College differs considerably from the personal attitude of Peter Conn, who, it appears, cannot bear the thought that the academic world should make room for people whose beliefs he despises — even if they meet the same academic standards as other colleges and universities. What Conn wants is a purge of religion from academic life. He ought to own that desire, and stop trying to camouflage it with the verbal fig-leaves of “intellectual standards” and “academic freedom” — concepts he neither understands nor values.

the worst defense of Facebook you're likely to read

Well, I’ve seen some inept commentary on the recent Facebook fiasco, but this definitely takes the cake — and it’s from one Cesar Hidalgo, a prof at MIT, no less.

Talk about an inauspicious beginning:

First, Facebook is a “micro-broadcasting” platform, meaning that it is not a private diary or a messaging service. This is not an official definition, but one that emerges from Facebook’s design: everything you post on Facebook has the potential to go viral.

Well, first of all, no. Facebook has settings that allow you to determine how private or public you want a given post to be: see? So some of what you post on Facebook cannot go viral, unless the software malfunctions, or Facebook makes yet another unannounced change in its policies. And second: the point is completely irrelevant. Though Facebook has often been in trouble for betraying its users’ expectations of privacy — by making public what they didn't want made public — that isn’t what this is about. The complaint is that Facebook experimented on its users without seeking their consent.

Second, the idea that the experiment violated privacy is also at odds with the experimental design. After all, the experiment was based on what is known technically as a sorting operation. Yet, a sorting operation cannot violate privacy.

That’s manifestly untrue, but it doesn't matter: the point is irrelevant. Though Facebook has often been in trouble for betraying its users’ expectations of privacy, that isn’t what this is about. The complaint is that Facebook experimented on its users without seeking their consent.

Finally, it is important to remember that Facebook did not generate the content that affected the mood of users. You and I generated this content. So if we are willing to point the gun at Facebook for sorting the content created by us, we should also point the gun at ourselves, for creating that content.

Sometimes a statement gets so Orwellian that there’s nothing to be said in response. Onward:

Is using sentiment analysis as a feature unethical? Probably not. Most of us filter the content we present to others based on emotional considerations. In fact, we do not just filter content. We often modify it based on emotional reasons. For instance, is it unethical to soften an unhappy or aggressive comment from a colleague when sharing it with others? Is that rewording operation unethical? Or does the failure of ethics emerge when an algorithm — instead of, say, a professional editor — performs the filtering?

Ah, Artie McStrawman — pleased to see you, my old friend. Of course no one has ever said that filtering information is wrong. The complaint here is that Facebook filtered people’s feeds in order to conduct experiments on them without seeking their consent. Hidalgo has written an 1,100-word post that shows no sign at any point of having even the tiniest shred of understanding of what people are angry at Facebook about. This is either monumental dishonesty or monumental stupidity. I can't see any other possibility.

Fantasy and the Buffered Self

That's the title of my New Atlantis essay, now available in full online. Please check it out, and if you'd like to make comments, this is the place to do that.

Monday, June 30, 2014

modernism, revision, literary scholarship

Hannah Sullivan’s outstanding book The Work of Revision came out last year and got less attention than it deserves — though here’s a nice article from the Boston Globe. My review of the book has just appeared in Books and Culture, but it’s behind a paywall — and why, you may ask? Because B&C needs to make ends meet, that’s why, and if you haven’t subscribed you ought to, post haste.

Anyway, here’s the link and I’m going to quote my opening paragraphs here, because they relate to themes often explored on this blog. But do find a way to read Sullivan’s book.

Once upon a time, so the village elders tell us, there reigned a gentle though rather dull king called Literary Criticism, who always wore tweed and spoke in a low voice. But then, on either a very dark or very brilliant day, depending on who's telling the story, this unassuming monarch was toppled by a brash outsider named Theory, who dressed all in black, wore stylish spectacles, and spoke with a French accent. For a time it seemed that Theory would rule forever. But no king rules forever.

One can be neither definitive nor uncontroversial about such matters, given the chaotic condition of the palace records, but if I were in the mood to be sweeping, I would suggest that the Reign of Theory in Anglo-American literary study extended from approximately 1960 (Michel Foucault's Madness and Civilization) to approximately 1997 (Judith Butler's Excitable Speech: A Politics of the Performative). Its period of absolute dominance was rather shorter, from 1976 (Gayatri Spivak's English translation of Jacques Derrida's Of Grammatology) to 1989 (Stephen Greenblatt's Shakespearean Negotiations: The Circulation of Social Energy in Renaissance England). Those were heady days.

The ascendance of Theory brought about the occlusion of a set of humanistic disciplines that had for a long time been central to literary study, especially the various forms of textual scholarship, from textual editing proper to analytical bibliography. To take but one institutional example: at one time the English department of the University of Virginia, under the leadership of the great textual scholar Fredson Bowers, had been dominant in these fields, but Bowers retired in 1975, and by the time I arrived at UVA as a new graduate student in 1980, almost no one on the faculty was doing textual scholarship, and I knew no students who were interested in it. This situation would begin to be rectified in 1986 with the hiring of Jerome McGann, who renewed departmental interest in these fields and played a role in bringing Terry Belanger's Rare Book School from Columbia to Virginia (in 1992). Now Virginia is once more seen as a major player in textual scholarship, bibliography, the history of the book, and what was once called "humanities computing" — a field in which McGann was a pioneer — but is now more likely to be called "digital humanities."

Theory is still around; but its skeptical, endlessly ramifying speculations can now seem little more than airy fabrications in comparison to the scrupulous study of material texts and the very different kind of scrupulosity required to write computer programs that data-mine texts. The European theorist in black has had to give way to new icons of (scholarly) cool. Literary textual scholarship is back: more epistemologically careful, aware of the lessons of theory, but intimately connected to traditions of humanistic learning that go back at least to Erasmus of Rotterdam in the 16th century — and maybe even Eusebius of Caesarea in the 4th.

Sunday, June 29, 2014

the Empire strikes back

This defense of Facebook by Tal Yarkoni is telling in so many ways. Let me count some of them.

Yarkoni begins by taking note of the results of the experiment:

The largest effect size reported had a Cohen’s d of 0.02–meaning that eliminating a substantial proportion of emotional content from a user’s feed had the monumental effect of shifting that user’s own emotional word use by two hundredths of a standard deviation. In other words, the manipulation had a negligible real-world impact on users’ behavior. To put it in intuitive terms, the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches). Theoretically interesting, perhaps, but not very meaningful in practice.

This seems to be missing the point of the complaints about Facebook’s behavior. The complaints are not “Facebook successfully manipulated users’ emotions” but rather “Facebook attempted to manipulate users’ emotions without informing them that they were being experimented on.” That’s where the ethical question lies, not with the degree of the manipulation’s success. “Who cares if that guy was shooting at you? He missed, didn’t he?” — that seems to be Yarkoni’s attitude.
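
For what it's worth, the arithmetic behind the height analogy does check out. Here is a minimal sketch of the conversion (mine, not Yarkoni's), using only the two numbers given in the quote; Cohen's d expresses a mean difference in standard-deviation units, so multiplying it by an assumed standard deviation turns it back into raw units:

    # A rough check of the conversion (not from Yarkoni's post).
    # Cohen's d is a mean difference measured in standard-deviation units,
    # so d times an assumed SD gives the difference back in raw units.
    d = 0.02           # largest effect size reported for the Facebook study
    sd_height = 2.8    # assumed SD of US adult male height, in inches
    raw_shift = d * sd_height
    print(f"Equivalent shift in average height: {raw_shift:.3f} inches")
    # prints about 0.056 inches, i.e. roughly one twentieth of an inch

In other words, the "one twentieth of an inch" figure is accurate; the question is what, ethically, follows from it.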

Here’s another key point, according to Yarkoni:

Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words. Let me repeat that: Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions.

It may be true that “many people” assume that Facebook added content, but I have not seen even one say that. Does anyone really believe that Facebook is generating false content and attributing it to users? The concern I have heard people express is that they may not be seeing what their friends or family are rejoicing about or lamenting, and that such hidden information could be costly to them in multiple ways. (Imagine a close friend who is hurt with you because you didn’t commiserate with her when she was having a hard time. After all, the two of you are friends on Facebook, and she posted her lament there — you should have responded.)

But here’s the real key point that Yarkoni makes — key because it reveals just how arrogant our technological overlords are, and how deep their sense of entitlement:

It’s not clear what the notion that Facebook users’ experience is being “manipulated” really even means, because the Facebook news feed is, and has always been, a completely contrived environment. I hope that people who are concerned about Facebook “manipulating” user experience in support of research realize that Facebook is constantly manipulating its users’ experience. In fact, by definition, every single change Facebook makes to the site alters the user experience, since there simply isn’t any experience to be had on Facebook that isn’t entirely constructed by Facebook.... So I don’t really understand what people mean when they sarcastically suggest — as Katy Waldman does in her Slate piece — that “Facebook reserves the right to seriously bum you out by cutting all that is positive and beautiful from your news feed”. Where does Waldman think all that positive and beautiful stuff comes from in the first place? Does she think it spontaneously grows wild in her news feed, free from the meddling and unnatural influence of Facebook engineers?

Well, I’m pretty sure that Katy Waldman thinks “all that positive and beautiful stuff comes from” the people who posted the thoughts and pictures and videos — because it does. But no, says Yarkoni: All those stories you told about your cancer treatment? All those videos from the beach you posted? You didn't make that. That doesn't “come from” you. Yarkoni completely forgets that Facebook merely provides a platform — a valuable platform, or else it wouldn't be so widely used — for content that is provided wholly by its users.

Of course “every single change Facebook makes to the site alters the user experience” — but not all changes are ethically or substantively the same. Some manipulations are more extensive than others; changes in user experience can be made for many different reasons, some of which are better than others. That people accept without question some changes while vigorously protesting others isn’t a sign of inconsistency, it’s a sign that they’re thinking, something that Yarkoni clearly does not want them to do. Most people who use Facebook understand that they’ve made a deal in which they get a platform to share their lives with people they care about, while Facebook gets to monetize that information in certain restricted ways. They have every right to get upset when they feel that Facebook has unilaterally changed the deal, just as they would if they took their car to the body shop and got it back painted a different color. And in that latter case they would justifiably be upset even if the body shop pointed out that there was small print in the estimate form they signed permitting it to change the color of their car.

One last point from Yarkoni, and this one is the real doozy: “The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life.” Get that? In Yarkoni’s ethical cosmos, Facebook, Google, and Amazon — and presumably every other company you do business with, and for all I know the government (why not?) — can manipulate you all they want as long as they “improve your quality of life” according to their understanding, not yours, of what makes for improved quality of life.

Why do I say their understanding and not yours? Because you are not consulted in the matter. You are not asked beforehand whether you wish to participate in a life-quality-improving experiment, and you are not informed afterwards that you did participate. You do not get a vote about whether your quality of life actually has been improved. (Our algorithms will determine that.) The Great Gods of the Cloud understand what is best for you; that is all ye know on earth, and all ye need know.

In addition to all this, Yarkoni makes some good points, though they're generally along the other-companies-do-the-same line. I may say more about those in another post, if I get a chance. But let me wrap this up with one more note.

Tal Yarkoni directs the Psychoinformatics Lab in the Psychology department at the University of Texas at Austin. What do they do in the Psychoinformatics Lab? Here you go: “Our goal is to develop and apply new methods for large-scale acquisition, organization, and synthesis of psychological data.” The key term here is “large-scale,” and no one can provide vast amounts of this kind of data as well as the big tech companies that Yarkoni mentions. Once again, the interests of academia and Big Business converge. Same as it ever was.

Saturday, June 28, 2014

Fermi's paradox and hegemonising swarms

Over on Twitter, Robin Sloan pointed me to this post about the Fermi paradox, which got me thinking about that idea again for the first time in a long time. And I find that I still have the same question I’ve had in the past: Where’s the paradox?

That Wikipedia article (which is a pretty good one) puts the problem that Fermi perceived this way: “The apparent size and age of the universe suggest that many technologically advanced extraterrestrial civilizations ought to exist. However, this hypothesis seems inconsistent with the lack of observational evidence to support it.” But we have no telescopes powerful enough to see what might be happening on any of the small number of exoplanets that have been directly observed. So there’s no “observational evidence” one way or the other.

Unless, of course, we mean alien civilizations that might be observed right here on earth.

From the movie Signs

That’s where this way of formulating the problem comes in (again from the Wikipedia article): “Given intelligent life's ability to overcome scarcity, and its tendency to colonize new habitats, it seems likely that at least some civilizations would be technologically advanced, seek out new resources in space and then colonize first their own star system and subsequently the surrounding star systems.” Does “intelligent life” really have a “tendency to colonize new habitats”? Wouldn't it be more accurate to say simply that some human societies have this tendency?

The assumptions here are, it seems to me, pretty obvious and pretty crude: that the more intelligent “intelligent life” becomes, the more likely it will be to have an expansionary, colonizing impulse. In other words, superior alien civilizations will be to us as Victorian explorers were to the tribes of Darkest Africa. Higher intelligence is then identified with (if we’re inclined to be critical) the British Empire at its self-confident apogee or (if we’re inclined to be really critical) the Soviet Union or Nazi Germany in their pomp. (It’s all about the galactic Lebensraum, baby!)

But I see no reason whatsoever to grant this assumption. Why would the drive to become a “hegemonising swarm” — as Iain M. Banks refers to this kind of society in his Culture novels — be a mark of high intelligence? Though the Culture itself has strong hegemonizing tendencies, which it tries with partial success to keep under control, the most sophisticated societies in those books are the ones who have chosen to “sublime”, that is, opt out of ordinary space/time altogether.

Perhaps the impulse to colonize is, or could be, merely a stage in the development of intelligence — a stage to be gotten over. Maybe truly great intelligence manifests itself in a tendency towards contemplation and a calm acceptance of limits. Maybe there are countless societies in the universe far superior to our own who are invisible to us because they have learned the great blessings to be had when you just mind your own damned business.

Tuesday, June 24, 2014

Dear Mr. Watterson

The one great impression I have from this much-lauded film — which I just got around to watching — is how imperceptive, and even incurious, it is about what makes Calvin and Hobbes the best of its genre. There are a good many vague mumbles about its being well-drawn and well-told, and imaginative, and “intimate” (whatever that means), and so on and so forth.

The film doesn’t seem to know what it’s about: the history of cartooning? The death of newspapers? Chagrin Falls, Ohio? The promise and peril of marketing?

So let’s try to get a grip on the question of the strip’s greatness. Calvin and Hobbes is about finding freedom within structures of constraint, and being able to do so through the strength that comes from knowing that you are unconditionally loved and perfectly understood, even, or perhaps especially, when the one who understands you perfectly sees your flaws and foibles as well as your charms and virtues.

The strip is therefore concerned with the interaction of complex forces that are always in tension with one another, which requires both a highly energetic standard visual style and the creation of multiple secondary visual styles to illuminate the particular points at which those forces intersect.

That’s enough to get us started, I think.

Sunday, June 22, 2014

laptops of the Borg

What, yet another Borg-Complex argument for laptops in the classroom? Yeah. Another one.

Laptops are not a “new, trendy thing” as suggested in the final sentence of the article – they are a standard piece of equipment that, according to the Pew Internet and American Life Project, are owned by 88% of all undergraduate students in the US (and that’s data from four years ago). The technology is not going away, and professors trying to make it go away are simply never going to win that battle. If we want to have more student attention, banning technology is a dead end. Let’s think about better pedagogy instead.

Sigh. It should not take a genius to comprehend the simple fact that the ongoing presence and usefulness of laptops does not in itself entail that they should be present in every situation. "Banning laptops from the shower is not the answer. Laptops are not going away, and if we want to have cleaner students, we need to learn to make use of this invaluable resource."

And then there's the idea that if you're not more interesting than the internet you're a bad teacher. Cue Gabriel Rossman:

[embedded tweet by Gabriel Rossman]

Honestly. 

Robert Talbert, the author of that post, assumes that a teacher would only ban laptops from the classroom because he or she is lecturing, and we all know — don't we? — that lecturing is always and everywhere bad pedagogy. (Don't we??) But here's why I ban laptops from my classrooms: because we're reading and discussing books. We look at page after page, and my students and I use both hands to do that, and then I encourage them to mark the important passages, and take brief notes on them, with pen or pencil. Which means that there are no hands left over for laptops. And if they were typing on their laptops, they'd have no hands left over for turning to the pages I asked them to turn to. See the problem?

I've said it before, often, but let me try it one more time: Computers are great, and I not only encourage their use by my students, I try to teach students how to use computers better. But for about three hours a week, we set the computers aside and look at books. It's not so great a sacrifice. 

Thursday, May 29, 2014

Bonhoeffer and Technopoly

As the year 1942 drew to a close, Dietrich Bonhoeffer — just months away from being arrested and imprisoned by the Gestapo — sat down to write out ein Rückblick — a look back, a review, a reckoning — of the previous ten years of German experience, that is, of the Nazi years.

This look back is also a look forward: it is a document that asks, “Given what has happened, what shall we now do?” And a very subtle and important section, early in the “reckoning,” raises the questions entailed by political and social success. How are our moral obligations affected when the forces we most strenuously resist come to power anyway?

Although it is certainly not true that success justifies an evil deed and shady means, it is impossible to regard success as something that is ethically quite neutral. The fact is that historical success creates a basis for the continuance of life, and it is still a moot point whether it is ethically more responsible to take the field like a Don Quixote against a new age, or to admit one’s defeat, accept the new age, and agree to serve it. In the last resort success makes history; and the ruler of history [i.e., God] repeatedly brings good out of evil over the heads of the history-makers. Simply to ignore the ethical significance of success is a short-circuit created by dogmatists who think unhistorically and irresponsibly; and it is good for us sometimes to be compelled to grapple seriously with the ethical problem of success. As long as goodness is successful, we can afford the luxury of regarding it as having no ethical significance; it is when success is achieved by evil means that the problem arises.

It seems to me that the question that Bonhoeffer raises here applies in important ways to those of us who struggle against a rising technocracy or Technopoly, even if we don't think those powers actually evil — certainly not evil in the ways the Nazis were. But well-intentioned people with great power can do great harm.

Suppose, then, that we do not want Technopoly to win, to gain widespread social dominance — but it wins anyway (or has already won). What then? Bonhoeffer:

In the face of such a situation we find that it cannot be adequately dealt with, either by theoretical dogmatic arm-chair criticism, which means a refusal to face the facts, or by opportunism, which means giving up the struggle and surrendering to success. We will not and must not be either outraged critics or opportunists, but must take our share of responsibility for the moulding of history in every situation and at every moment, whether we are the victors or the vanquished.

So the opportunism of the Borg Complex is ruled out, but so too is huffing and puffing and demanding that the kids get off my lawn. Bonhoeffer’s reasons for rejecting the latter course are interesting: he thinks denunciation-from-a-distance is a failure to “take our share of responsibility for the moulding of history.” The cultural conditions are not what we would have them be; nevertheless, they are what they are, and we may not excuse ourselves from our obligations to our neighbors by pointing out that we have fought and lost and now will go home and shut the door. We remain responsible to the public world even when that world is not at all what it would be if we had our way. We have work to do. (Cue “Superman’s Song”, please.)

Bonhoeffer presses his point:

One who will not allow any occurrence whatever to deprive him of his responsibility for the course of history — because he knows that it has been laid on him by God — will thereafter achieve a more fruitful relation to the events of history than that of barren criticism and equally barren opportunism. To talk of going down fighting like heroes in the face of certain defeat is not really heroic at all, but merely a refusal to face the future.

But why? Why may I not wash my hands of the whole mess?

The ultimate question for a responsible man to ask is not how he is to extricate himself heroically from the affair, but how the coming generation is to live. It is only from this question, with its responsibility towards history, that fruitful solutions can come, even if for the time being they are very humiliating. In short, it is much easier to see a thing through from the point of view of abstract principle than from that of concrete responsibility. The rising generation will always instinctively discern which of these we make the basis of our actions, for it is their own future that is at stake.

In short: it’s not about me. It’s not about you. It’s about how the coming generation is to live. To “wash my hands of the whole mess” is to wash my hands of them, to leave them to navigate the storms of history without assistance. And even if the assistance I can give is slight and weak, I owe them that.

In his brilliant new biography of Bonhoeffer, Charles Marsh points out that “After Ten Years,” though addressed immediately to family and friends, is more deeply addressed to the German social elite from which Bonhoeffer came. And, Marsh suggests, what Bonhoeffer is calling for here is the rise of an “aristocracy of conscience.” Now that, it seems to me, is an elite worthy of anyone’s aspiration.

It is with these obligations to the coming generation in mind, I think, that we are to consider how to respond to the powers that reign in our world. It may be the case that those powers turn out to be less wicked than the ones Bonhoeffer had to confront; there are worse things than Technopoly, and many millions of people in this world have to face them. But if we are spared those, then so much the better for us — and so much less convincing are any excuses we might want to make for inaction.

Wednesday, May 28, 2014

Christian humanism and the Twitter tsunamis

Trigger warning: specifically Christian reflections ahead. 


The reason I want to say something about the two recent Twitter tsunamis is that they seem to have some significant, but little-noted, elements in common.

I’m going to start with something that I’ve hesitated to say, but here goes: I think my lack of enthusiasm for Ta-Nehisi Coates’s essay on reparations is largely a function of, ahem, age. The people in my Twitter feed who were most enthusiastic about Coates’s essay — and the enthusiasm got pretty extreme — tended to be much younger than I am, which is to say, tended to be people who don't remember the Civil Rights Movement and its aftermath. Or, to put it yet another and more precisely relevant way, people who don't remember when a regular topic of American journalism was the crushing poverty imposed on black Americans by a history of pervasive racism.

Conversely, I spent much of my adolescence and early adulthood trying to understand what was going on in my home state (Alabama) and home town (Birmingham) by reading Marshall Frady and Howell Raines and, a little later, Stanley Crouch and Brent Staples, and above all — far above all — James Baldwin, whose “Stranger in the Village” and The Fire Next Time tore holes in my mental and emotional world. There’s nothing in Coates’s essay that, in my view, wasn’t done far earlier and far better by these writers.

Which doesn't mean, I’ve come to see, that anyone who loved Coates’s essay was wrong to do so. It seemed like old news to me, but that’s because I’m old. Samuel Johnson said that people need to be reminded far more often than they need to be instructed, and it is perhaps time for a widely-read reminder of the ongoing and grievous consequences of racism in America.

But I do think the strong response to Coates’s essay indicates that the American left has to a considerable extent lost the thread when it comes to race and poverty. (I do not mention the American right in this context because my fellow conservatives have been lastingly and culpably blind to the ongoing cruelty of racism, and have often thoughtlessly participated in that cruelty.) For that left, perhaps Coates’s essay can be a salutary reminder that there are millions of people in America whose problems are far worse than websites’ or public restrooms’ failures to recognize their preferred gender identity — which is the sort of thing I’m more likely to see blog posts and tweets about these days.

Which leads me to the second tsunami, the response to the shootings in Santa Barbara. I was interested in how this extremely rare event — of a kind that’s probably not getting more common — led to a more useful and meaningful discussion of common dangers for women, as exemplified by the #YesAllWomen hashtag. Even though I think “hashtag activism” is an absurd parody of the real thing, I thought the rise of that particular hashtag marked a welcome shift from the internet’s typical hyperattentiveness to the Big Rare Event towards the genuine problems of everyday life.

But even as some good things were happening, I also saw the all-too-typical — in social media and in life more generally — lining up into familiar camps. It’s as true as ever that These Tragic Events Only Prove My Politics — even though that site hasn’t been updated in a long time — so I was treated to a whole bunch of tweets casually affirming that mass murder is the natural and inevitable result of “heteronormativity” and “traditional masculinity.” And I saw far more comments from people attacking the #YesAllWomen hashtag as “typical feminist BS” and ... well, and a lot worse.

No surprises there. But I was both somewhat surprised and deeply disappointed to see how many of the men attacking users of the #YesAllWomen hashtag — users that in every single case I saw the attackers were not following, which means that they were going out of their way to look for women who were hurt and upset by the shootings so they could belittle those concerns — used their Twitter bios to identify themselves as Christians. (One of the most self-righteously sneering guys I saw has a bio saying he wants to “code like Jesus.”)

And if you don't see the problem with that, I would suggest that you read some of the “one another” verses in the Bible, like Romans 12:16: “Live in harmony with one another. Do not be haughty, but associate with the lowly. Never be wise in your own sight.” Or this passage from Ephesians 4: “Let all bitterness and wrath and anger and clamor and slander be put away from you, along with all malice. Be kind to one another, tenderhearted, forgiving one another, as God in Christ forgave you.” And if you’re a Christian and think those rules only apply to your interactions with your fellow Christians, well, maybe there’s something in the Bible about how you should treat your enemies. As Russell Moore has just written, “Rage itself is no sign of authority, prophetic or otherwise.”

There are women all over the world who live in daily fear of verbal harassment at best, and often much, much worse. They are our sisters, our mothers, our daughters, our wives — or just our friends. How can we fail to be compassionate towards them, or to sympathize with their fear and hurt? How can we see their fear as a cause for our self-righteous self-defense? To think of some supposed insult to our dignity in such circumstances seems to me to drift very far indeed from the spirit, as well as the commandments, of Christ.

I began this post by saying that the two recent tsunamis have something in common, and this, I think, is it: hurt and anger at the failure of powerful human beings to treat other and less powerful people as fully human. This has been a theme in my writing for a long time, but is the heart and soul of my history of the doctrine of original sin, which I’m going to quote now. This is a passage about the revulsion towards black people the great nineteenth-century Swiss scientist Louis Agassiz felt when he came to America for the first time:

Agassiz’s reaction to the black servants at his Philadelphia hotel provides us the opportunity to discuss an issue which has been floating just beneath the surface of this narrative for a long time. One of the arguments that I have been keen to make throughout this book is that a belief in original sin serves as a kind of binding agent, a mark of “the confraternity of the human type,” an enlistment of us all in what Eugen Rosenstock-Huessy called the “universal democracy of sinners.” But why should original sin alone, among core Christian doctrines, have the power to do that? What about that other powerful idea in Genesis, that we are all made in the image of God? Doesn’t that serve equally well, or even better, to bind us as members of a single family?

The answer is that it should do so, but usually does not. Working against the force of that doctrine is the force of familiarity, of prevalent cultural norms of behavior and even appearance. A genuine commitment to the belief that we are all created equally in the image of God requires a certain imagination — imagination which Agassiz, try as he might, could not summon: “it is impossible for me to repress the feeling that they are not of the same blood as us.” Instinctive revulsion against the alien will trump doctrinal commitments almost every time. Black people did not feel human to him, and this feeling he had no power to resist; eventually (as we shall see) his scientific writings fell into line with his feelings.

By contrast, the doctrine of original sin works with the feeling that most of us have, at least some of the time, of being divided against ourselves, falling short of the mark, inexplicably screwing up when we ought to know better. It takes relatively little imagination to look at another person and think that, though he is not all he might be, neither am I. It is true that not everyone can do this: the Duchess of Buckingham couldn’t. (“It is monstrous to be told you have a heart as sinful as the common wretches that crawl on the earth.”) But in general it is easier for most of us to condescend, in the etymological sense of the word — to see ourselves as sharing shortcomings or sufferings with others — than to lift up people whom our culturally-formed instincts tell us are decidedly inferior to ourselves. If misery does not always love company, it surely tolerates it quite well, whereas pride demands distinction and hierarchy, and is ultimately willing to pay for those in the coin of isolation. That the doctrine of a common creation in the image of God doesn’t do more to help build human community and fellow-feeling could be read as yet more evidence for the reality of original sin.

So you can see that my own response to the problems I’ve been seeing discussed on Twitter is a Christian one, more specifically one grounded in a theological anthropology that sees all of us as creatures made in the image of God who have (again, all of us) defaced that image. And it is in the recognition of our shared humanity — both in its glories and its failings but often starting with its failings — that we build our case against abuse and exploitation.

But to have a politics grounded in this Christian humanism is also to be at odds with most of the rhetoric I see on Twitter about the recent controversies. I mentioned earlier the “lining up into familiar camps,” and those camps are always exclusive and oppositional. The message of identity politics, as practiced in America anyway, is not only that “my experience is unlike yours” — which is often true — but “my experience can never be like yours, between us there will always be a great gulf fixed” — which is a tragic mistake. That way of thinking leads to absurdities like the claim that men like Elliot Rodger are the victims of feminism, and, from other camps, the complete failure to acknowledge that five of the seven people Rodger killed were men. It also leads, I think, and here I want to tread softly, to the assumption that it’s going to be relatively simple to figure out who should receive reparations and who should pay them.

It’s not wrong to have camps, to belong to certain groups, but it’s disastrous to be unable to see beyond them, and impossible to build healthy communities if we can’t see ourselves as belonging to one another.

So why does identity politics so frequently, and so completely, trump a belief in our shared humanity? I’m not sure, but the book I’m currently writing takes up this question. It deals with a group of Christian intellectuals who suspect, as many others in the middle of the twentieth century also suspected, that democracy is not philosophically self-sustaining — that it needs some deeper moral or metaphysical commitments to make it plausible. And for T. S. Eliot and Jacques Maritain and Henri de Lubac and Simone Weil and C. S. Lewis and W. H. Auden, only the Christian account of “the confraternity of the human type” was sufficiently strong to bind us together. Otherwise, why should I treat someone as equal to me simply because he or she belongs to the same species?