Text Patterns - by Alan Jacobs

Thursday, July 31, 2014

a revolution I can get behind!

The Power of the Doodle: Improve Your Focus and Memory:

Recent research in neuroscience, psychology and design shows that doodling can help people stay focused, grasp new concepts and retain information. A blank page also can serve as an extended playing field for the brain, allowing people to revise and improve on creative thoughts and ideas.

Doodles are spontaneous marks that can take many forms, from abstract patterns or designs to images of objects, landscapes, people or faces. Some people doodle by retracing words or letters, but doodling doesn't include note-taking. 'It's a thinking tool,' says Sunni Brown, an Austin, Texas, author of a new book, 'The Doodle Revolution.' It can affect how we process information and solve problems, she says.

The Doodle Revolution! Yes!

I doodled my way through my education — always abstract patterns, usually a kind of cross-hatching — and almost never took notes. This puzzled my teachers, but I always remembered what I heard in class better when I doodled. 

When I was in college, I spent an entire semester developing an immensely intricate doodle on the back of one of my notebooks. When I finally filled in the last corner, on the last day of class, I sat back and looked with satisfaction on my achievement. Then I felt a tap on my shoulder. A guy who had been sitting behind me all term said, “I’ll give you five bucks for that.” So I carefully tore off the back cover and exchanged it for his fiver. We both went away happy. 

totem and taboo

I’ve been enjoying and profiting from James Poulos’s ongoing analysis of what he calls the “pink police state”: see installments to date here and here. This passage from the second essay strikes me as especially noteworthy: 

The new regime is not totalitarian, fascist, socialist, capitalist, conservative, or liberal, according to the accepted and common definitions of those terms. It is not even adequately described as corporatist, although corporatism is very much at home within it. The “pink police state” is not a police state in the sense that George Orwell would be familiar with, but one in which a militarized, national policing apparatus is woven into the fabric of trillions of transactions online and off. Nor is it a “pinko commie” regime in the sense of enforcing “political correctness” out of total allegiance to Party; rather, it enforces the restrictions and permissions doled out by its sense of “clean living.” To invoke Michel Foucault again, ours is an age when governance is inseparable from hygiene in the minds of the elite that rules over both the private and public sector. To them, everything is theoretically a health issue.

This hygienic impulse is indeed vital to the current regime, and has been growing in intensity for some time. It reaches into every area of culture. C. S. Lewis noted its presence fifty years ago in literary criticism, after articulating his own view of the pleasures of reading: 

Being the sort of people we are, we want not only to have but also to analyse, understand, and express, our experiences. And being people at all—being human, that is social, animals—we want to 'compare notes', not only as regards literature, but as regards food, landscape, a game, or an admired common acquaintance. We love to hear exactly how others enjoy what we enjoy ourselves. It is natural and wholly proper that we should especially enjoy hearing how a first-class mind responds to a very great work. That is why we read the great critics with interest (not often with any great measure of agreement). They are very good reading; as a help to the reading of others their value is, I believe, overestimated.

This view of the matter will not, I am afraid, satisfy what may be called the Vigilant school of critics. To them criticism is a form of social and ethical hygiene. They see all clear thinking, all sense of reality, and all fineness of living, threatened on every side by propaganda, by advertisement, by film and television. The hosts of Midian 'prowl and prowl around'. But they prowl most dangerously in the printed word. 

This idea that criticism is required to discourage people from reading (or viewing!) things that are bad for them, or not ideally good for them — or, to put it in a more pointed way, that criticism is necessary for policing cultural boundaries — has been around for a while but has become, I think, increasingly prominent. I’ve written a bit about it on this blog, for instance here. And I see it at work in my friend Ruth Graham’s critique of adults reading YA fiction. (Austin Kleon helpfully gathered some of my thoughts on the matter here.) 

So this “vigilant” attitude towards reading is just one example of the ways in which hygienic policing is intrinsic to the current cultural regime. And it strikes me that what may be needed, and what James is to some degree providing, is what I think I want to call a psycho-anthropological analysis of this policing. I am not, generally speaking, a fan of Freud, but there are passages in his Totem and Taboo that strike me as deeply relevant to the questions James raises. 

Think, for instance, of his point that taboo “means, on the one hand, ‘sacred’, ‘consecrated’, and on the other ‘uncanny’, ‘dangerous’, ‘forbidden’, ‘unclean’.” That which is taboo is automatically a matter of great fascination, simultaneously frightening and compelling. 

And this: 

Anyone who has violated a taboo becomes taboo himself because he possesses the dangerous quality of tempting others to follow his example: why should he be allowed to do what is forbidden to others? Thus he is truly contagious in that every example encourages imitation, and for that reason he himself must be shunned.

But a person who has not violated any taboo may yet be permanently or temporarily taboo because he is in a state which possesses the quality of arousing forbidden desires in others and of awakening a conflict of ambivalence in them.

Having rejected the taboos of our ancestors, especially our Christian ancestors, the current regime does not live without taboos but replaces them with others; and having created a world without gods, it places upon itself the greatest responsibility imaginable for preserving moral cleanliness. In the absence of gods, the totems and the taboos alike increase in magnitude.  

I expect James will be saying more about this kind of thing in future installments of the series, and I hope to be replying here. I want to comment especially on the totems or idols that balance out the taboos. 

Saturday, July 26, 2014

what we can claim for the liberal arts

Please read this wonderful post by Tim Burke on what liberal-arts education can and can’t do — or rather, what we who love it can plausibly claim on its behalf and what we can’t. Excerpt:


No academic (I hope) would say that education is required to achieve wisdom. In fact, it is sometimes the opposite: knowing more about the world can be, in the short-term, an impediment to understanding it. I think all of us have known people who are terrifically wise, who understand other people or the universe or the social world beautifully without ever having studied anything in a formal setting. Some of the wise get that way through experiencing the world, others through deliberate self-guided inquiry.

What I would be prepared to claim is something close to something Wellmon says, that perhaps college might “alert students to an awareness of what is missing, not only in their own colleges but in themselves and the larger society as well”.

But my “might” is a bit different. My might is literally a question of probabilities. A well-designed liberal arts education doesn’t guarantee wisdom (though I think it can guarantee greater concrete knowledge about subject matter and greater skills for expression and inquiry). But it could perhaps be designed so that it consistently improves the odds of a well-considered and well-lived life. Not in the years that the education is on-going, not in the year after graduation, but over the years that follow. Four years of a liberal arts undergraduate experience could be far more likely to produce not just a better quality of life in the economic sense but a better quality of being alive than four years spent doing anything else.

There are several important elements to Tim’s argument, the most important of which are: 

(a) It does no good to answer simplistic denunciations of liberal-arts education with simplistic cheerleading. Just as there are no books the reading of which will automatically make you a better person — thus the G. C. Lichtenberg line Auden liked to quote: “A book is like a mirror; if an ass looks in, you can’t expect an apostle to look out” — so too there is no form of education that will automatically create better people. But some forms of education, as Tim says, may “improve the odds.” That’s the point at which we need to engage the argument. 

(b) If we who practice and love the liberal arts want to defend them, we also have to be prepared to improve them, to practice them better — and this may well require of us a rethinking of how the liberal arts tradition relates to disciplinarity. As always, Tim is refusing the easy answers here, which are two: first, that the disciplinary structures created in and for the modern university are adequate to liberal education; and second, that we should ditch the disciplines and be fully interdisciplinary. Both answers are naïve. (The problems with the latter, by the way, were precisely identified by Stanley Fish a long time ago.) The academic disciplines — like all limiting structures, including specific vocabularies, as Kenneth Burke pointed out in his still-incisive essay on “terministic screens” — simultaneously close off some options and enable others. We need more careful scrutiny of how our disciplinary procedures do their work on and in and with students. 

I’m mainly channeling Tim here, but I would just add that another major element we need to be thinking about is desire: What are students drawn to, what do they love? To what extent can we as teachers shape those desires? My colleague Elizabeth Corey has recently published a lovely essay — alas, paywalled — on education as the awakening of desire; and while I wholeheartedly endorse her essay, I have also argued that there are limits to what we can do in that regard. 

In any event, the role of desire in liberal education is a third vital topic for exploration, in addition to the two major points I have extracted from Tim’s post — which, let me remind you, you should read. 

Friday, July 25, 2014

you must remember this

Please forgive me for ignoring the main thrust of this post by William Deresiewicz. I'm just going to comment on one brief but strange passage:

A friend who teaches at a top university once asked her class to memorize 30 lines of the eighteenth-century poet Alexander Pope. Nearly every single kid got every single line correct. It was a thing of wonder, she said, like watching thoroughbreds circle a track.

A “thing of wonder”? Memorizing a mere thirty lines of poetry?

As I've often noted, in any class in which I assign poetry I ask students to memorize at least 50 lines (sometimes 100) and recite them to me. I've been doing that for more than twenty years now, and all the students get all the lines right. If they don't, they come back until they do. It's not a big deal. Yet to Deresiewicz, who taught for years at Yale, and his friend who teaches at a “top university,” the ability to recite thirty lines of Pope — probably the easiest major English poet to memorize, given his exclusive use of rhyming couplets — seems an astonishing mental feat. What would they think of John Basinger, who knows the whole of Paradise Lost by heart? Or even a three-year-old reciting a Billy Collins poem — which is also every bit of 30 lines?

In my school days I had to memorize only a few things: the preamble to the Constitution, the Gettysburg Address, a Shakespeare passage or two. But for previous generations, memorization and recitation were an essential and extensive part of their education. Perhaps only the classical Christian education movement keeps this old tradition alive. The amazement Deresiewicz and his friend feel at a completely trivial achievement indicates just how completely memorization has been abandoned. In another generation we'll swoon at someone who can recite her own phone number.

 

UPDATE: Via my friend Jessica Pellien at Princeton University Press, a book by Catherine Robson called Heart Beats: Everyday Life and the Memorized Poem. Here’s the Introduction in PDF.

the right tools for the job

This talk by Matthew Kirschenbaum provokes much thought, and I might want to come back to some of its theses about software. But for now I'd just like to call attention to his reflections on George R. R. Martin's choice of writing software:

On May 13, in conversation with Conan O’Brien, George R. R. Martin, author, of course, of the Game of Thrones novels, revealed that he did all of his writing on a DOS-based machine disconnected from the Internet and lovingly maintained solely to run … WordStar. Martin dubbed this his “secret weapon” and suggested the lack of distraction (and isolation from the threat of computer viruses, which he apparently regards as more rapacious than any dragon’s fire) accounts for his long-running productivity.

And thus, as they say, “It is known.” The Conan O’Brien clip went viral, on Gawker, Boing Boing, Twitter, and Facebook. Many commenters immediately if indulgently branded him a “Luddite,” while others opined it was no wonder it was taking him so long to finish the whole Song of Ice and Fire saga (or less charitably, no wonder that it all seemed so interminable). But WordStar is no toy or half-baked bit of code: on the contrary, it was a triumph of both software engineering and what we would nowadays call user-centered design…. WordStar’s real virtues, though, are not captured by its feature list alone. As Ralph Ellison scholar Adam Bradley observes in his work on Ellison’s use of the program, “WordStar’s interface is modelled on the longhand method of composition rather than on the typewriter.” A power user like Ellison or George R. R. Martin who has internalized the keyboard commands would navigate and edit a document as seamlessly as picking up a pencil to mark any part of the page.

There was a time when I wouldn't have understood how Martin could possibly have preferred some ugly old thing like WordStar. I can remember when my thinking about these matters started to change. It happened fifteen years ago, when I read this paragraph by Neal Stephenson:

In the GNU/Linux world there are two major text editing programs: the minimalist vi (known in some implementations as elvis) and the maximalist emacs. I use emacs, which might be thought of as a thermonuclear word processor. It was created by Richard Stallman; enough said. It is written in Lisp, which is the only computer language that is beautiful. It is colossal, and yet it only edits straight ASCII text files, which is to say, no fonts, no boldface, no underlining. In other words, the engineer-hours that, in the case of Microsoft Word, were devoted to features like mail merge, and the ability to embed feature-length motion pictures in corporate memoranda, were, in the case of emacs, focused with maniacal intensity on the deceptively simple-seeming problem of editing text. If you are a professional writer–i.e., if someone else is getting paid to worry about how your words are formatted and printed–emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish. For page layout and printing you can use TeX: a vast corpus of typesetting lore written in C and also available on the Net for free.

The key phrase here, for me, was the deceptively simple-seeming problem of editing text. When I read those words I realized that editing text was much of what I needed to do, and that Microsoft Word wasn't very good at that. Stephenson's essay (still a delight to read, by the way, though quite outdated now) set me off on a long quest for the best writing environment that has ended up not with emacs or vi but rather with a three-component system. I have written about these matters before, but people ask me about them all the time, so I thought I would provide a brief summary of my system.

The first component is my preferred text editor, BBEdit, which seems to me to strike a perfect balance between the familiar conventions of Macintosh software and the power typically found only in command-line text editors.

The second component is the scripts John Gruber (with help from Aaron Swartz) wrote to create Markdown, a simple and easy-to-use but powerful syntax for indicating structure in plain-text documents.

The third component is John MacFarlane's astonishing pandoc, which allows me to take my Markdown-formatted plain text and turn it into … well, almost anything this side of an ice-cream sundae. If my publisher wants an MS Word document, pandoc will turn my Markdown text into that. If I want to create an e-book, pandoc can transform that same text into EPUB. When I need to make carefully formatted printed documents, for instance a course syllabus, pandoc will make a LaTeX file. I just can't get over how powerful this tool is. Now I almost never have to write in anything except BBEdit and my favorite text editor for the iPad, Editorial.
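
Since people ask about the details, here is a minimal sketch of how the pieces fit together — not my actual scripts, just an illustration in Python, and one that assumes pandoc is installed and on your PATH; the file names and contents are placeholders:

    # A sketch of the Markdown -> pandoc workflow described above.
    # Assumes pandoc is on the PATH; names and content are placeholders.
    import subprocess

    markdown_source = """\
    # Course Syllabus

    Plain text, with *lightweight* markers for structure:
    headings, emphasis, lists.

    ## Schedule

    - Week 1: introductions
    - Week 2: first readings
    """

    with open("syllabus.md", "w", encoding="utf-8") as f:
        f.write(markdown_source)

    # One plain-text source, several output formats; pandoc infers
    # each target format from the output file's extension.
    for output in ["syllabus.docx", "syllabus.epub", "syllabus.tex"]:
        subprocess.run(["pandoc", "syllabus.md", "-o", output], check=True)

The point of the design is that the structure lives in the plain text itself, so the same source file can feed the publisher, the e-book, and the printed syllabus.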

That's it. With a good text editor and some scripts for formatting, a writer can focus all of his or her attention on the deceptively simple-seeming problem of editing text. That makes writing less frustrating and more fun. This is what George R. R. Martin has achieved with WordStar, and he's right to stick with it rather than turn to tools that do the essential job far less well.

Wednesday, July 23, 2014

breaking the spell


I just got back from a brief vacation at Big Bend National Park, and when I was packing I made sure to stick a novel in my backpack. I’m not going to name it, but it is a very recent novel, by a first-time novelist, that has received a great deal of praise. Before my departure I had already read the first chapter and found it quite promising. I was excited.

The next few chapters, I discovered while on my trip, were equally compelling; they carried me some fifty pages into the book. But in the next fifty pages the narrative energy seemed to flag. The act of reading started to feel effortful. And then, about 130 pages in (about halfway through the book), I had a sudden thought: This is just someone making up a story.

And that was it; the spell was broken, my investment in the novel over and done with. I couldn’t read another paragraph. Which is an odd thing, because of course it was just someone making up a story — that’s what novels are, and I knew when I bought the book what it was. But nothing can be more deadly to the experience of reading fiction than the thought that came (quite unbidden) to my mind.

Coleridge famously wrote of literature’s power “to transfer from our inward nature a human interest and a semblance of truth sufficient to procure for these shadows of imagination that willing suspension of disbelief for the moment, which constitutes poetic faith.” (Like most writers before the twentieth century, Coleridge used “poetic” to mean what we now call “literary.”) But really, the requisite suspension of disbelief is willing only in a peculiar anticipatory sense: it has to become unwilling, involuntary, in the actual act of reading, or else all the magic of storytelling is lost.

I have found in the past few years that this has happened to me more and more often as I read fiction, especially recent fiction. There are many possible reasons for this, including an increasing vulnerability to distraction and the return to the reading habits of my youth that I describe in this essay. But I’m inclined to think that neither of those is the problem. Rather, I think that for the last fifty years or more “literary” fiction, and a good deal of “genre” fiction as well, has recycled far too many of the same themes and tropes. Like a number of other readers, I’m inclined to assign much of the blame for this to the capture of so much English-language fiction by university-based creative writing programs, which suffer from the same pressures of conformity that all academic work suffers from. (And yes, the author of the novel I abandoned is a creative-writing-program graduate, though I just now looked that up.)

In other words, I have just been around the same few fictional blocks far too many times. I’m tired of them all, and am only satisfied when I’m surprised.

Maybe that’s not the problem. But I sure feel that it is.

 

P.S. Something that just occurred to me: A long time ago Northrop Frye noted — I can’t at the moment recall where — Ben Jonson's frustration that Shakespeare’s plays were far more inconsistently and incoherently put together than his own but were nevertheless, somehow, more popular, and commented that this was just it: Jonson’s plays were put together, more like “mechanical models of plays” than the real thing, whereas Shakespeare’s plays had all the odd growths and irregular edges of organic life. This is my chief complaint about much fiction of the past fifty years, including much very highly regarded fiction, like that of John Updike: these aren’t novels, they are mechanical models of novels. Precision-engineered down to the last hidden screw, but altogether without the spark of life.

Thursday, July 17, 2014

my course on the "two cultures"

FOTB (Friends Of This Blog), I have a request for you. This fall I’m teaching a first-year seminar for incoming Honors College students, and our topic is the Two Cultures of the sciences and the humanities. We’ll begin by exploring the lecture by C. P. Snow that kicked off the whole debate — or rather, highlighted and intensified a debate that had already been going on for some time — and the key responses Snow generated (F. R. Leavis, Lionel Trilling, Loren Eiseley). We’ll also read the too-neglected book that raised many of the same issues in more forceful ways, a few years before Snow: Jacob Bronowski’s Science and Human Values.

Then we’ll go back to try to understand the history of the controversy before moving forward to consider the forms it is taking today. Most of the essays I’ll assign may be found by checking out the “twocultures” tag of my Pinboard bookmarks, but we’ll also be taking a detour into science/religion issues by considering Stephen Jay Gould’s idea of non-overlapping magisteria and some of the responses to it.

What other readings should I consider? I am a bit concerned that I am presenting this whole debate as one conducted by white Western men — are there approaches to these questions, by women or by people from other parts of the world, that might put the issues in a different light? Please make your recommendations in the comments below or on Twitter.

Thanks!

the problems of e-reading, revisited

In light of the conversation we were having the other day, here is some new information:

The shift from print to digital reading may lead to more than changes in speed and physical processing. It may come at a cost to understanding, analyzing, and evaluating a text. Much of Mangen’s research focusses on how the format of reading material may affect not just eye movement or reading strategy but broader processing abilities. One of her main hypotheses is that the physical presence of a book—its heft, its feel, the weight and order of its pages—may have more than a purely emotional or nostalgic significance. People prefer physical books, not out of old-fashioned attachment but because the nature of the object itself has deeper repercussions for reading and comprehension. “Anecdotally, I’ve heard some say it’s like they haven’t read anything properly if they’ve read it on a Kindle. The reading has left more of an ephemeral experience,” she told me. Her hunch is that the physicality of a printed page may matter for those reading experiences when you need a firmer grounding in the material. The text you read on a Kindle or computer simply doesn’t have the same tangibility.

In new research that she and her colleagues will present for the first time at the upcoming conference of the International Society for the Empirical Study of Literature and Media, in Torino, Italy, Mangen is finding that that may indeed be the case. She, along with her frequent collaborator Jean-Luc Velay, Pascal Robinet, and Gerard Olivier, had students read a short story—Elizabeth George’s “Lusting for Jenny, Inverted” (their version, a French translation, was called “Jenny, Mon Amour”)—in one of two formats: a pocket paperback or a Kindle e-book. When Mangen tested the readers’ comprehension, she found that the medium mattered a lot. When readers were asked to place a series of events from the story in chronological order—a simple plot-reconstruction task, not requiring any deep analysis or critical thinking—those who had read the story in print fared significantly better, making fewer mistakes and recreating an over-all more accurate version of the story. The words looked identical—Kindle e-ink is designed to mimic the printed page—but their physical materiality mattered for basic comprehension.

Note that the printed book is being compared here to the Kindle, which means that the distractions of connectivity I talked about in the previous post aren’t relevant here. (I’m assuming that they mean an e-ink Kindle rather than a Kindle Fire, though it would be important to know that for sure.) 

My hunch, for what it’s worth, is that it is indeed “the physicality of the printed page” that makes a significant difference — in a couple of specific senses.

First of all, the stability of the text on a printed page allows us (as most readers know) to have visual memories of where passages are located: we see the page, as it were, divided into quadrants — upper left, lower left, upper right, and lower right. This has mnemonic value. 

Second, the three-dimensionality of a book allows us to connect certain passages with places in the book: when we’re near the beginning of a book, we’re getting haptic confirmation of that through the thinness on one side and thickness on the other, and as we progress in our reading the object in our hands is continually providing us with information that supplements what’s happening on the page. 

A codex, then, is an informationally richer environment than an e-reader. 

There are, I suspect, ways that software design can compensate for some of this informational deficit, though I don’t know how much. It’s going to be interesting to see whether any software engineers interest themselves in this problem. 

As for me, I suspect I’ll continue to do a lot of reading electronically, largely because, as I’ve mentioned before, I’m finding it harder to get eyewear prescriptions that suit my readerly needs. E-readers provide their own lighting and allow me to change the size of the type — those are enormous advantages at this stage of my life. I would love to see the codex flourish, but I don’t know whether it will flourish for me, and I am going to have some really difficult decisions to make as a teacher. Can I strongly insist that my students use codexes while using electronic texts myself? 

Wednesday, July 16, 2014

DH in the Anthropocene

This talk by Bethany Nowviskie is extraordinary. If you have any interest in where the digital humanities — or the humanities more generally — might be headed, I encourage you to read it. 

It’s a very wide-ranging talk that doesn’t articulate a straightforward argument, but that’s intentional, I believe. It’s meant to provoke thought, and does. Nowviskie’s talk originates, it seems to me, in the fact that so much work in the digital humanities revolves around problems of preservation. Can delicate objects in our analog world be properly digitized so as to be protected, at least in some senses, from further deterioration? Can born-digital texts and images and videos be transferred to other formats before we lose the ability to read and view them? So much DH language, therefore, necessarily concerns itself with concepts connected to, and derived from, the master-concept of time: preservation, deterioration, permanence, impermanence, evanescence. 

For Nowviskie, these practical considerations lead to more expansive reflections on how we — not just “we digital humanists” but “we human beings” — understand ourselves to be situated in time. And for her, here, time means geological time, universe-scale time. 

Now, I’m not sure how helpful it is to try to think at that scale. Maybe the Long Now isn’t really “now” at all for us, formed as we are to deal with shorter frames of experience. I think of Richard Wilbur’s great poem “Advice to a Prophet”:

Spare us all word of the weapons, their force and range,
The long numbers that rocket the mind;
Our slow, unreckoning hearts will be left behind,
Unable to fear what is too strange.

Nor shall you scare us with talk of the death of the race.
How should we dream of this place without us? —
The sun mere fire, the leaves untroubled about us,
A stone look on the stone’s face?

Maybe thinking in terms too vast means, for our limited minds, not thinking at all. 

But even as I respond in this somewhat skeptical way to Nowviskie’s framing of the situation, I do so with gratitude, since she has pressed this kind of serious reflection about the biggest questions upon her readers. It’s the kind of thing that the humanities at their best always have done. 

So: more, I hope, at another time on these themes. 

how problematic is e-reading?

Naomi Baron thinks it’s really problematic in academic contexts: 

What’s the problem? Not all reading works well on digital screens.

For the past five years, I’ve been examining the pros and cons of reading on-screen versus in print. The bottom line is that while digital devices may be fine for reading that we don’t intend to muse over or reread, text that requires what’s been called "deep reading" is nearly always better done in print.

Readers themselves have a keen sense of what kind of reading is best suited for which medium. My survey research with university students in the United States, Germany, and Japan reveals that if cost were the same, about 90 percent (at least in my sample) prefer hard copy for schoolwork. If a text is long, 92 percent would choose hard copy. For shorter texts, it’s a toss-up.

Digital reading also encourages distraction and invites multitasking. Among American and Japanese subjects, 92 percent reported it was easiest to concentrate when reading in hard copy. (The figure for Germany was 98 percent.) In this country, 26 percent indicated they were likely to multitask while reading in print, compared with 85 percent when reading on-screen. Imagine wrestling with Finnegans Wake while simultaneously juggling Facebook and booking a vacation flight. You get the point.

And maybe she’s right, but she also seems to be eliding some important distinctions. For instance, when she says that “digital reading ... encourages distraction and invites multitasking,” what she’s really referring to is “reading on a capable internet-connected device” — probably an iPad. A Kindle or Nook or Kobo, with either very limited internet access or none at all, wouldn’t provide such distractions. 

To be sure, digital reading is increasingly dominated by tablets, as their share of the market grows and that of the dedicated e-readers shrinks, but it’s still wrong to blame “digital reading” for a problem that’s all about internet connectivity. 

Also: Baron’s research is with university students, which is to say, people who learned to read on paper and did all their serious reading on paper until quite recently. What we don’t know is how kids who learn to read on digital devices — a still-small category — will feel about these matters by the time they get to university. That is, what Baron is attributing to some intrinsic difference between digital reading and reading on paper might well be a matter of simple familiarity. I don’t think we’ve yet reached the point where we can make that determination. 

I say all this as a lover of books and a believer that reading on paper has many great advantages that our digital devices have yet to replicate, much less to exceed. But, to judge only from this excerpt of a larger project, I doubt that Baron has an adequate experimental design. 

Friday, July 11, 2014

different strokes

Here’s a typically smart and provocative reflection by Andrew Piper. But I also have a question about it. Consider this passage: 

Wieseltier’s campaign is just the more robust clarion call of subtler and ongoing assumptions one comes across all the time, whether in the op-eds of major newspapers, blogs of cultural reviews, or the halls of academe. Nicholas Kristof’s charge that academic writing is irrelevant because it relies on quantification is one of the more high-profile cases. The recent reception of Franco Moretti’s National Book Critics Award for Distant Reading is another good case in point. What’s so valuable about Moretti’s work on quantifying literary history, according to the New Yorker’s books blog, is that we can ignore it. “I feel grateful for Moretti,” writes Joshua Rothman. “As readers, we now find ourselves benefitting from a division of critical labor. We can continue to read the old-fashioned way. Moretti, from afar, will tell us what he learns.”

We can continue doing things the way we’ve always done them. We don’t have to change. The saddest part about this line of thought is this is not just the voice of journalism. You hear this thing inside academia all the time. It (meaning the computer or sometimes just numbers) can’t tell you what I already know. Indeed, the “we already knew that” meme is one of the most powerful ways of dismissing any attempt at trying to bring together quantitative and qualitative approaches to thinking about the history of ideas.

As an inevitable backlash to its seeming ubiquity in everyday life, quantification today is tarnished with a host of evils. It is seen as a source of intellectual isolation (when academics use numbers they are alienating themselves from the public); a moral danger (when academics use numbers to understand things that shouldn’t be quantified they threaten to undo what matters most); and finally, quantification is just irrelevant. We already know all there is to know about culture, so don’t even bother.

Regarding that last sentence: the idea that “we already know all there is to know about culture, so don’t even bother” is a pathetic one — but that’s not what Rothman says. Rather, he writes of a “division of labor,” in which it’s perfectly fine for Moretti to do what he does, but it’s also perfectly fine for Rothman to do what he does. What I hear Rothman saying is not “we know all there is to know” but rather something like “I prefer to keep reading in more traditional and familiar ways and I hope the current excitement over people like Moretti won’t prevent me from doing that.” 

In fact, Rothman, as opposed to the thoroughly contemptuous Wieseltier, has many words of commendation for Moretti. For instance: 

The grandeur of this expanded scale gives Moretti’s work aesthetic power. (It plays a larger role in his appeal, I suspect, than most Morettians would like to admit.) And Moretti’s approach has a certain moral force, too. One of the pleasures of “Distant Reading” is that it assembles many essays, published over a long period of time, into a kind of intellectual biography; this has the effect of emphasizing Moretti’s Marxist roots. Moretti’s impulses are inclusive and utopian. He wants critics to acknowledge all the books that they don’t study; he admires the collaborative practicality of scientific work. Viewed from Moretti’s statistical mountaintop, traditional literary criticism, with its idiosyncratic, personal focus on individual works, can seem self-indulgent, even frivolous. What’s the point, his graphs seem to ask, of continuing to interpret individual books—especially books that have already been interpreted over and over? Interpreters, Moretti writes, “have already said what they had to.” Better to focus on “the laws of literary history”—on explanation, rather than interpretation.

All this sounds austere and self-serious. It isn’t. “Distant Reading” is a pleasure to read. Moretti is a witty and welcoming writer, and, if his ideas sometimes feel rough, they’re rarely smooth from overuse. I have my objections, of course. I’m skeptical, for example, about the idea that there are “laws of literary history”; for all his techno-futurism, Moretti can seem old-fashioned in his eagerness to uncover hidden patterns and structures within culture. But Moretti is no upstart. He is patient, experienced, and open-minded. It’s obvious that he intends to keep gathering data, and, where it’s possible, to replace his speculations with answers. In some ways, the book’s receiving an award reflects the role that Moretti has played in securing a permanent seat at the table for a new critical paradigm—something that happens only rarely.

This all seems eminently fair-minded to me, even generous. But what Moretti does is not Rothman’s thing. And isn’t that okay? Indeed, hasn’t that been the case for a long time in literary study: that we acknowledge the value in what other scholars with different theoretical orientations do, without choosing to imitate them ourselves? It mystifies me that Piper sees this as a Wieseltier-level dismissal. 

Monday, July 7, 2014

worse and worse

Another candidate for Worst Defense of Facebook, this one from Duncan Watts of Microsoft Research:

Yes, the arrival of new ways to understand the world can be unsettling. But as social science starts going through the kind of revolution that astronomy and chemistry went through 200 years ago, we should resist the urge to attack the pursuit of knowledge for knowledge's sake.

Just as in the Romantic era, advances in technology are now allowing us to measure the previously unmeasurable – then distant galaxies, now networks of millions of people. Just as then, the scientific method is being promoted as an improvement over traditional practices based on intuition and personal experience. And just as then, defenders of the status quo object that data and experiments are inherently untrustworthy, or are simply incapable of capturing what really matters.

We need to have these debates, and let reasonable people disagree. But it's unreasonable to insist that the behavior of humans and societies is somehow an illegitimate subject for the scientific method. Now that the choice between ignorance and understanding is within our power to make, we should follow the lead of the Romantics and choose understanding.

Get that? If you are opposed to the Facebook experiment, you are “attack[ing] the pursuit of knowledge for knowledge’s sake” — because, as we know, the people who work at Facebook care nothing for filthy lucre: they are perfectly disinterested apostles of Knowledge! So why do you hate knowledge?

Moreover, why do you think “data and experiments are inherently untrustworthy”? — yes, all data, all experiments, because clearly it is impossible to criticize Facebook without criticizing “data and experiments” tout court. If you criticize the Facebook experiment, you thereby “insist that the behavior of humans and societies is somehow an illegitimate subject for the scientific method.”

There’s more of this garbage — far more:

Remember: the initial trigger for the outrage over the Facebook study was that it manipulated the emotions of users. But we are being manipulated without our knowledge or consent all the time – by advertisers, marketers, politicians – and we all just accept that as a part of life. The only difference between the Facebook study and everyday life is that the researchers were trying to understand the effect of that manipulation.

Of course. No one has ever complained about being manipulated or lied to by politicians or marketers. And note once more the purity of Facebook’s motives: they’re just trying to “understand,” that’s all. Why do you hate understanding? (Later on Watts talks about “the decisions they're already making on our behalf”: on our behalf. Facebook may be a publicly-traded, for-profit corporation, but all they really care about is helping their users. Why are you so ungrateful?)

If that still sounds creepy, ask yourself this: Would you prefer a world in which we are having our emotions manipulated, but where the manipulators ignore the consequences of their own actions? What about if the manipulators know exactly what they're doing ... but don't tell anyone about it? Is that really a world you want to live in?

As I suggested in a comment on an earlier post, if you live by A/B thinking, you end up dying (intellectually) by A/B thinking. Watts is trying pretty desperately here to tell us that we can only choose a world in which we’re manipulated without knowing it or in which we are knowingly manipulated. The one thing he doesn't want any of his readers to think is that it’s possible to try to reduce the manipulation.

At the end of this absurd screed Watts writes,

Yes, all research needs to be conducted ethically, and social scientists have an obligation to earn and keep the public trust. But unless the public truly prefers a world in which nobody knows anything, more and better science is the best answer we have.

Why do you prefer a world in which nobody knows anything? But wait — there’s a little glimmer of light here ... hard to see, but ... here it is: “social scientists have an obligation to earn and keep the public trust.” Right. And the ones from Facebook haven’t. And they’re not going to get it back by accusing everyone who’s unhappy with them of seeking darkness and ignorance.

Saturday, July 5, 2014

designing the Word


Bibliotheca is a remarkably successful new Kickstarter project for designing and printing a Bible made to be read, in multiple volumes and with bespoke type design. Here is the Kickstarter page; here is part one of an interview with Adam Lewis Greene, the designer; and here is the second part of that interview.

Lots and lots of things to interest me here. At the moment I’m just going to mention one, an exchange from the second interview: 

J. MARK BERTRAND: Your decision not to justify the text column threw me at first. Now I think I understand, but since I’m a stickler for Bibles looking like books meant to be read, and novels are universally justified, could you explain what’s at stake in the choice to leave the right margin ragged?

ADAM LEWIS GREENE: This goes back, again, to the idea of hand-writing as the basis for legible text. When we write, we don’t measure each word and then expand or contract the space between those words so each line is the same length. When we run out of room, we simply start a new line, and though we have a ragged right edge, we have consistent spacing. The same is true of the earliest manuscripts of biblical literature, which were truly formatted to be read. I’m thinking of the Isaiah scroll, which I was privileged to see in Israel last year and is the primary model for my typesetting….

Unjustified text was revived by the likes of Gill and Tschichold early in the last century, and it continues to gain steam, especially in Europe. We are starting to see unjustified text much more frequently in every-day life, especially in digital form, and I would argue we are slowly becoming more accustomed to evenly spaced words than to uniform line-length. To me, justified type is really a Procrustean Bed. Too many times while reading have I leapt a great distance from one word to the next, only to be stunted by the lack of space between words on the very next line. I admit, I think justified text looks clean and orderly when done well, but it doesn’t do a single thing in the way of legibility. It is simply what we have been accustomed to for a long time, and since this project is partially about breaking down notions of how things “ought to be,” I ultimately decided to go with what I believe is the most legible approach; not to mention its suitability for ancient hand-written literature.

I couldn’t agree more with Greene’s decision here. I have long believed that we pay too high a price in legibility to get the perfect rectangles of fully justified text. In my experience, the single greatest source of distraction coming from a text (as opposed to the distractions that arrive from the outside) is variable spacing imposed by the demands of justification. 

When my book The Pleasures of Reading in an Age of Distraction was being typeset for publication, I made two requests of the designer. First, I wanted it set in Eric Gill’s Perpetua; and second, I wanted ragged-right justification. To my astonishment, both of my requests were granted. 

Friday, July 4, 2014

The Righteous Mind and the Inner Ring

In his recent and absolutely essential book The Righteous Mind, Jonathan Haidt tries to understand why we disagree with one another — especially, but not only, about politics and religion — and, more important, why it is so hard for people to see those who disagree with them as equally intelligent, equally decent human beings. (See an excerpt from the book here.)

Central to his argument is this point: “Intuitions come first, strategic reasoning second. Moral intuitions arise automatically and almost instantaneously, long before moral reasoning has a chance to get started, and those first intuitions tend to drive our later reasoning.” Our “moral arguments” are therefore “mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives.”

Haidt talks a lot about how our moral intuitions accomplish two things: they bind and they blind. “People bind themselves into political teams that share moral narratives. Once they accept a particular narrative, they become blind to alternative moral worlds.” “Moral matrices bind people together and blind them to the coherence, or even existence, of other matrices.” The incoherent anti-religious rant by Peter Conn that I critiqued yesterday is a great example of how the “righteous mind” works — as are conservative denunciations of universities filled with malicious tenured radicals.

So far so vital. I can't imagine anyone who couldn’t profit from reading Haidt’s book, though it’s a challenge — as Haidt predicts — for any of us to understand our own thinking in these terms. Certainly it’s hard for me, though I’m trying. But there’s a question that Haidt doesn’t directly answer: How do we acquire these initial moral intuitions? — Or maybe not the initial ones, but the ones that prove decisive for our moral lives? I make that distinction because, as we all know, people often end up dissenting, sometimes in the strongest possible terms, from the moral frameworks within which they were raised.

So the question is: What triggers the formation of a “moral matrix” that becomes for a given person the narrative according to which everything and everyone else is judged?

I think that C. S. Lewis answered that question a long time ago. (Some of what follows is adapted from my book The Narnian: The Life and Imagination of C. S. Lewis.) In December of 1944, he gave the Commemoration Oration at King’s College in London, a public lecture largely attended by students, and Lewis took the opportunity of this “Oration” to produce something like a commencement address. He called his audience’s attention to the presence, in schools and businesses and governments and armies and indeed in every other human institution, of a “second or unwritten system” that stands next to the formal organization.

You discover gradually, in almost indefinable ways, that it exists and that you are outside it, and then later, perhaps, that you are inside it.... It is not easy, even at a given moment, to say who is inside and who is outside.... People think they are in it after they have in fact been pushed out of it, or before they have been allowed in; this provides great amusement for those who are really inside.

Lewis does not think that any of his audience will be surprised to hear of this phenomenon of the Inner Ring; but he thinks that some may be surprised when he goes on to argue, in a point so important that I’m going to put it in bold type, “I believe that in all men’s lives at certain periods, and in many men’s lives at all periods between infancy and extreme old age, one of the most dominant elements is the desire to be inside the local Ring and the terror of being left outside.” And it is important for young people to know of the force of this desire because “of all passions the passion for the Inner Ring is most skillful in making a man who is not yet a very bad man do very bad things.”

The draw of the Inner Ring has such profound corrupting power because it never announces itself as evil — indeed, it never announces itself at all. On these grounds Lewis makes a “prophecy” to his audience at King’s College: “To nine out of ten of you the choice which could lead to scoundrelism will come, when it does come, in no very dramatic colours.... Over a drink or a cup of coffee, disguised as a triviality and sandwiched between two jokes ... the hint will come.” And when it does come, “you will be drawn in, if you are drawn in, not by desire for gain or ease, but simply because at that moment, when the cup was so near your lips, you cannot bear to be thrust back again into the cold outer world.”

It is by these subtle means that people who are “not yet very bad” can be drawn to “do very bad things” – by which actions they become, in the end, very bad. That “hint” over drinks or coffee points to such a small thing, such an insignificant alteration in our principles, or what we thought were our principles: but “next week it will be something a little further from the rules, and next year something further still, but all in the jolliest, friendliest spirit. It may end in a crash, a scandal, and penal servitude; it may end in millions, a peerage, and giving the prizes at your old school. But you will be a scoundrel.”

This, I think, is how our “moral matrices,” as Haidt calls them, are formed: we respond to the irresistible draw of belonging to a group of people whom we happen to encounter and happen to find immensely attractive. The element of sheer contingency here is, or ought to be, terrifying: had we encountered a group of equally attractive and interesting people who held very different views, then we too would hold very different views.

And, once we’re part of the Inner Ring, we maintain our status in part by coming up with those post hoc rationalizations that confirm our group identity and, equally important, confirm the nastiness of those who are Outside, who are Not Us. And it’s worth noting, as Avery Pennarun has recently noted, that one of the things that makes smart people smart is their skill at such rationalization: “Smart people have a problem, especially (although not only) when you put them in large groups. That problem is an ability to convincingly rationalize nearly anything.”

In “The Inner Ring” Lewis portrays this group affiliation in the darkest of terms. That’s because he’s warning people about its dangers, which is important. But of course it is by a similar logic that people can be drawn into good communities, genuine fellowship — that they can become “members of a Body,” as he puts it in the great companion piece to “The Inner Ring,” a talk called “Membership.” (Both are included in his collection The Weight of Glory.) This distinction is what his novel That Hideous Strength is primarily about: we see the consequences for Mark Studdock as he is drawn deeper and deeper into an Inner Ring, and the consequences for Mark’s wife Jane as she is drawn deeper and deeper into a genuine community. I can’t think of a better guide to distinguishing between the false and true forms of membership than that novel.

And that novel offers something else: hope. Hope that we need not be bound forever by an inclination we followed years or even decades ago. Hope that we can, with great discipline and committed energy, transcend the group affiliations that lead us to celebrate members of our own group (even when they don't deserve celebration) and demonize or mock those Outside. We need not be bound by the simplistic and uncharitable binaries of the Righteous Mind. Unless, of course, we want to be.

Thursday, July 3, 2014

an academic farce

Peter Conn is right about one thing: college accreditation is a mess. But his comments about religious colleges are thoughtless, uninformed, and bigoted.

Conn is appalled — appalled — that religious colleges can receive accreditation. Why does this appall him? Well, because they have communal statements of faith, and this proves that in them “the primacy of reason has been abandoned.” The idea that religious faith and reason are incompatible can only be put forth by someone utterly ignorant of the centuries of philosophical debate on this subject, which continues to this day; and if it’s the primacy of reason that Conn is particularly concerned with, perhaps he might take a look at the recent (and not-so-recent) history of his own discipline, which is also mine. Could anyone affirm with a straight face that English studies in America has for the past quarter-century or more been governed by “the primacy of reason”? I seriously doubt that Conn even knows what he means by “reason.” Any stick to beat a dog.

Conn is, if possible, even farther off-base when he writes of “the manifest disconnect between the bedrock principle of academic freedom and the governing regulations that corrupt academic freedom at Wheaton.” I taught at Wheaton for twenty-nine years, and when people asked me why I stayed there for so long my answer was always the same: I was there for the academic freedom. My interests were in the intersection of theology, religious practice, and literature — a very rich field, but one that in most secular universities I would have been strongly discouraged from pursuing except in a corrosively skeptical way. Certainly in such an environment I would never have dared to write a book on the theology of reading — and yet what I learned in writing that book has been foundational for the rest of my career.

Conn — in keeping with the simplistic dichotomies that he evidently prefers — is perhaps incapable of understanding that academic freedom is a concept relative to the beliefs of the academics involved. I have a sneaking suspicion that he is even naïve enough to believe that the University of Pennsylvania, where he teaches, is, unlike Wheaton, a value-neutral institution. But as Stanley Fish pointed out years ago, “What, after all, is the difference between a sectarian school which disallows challenges to the divinity of Christ and a so-called nonideological school which disallows discussion of the same question? In both contexts something goes without saying and something else cannot be said (Christ is not God or he is). There is of course a difference, not however between a closed environment and an open one but between environments that are differently closed.” Wheaton is differently closed than Penn; and for the people who teach there and study there, that difference is typically liberating rather than confining. It certainly was for me.

It would take me another ten thousand words to exhaustively detail Conn’s errors of commission and omission — I could have fun with his apparent belief that Christian colleges generally support “creation science” — but in conclusion let me just zero in on this: “Providing accreditation to colleges like Wheaton makes a mockery of whatever academic and intellectual standards the process of accreditation is supposed to uphold.”

How do accreditation agencies “uphold” “academic and intellectual standards”? They look at such factors as class size, test scores of incoming students, percentage of faculty with terminal degrees, and the like. When they look really closely they might note the quality of the institutions from which the faculty received their terminal degrees, and the percentage of graduates who go on for further education.

These are the measures that, when the accreditation agencies come calling, schools like Wheaton are judged by — that is, the same measures that all other colleges and universities in America are judged by. Wheaton faculty in the humanities — I’ll confine my comments to that field — have recently published books with the university presses of Cambridge, Harvard, Oxford, and Princeton, among others. Former students of mine — to speak even more narrowly — have gone on to get degrees from the finest institutions in the world, and are now professors (some of them tenured) at first-rate universities here and abroad. The factual record speaks for itself, for those who, unlike Conn, are willing to look into it. And I am not even mentioning non-academic achievements.

Some of Wheaton’s most famous alumni have strayed pretty far from its theological commitments, though I think Wes Craven has done a pretty good job of illustrating the consequences of original sin. But even those who have turned aside from evangelicalism, or Christianity altogether, often pay tribute to Wheaton for providing them the intellectual tools they have used to forge their own path — see, for instance, Bart Ehrman in the early pages of Misquoting Jesus. The likelihood of producing such graduates is a chance Wheaton is willing to take. Why? Because it believes in liberal education, as opposed to indoctrination.

In this respect, the institutional attitude of Wheaton College differs considerably from the personal attitude of Peter Conn, who, it appears, cannot bear the thought that the academic world should make room for people whose beliefs he despises — even if they meet the same academic standards as other colleges and universities. What Conn wants is a purge of religion from academic life. He ought to own that desire, and stop trying to camouflage it with the verbal fig-leaves of “intellectual standards” and “academic freedom” — concepts he neither understands nor values.

the worst defense of Facebook you're likely to read

Well, I’ve seen some inept commentary on the recent Facebook fiasco, but this definitely takes the cake — and it’s from one Cesar Hidalgo, a prof at MIT, no less.

Talk about an inauspicious beginning:

First, Facebook is a “micro-broadcasting” platform, meaning that it is not a private diary or a messaging service. This is not an official definition, but one that emerges from Facebook’s design: everything you post on Facebook has the potential to go viral.

Well, first of all, no. Facebook has settings that allow you to determine how private or public you want a given post to be: see? So some of what you post on Facebook cannot go viral, unless the software malfunctions, or Facebook makes yet another unannounced change in its policies. And second: the point is completely irrelevant. Though Facebook has often been in trouble for betraying its users’ expectations of privacy — by making public what they didn't want made public — that isn’t what this is about. The complaint is that Facebook experimented on its users without seeking their consent.

Second, the idea that the experiment violated privacy is also at odds with the experimental design. After all, the experiment was based on what is known technically as a sorting operation. Yet, a sorting operation cannot violate privacy.

That’s manifestly untrue, but it doesn't matter: the point is irrelevant. Though Facebook has often been in trouble for betraying its users’ expectations of privacy, that isn’t what this is about. The complaint is that Facebook experimented on its users without seeking their consent.

Finally, it is important to remember that Facebook did not generate the content that affected the mood of users. You and I generated this content. So if we are willing to point the gun at Facebook for sorting the content created by us, we should also point the gun at ourselves, for creating that content.

Sometimes a statement gets so Orwellian that there’s nothing to be said in response. Onward:

Is using sentiment analysis as a feature unethical? Probably not. Most of us filter the content we present to others based on emotional considerations. In fact, we do not just filter content. We often modify it based on emotional reasons. For instance, is it unethical to soften an unhappy or aggressive comment from a colleague when sharing it with others? Is that rewording operation unethical? Or does the failure of ethics emerge when an algorithm — instead of, say, a professional editor — performs the filtering?

Ah, Artie McStrawman — pleased to see you, my old friend. Of course no one has ever said that filtering information is wrong. The complaint here is that Facebook filtered people’s feeds in order to conduct experiments on them without seeking their consent. Hidalgo has written an 1,100-word post that shows no sign at any point of having even the tiniest shred of understanding of what people are angry at Facebook about. This is either monumental dishonesty or monumental stupidity. I can't see a third possibility.

Fantasy and the Buffered Self

That's the title of my New Atlantis essay, now available in full online. Please check it out, and if you'd like to make comments, this is the place to do that.