Text Patterns - by Alan Jacobs

Thursday, September 30, 2010

the saddest thing I have read in some time

Arikia Millikan:

Now, I am always connected to the Web. The rare exceptions to the rule cause excruciating anxiety. I work online. I play online. I have sex online. I sleep with my smartphone at the foot of my bed and wake up every few hours to check my email in my sleep (something I like to call dreamailing).

But it's not enough connectivity. I crave an existence where batteries never die, wireless connections never fail, and the time between asking a question and having the answer is approximately zero. If I could be jacked in at every waking hour of the day, I would, and I think a lot of my peers would do the same. So Hal, please hurry up with that Google implant. We're getting antsy.

There’s a name for this condition: Stockholm Syndrome.

resisting print

I’m reading and enjoying Andrew Pettegree’s The Book in the Renaissance, and as I move along I can’t stop comparing that moment of textual revolution with our own. For instance, the reluctance of the learned (by and large) to embrace a new technology:

The invention of printing was not the work of scholars. Scholars in the fifteenth century had all the books they needed: their attention was directed to the borrowing, copying and bargaining necessary to obtain more texts. It required hard, practical men, often men of little education, to see the potential of a new method of copying that would bring many hundreds of texts simultaneously to the marketplace. It was also men of this stamp who perceived how the techniques of medieval craft society could be applied to achieve this. . . .

It was swiftly becoming clear that it was the centres of trade, rather than of learning, that would provide the best locations for production of printed books in the fifteenth century. Rather against expectations print did not flourish in many places that boasted a distinguished medieval university. There was virtually no printing in Tübingen or Heidelberg; in England it would be London, rather than Oxford or Cambridge, that monopolised print. Large commercial cities proved more fertile territory.

But scholarly resistance to printed books was not wholly irrational:

The scholars who shared the excitement aroused by Gutenberg’s new invention did so for very specific reasons. They believed that print would make true texts, especially of the works of classical authors, more widely available. By this they meant that print would be employed to enable scholars and intellectuals to possess more books; humanists were less concerned that books should be made available to a broader range of the population. This is a crucial distinction; and it is in this context that the humanist criticism of print now developed. For it was swiftly realised that printed books had not necessarily produced more accurate editions. The first printed books could not live up to the standard set by manuscript production in Italy. They were often dirty, smudged and inaccurate. They included too many mistakes. The inefficiency and carelessness of printers would be a repeated lament of authors throughout the era of hand-press printing, but in this first generation it had a philosophical edge: the charge that print had debased the book.

Great stuff. I leave comparisons to our own age as an exercise for the reader. And I’ll report further as I read more.

Wednesday, September 29, 2010

discuss

"Fortune favors the connected mind." — Steven Johnson, Where Good Ideas Come From.
"City ideas, like cities, are fashionable. But fashions change quickly, so city ideas live and die on short cycles. The opposite of city ideas are 'natural ideas', which account for the big leaps forward and often appear to come from nowhere. These ideas come from nature, solitude, and meditation." — Jonathan Harris.

Ramelli's wheel

Yesterday I tweeted about Agostino Ramelli’s reading wheel, and this appears to be a subject near to the heart of my editor, Adam Keiper. He sent me a link to a picture of the great historian Anthony Grafton with his own reading wheel — right there next to his laptop, interestingly enough, like a tableau vivant of old and new technologies of knowledge — and then provided this passage from Henry Petroski’s The Book on the Bookshelf:

Among the mental constructions that Ramelli describes is a revolving desk resembling a water wheel which is like nothing known to have been seen in any contemporary Western study. Indeed, Joseph Needham, the scholar of Chinese science and technology, has argued that a revolving bookcase had its origin not in the West but in China, “perhaps a thousand years before Ramelli’s design was taken there.” According to Needham “the fact that Ramelli’s was a vertical type, and that all the Chinese ones, from Fu Hsi onwards, were horizontal, would simply have been characteristic of the two engineering traditions,” illustrating “perfectly the preference of Western engineers for vertical, and Chinese engineers for horizontal mountings.” Whether this be a valid generalization may be argued, as may Needham’s further speculation that “probably from the beginning, however, the rotation was a piece of religious symbolism as much as a convenience.” . . .

Whether such devices were appreciated most on grounds of aesthetics, symbolism, or scholarly convenience may have to remain a matter for speculation. There can be little doubt, however, that many a scholar who used such rotary devices in the course of copying, translating, and explicating found them a godsend. The Ramelli wheel may or may not have been so practical, however, for while the illustration of it shows a reader able to consult a series of books as we might click backward and forward from web page to web page on the Internet today, there does not appear to be any convenient working surface on or near the wheel for the scholar who may wish to make notes or write. If a further anachronism may be allowed, the device looks like a 7- or 8-foot tall model of a Ferris wheel, with open books riding on individual lectern cars, suited for passive or recreational reading but not active scholarship involving writing.

I don't know why Petroski thinks such a wheel is for recreational reading — I would think just the opposite. Nobody reads a dozen books at once for fun: the whole purpose of such a wheel would be to keep authoritative references ready to hand. The photo of Grafton shows how you can set one of these up near your desk and simply turn to it when you need it, rotate until you find the reference you need, then turn back to your writing with the information. Very efficient.

Cloud Atlas

James Joyce once wrote to a friend that the thought of Ulysses is simple; it’s only the method that’s complex. Much the same could be said of David Mitchell’s extraordinary novel Cloud Atlas, which borrows from Joyce metaphors of reincarnation and a deep commitment to the idea that linguistic style is a way of envisioning and understanding the world. And also like Joyce’s masterpiece, Mitchell’s book has at its heart a simple and straightforward idea: a lamentation for the suffering we inflict on one another, especially when we inflict it in the name of some social identity that separates us from others whom we place lower on the Great Chain of Being. Sunt lacrimae rerum, one of the characters in Cloud Atlas writes at the end of his life, borrowing from Virgil: sunt lacrimae rerum et mentem mortalia tangunt, “They weep here / For how the world goes, and our life that passes / Touches their hearts” (Robert Fitzgerald’s free but elegant version). For all his metafictional gamesmanship, Mitchell is, I think, trying to produce a few of those tears.

To be sure, Mitchell’s book is much more accessible than Joyce’s: he weaves together multiple narratives, each with its own distinctive style, but each narrative is eminently readable, and the small knots that connect them cleverly tied. Someone once said of Tom Stoppard’s plays, and this was not a compliment, that they make the viewer feel more intelligent, and Cloud Atlas may do that as well: there are many moments when I felt a sudden surge of delight when I made a connection between stories. But what’s wrong with feeling a sudden surge of delight at aesthetic discovery? And the pleasure of finding and unraveling the knots may actually make the reader’s heart a little more vulnerable to the moments of pathos. And properly so.

Though I do wonder if in the end Cloud Atlas may (as Joyce said of Ulysses) “suffer from an excess of design.” Everything fits together so neatly, and while there is great pleasure in noting the neatness of the weave, I think it may be true that the books that stay with us most profoundly are the ones that have some of the rough-edgedness and imperfect execution of our own best-laid plans. Books that are as flawed as we are, books whose reach exceeds their grasp. It will be interesting to find out, five or ten years from now, which book is stronger in my memory, Cloud Atlas or Infinite Jest.

(Incidentally, the best review I’ve read of Cloud Atlas is by A. S. Byatt; and I would recommend that anyone interested in the book read Mitchell’s own brief essay about it.)

Monday, September 27, 2010

adding to the tech canon

The Atlantic Tech Canon is fab, but rather skewed towards the present day. I'd like to suggest a few items from more distant eras. (This list is by no means exhaustive or even especially well-considered.)

1) Hugh of St. Victor, Didascalicon (ca. 1120): Hugh did more than anyone else in the West to organize the knowledge of his time, develop methods of reading, and improve technologies of the book. See Ivan Illich’s brilliant In the Vineyard of the Text for more.
2) Abū al-'Iz Ibn Ismā'īl ibn al-Razāz al-Jazarī, The Book of Knowledge of Ingenious Mechanical Devices (1206): an extraordinary and extraordinarily influential compendium of engineering achievements.
3) The Notebooks of Leonardo da Vinci (late 15th and early 16th centuries). Enough said.
4) Robert Hooke, Micrographia (1665): demonstrated to all Europe the power of the microscope to reveal an extraordinarily vibrant world otherwise invisible to us.
5) Joseph Priestley, A New Chart of History (1765): the enormous influence of this chart in shaping Western thinking about historical sequence is explained in Daniel Rosenberg and Anthony Grafton’s Cartographies of Time: A History of the Timeline, my review of which will appear in a forthcoming issue of The New Atlantis. Here is Priestley’s book explaining his chart.

Among more recent contributions to the tech canon, I would like to commend Lewis Mumford’s 1934 book Technics and Civilization and Jacques Ellul’s The Technological Society.

Friday, September 24, 2010

authors' libraries

Conor Friedersdorf sends me a link to this story about attempts to recover and reassemble the libraries of dead writers. Sad and curious. . . . But of course — someone has to ask this question, so it might as well be me — what about future writers whose libraries are partly or largely contained in their e-readers? You could hold in your hand years of careful annotations and provocative underlinings. Or, conversely, you could discover that a member of the family had erased everything on Dad's device and replaced it with his or her own books. In that case — supposing the e-reader is a Kindle — could a deputation of scholars go to Amazon to see whether any of those annotations survive on Amazon's servers, and if so, whether they could be bequeathed to some library or archive? Or made available to anyone who wants to see them as a public service? (Amazon's own purely digital literary archive.) Strange times ahead, perhaps.

Thursday, September 23, 2010

more futures for the book

Please watch this video about the future of the book. (I could embed it, but it would be too small.) Did you see it? Okay, then, some thoughts:

1) Nelson is for people who don't know what they think about something until they know what other people think. Note that there’s no reading involved, but rather assessments of value, made by others.

2) There’s no reading involved in Coupland either, just an effort at bringing social cohesion to the college or workplace. Not much room in this model for the person with eccentric or even minority tastes.

3) Alice involves reading, but constantly interactive reading: hands reshape the page, characters from the book send texts to your phone, and so on.

Long-form reading, with its demands of extensive and intensive concentration, single-minded attention, is not the only kind of reading. Nor is it the only valuable kind of reading. But it rarely gets mentioned in these conversations about “the future of the book.” Is this a tacit admission that new technologies of the book have nothing to contribute to long-form reading, to focused attentiveness? Does the paper codex own that territory, without rival? I’d like to see at least someone thinking imaginatively about the contributions digital technologies can make to single-mindedness.

Wednesday, September 22, 2010

on spoilers

I’m not so interested in the question of how Wikipedia reveals the ending of The Mousetrap, but I am interested in how Ruth Franklin, in her eviscerating but largely accurate review of Jonathan Franzen’s Freedom in this week’s New Republic, reveals pretty much everything that happens in the book, even quoting its powerful final sentence, which is possibly the best thing about it. (Here’s the link, though I think the full text is available only to subscribers.)

After all, Wikipedia is known for giving complete plot summaries of books and movies, so why not The Mousetrap? But when I write book reviews of novels, I try to reveal no more of the actual plot than is necessary to explain and justify the view I take of the book as a whole. This is what most reviewers do, and sometimes it can be tricky: see Aimee Bender’s recent review of Emma Donoghue’s novel Room, in which Bender sweats a bit over this question but decides that she can't talk intelligibly about the book without revealing one key element of the story that the book itself reveals fairly early on.

Why do we reviewers try to avoid spoilers? Because we understand that other readers may not share our verdicts, and insofar as possible we want those readers to be able to read in such a way as to form their own judgments. There’s something tyrannical about Franklin’s blunt revelations of what happens in Freedom, a forestalling and preventing of other readings: it would be very difficult for anyone who had read Franklin’s review to have their own strong experience of Franzen’s novel and therefore to come to a more positive conclusion about the story than Franklin comes to (or for that matter than I came to).

For a book like Infinite Jest, which lacks a conventional plot and therefore a conventional ending, this might not matter so much. It also might not matter so much if Freedom had been out for a year or two. But Franzen was clearly writing a more traditional kind of novel, and while I don't think the book is very successful, I think his effort deserves more respect than Franklin gives it. Franklin is one of our best reviewers of fiction, I think, but in this case I don't like what she has done.

I imagine she would defend this practice by saying that everything about everything is already revealed on the internet, and that people could have found spoilers and lengthy quotations from the last pages of Freedom on hundreds of websites. And all that is true. But many book-readers and movie-watchers still try to preserve a certain innocence in relation to works of art they anticipate encountering, and that desire deserves more respect than Franklin gives it. The amour-propre of The New Republic presumably would not allow it to insert the phrase “SPOILER ALERT,” but I think that would have been a good idea in this case.

UPDATE: Check out these letters to the Times, especially the second one. (Hat tip to my friend Garnette Cadogan.)

Tuesday, September 21, 2010

persons, not relays

And from the same issue of the New York Times Magazine, these important thoughts from Jaron Lanier:

We see the embedded philosophy bloom when students assemble papers as mash-ups from online snippets instead of thinking and composing on a blank piece of screen. What is wrong with this is not that students are any lazier now or learning less. (It is probably even true, I admit reluctantly, that in the presence of the ambient Internet, maybe it is not so important anymore to hold an archive of certain kinds of academic trivia in your head.)

The problem is that students could come to conceive of themselves as relays in a transpersonal digital structure. Their job is then to copy and transfer data around, to be a source of statistics, whether to be processed by tests at school or by advertising schemes elsewhere.

What is really lost when this happens is the self-invention of a human brain. If students don’t learn to think, then no amount of access to information will do them any good.

I am a technologist, and so my first impulse might be to try to fix this problem with better technology. But if we ask what thinking is, so that we can then ask how to foster it, we encounter an astonishing and terrifying answer: We don’t know.

I will just add that in these matters the great challenge for teachers is to figure out how to get students to be smart, resourceful, critical users of the information available to them without becoming mere “relays in a transpersonal digital structure.” The more you teach students about digitized information, the more likely it is that their minds are conformed to the (after all, rather limited) ways that information is structured and presented. We want to help people be in the digital world but not of it. Unless they want to be.

Monday, September 20, 2010

technology and homeschooling

I tend to get frustrated by Kevin Kelly’s technophilia, but this account of his experiences teaching his son at home (a) resonates with my own homeschooling adventures and (b) makes a ton of sense. I especially like this set of principles about technology that he and his wife tried to impart to their eighth-grader:

  • Every new technology will bite back. The more powerful its gifts, the more powerfully it can be abused. Look for its costs.
  • Technologies improve so fast you should postpone getting anything you need until the last second. Get comfortable with the fact that anything you buy is already obsolete.
  • Before you can master a device, program or invention, it will be superseded; you will always be a beginner. Get good at it.
  • Be suspicious of any technology that requires walls. If you can fix it, modify it or hack it yourself, that is a good sign.
  • The proper response to a stupid technology is to make a better one, just as the proper response to a stupid idea is not to outlaw it but to replace it with a better idea.
  • Every technology is biased by its embedded defaults: what does it assume?
  • Nobody has any idea of what a new invention will really be good for. The crucial question is, what happens when everyone has one?
  • The older the technology, the more likely it will continue to be useful.
  • Find the minimum amount of technology that will maximize your options.

The last two are particularly noteworthy, and wise also. And this is best of all: “Technology helped us learn, but it was not the medium of learning. It was summoned when needed. Technology is strange that way. Education, at least in the K-12 range, is more about child rearing than knowledge acquisition. And since child rearing is primarily about forming character, instilling values and cultivating habits, it may be the last area to be directly augmented by technology.”

Thursday, September 16, 2010

information, please!

Yes, the desk is made of books! (Via Survival of the Book.) (Not really the kind of survival we book-lovers want, but still: awesome.)

I'll be traveling for the next few days, so please forgive the upcoming radio silence.

Wednesday, September 15, 2010

science fiction oven mitts

I think a lot about — and probably will be writing more about — my feeling that what literary types like to call “genre fiction” is, pretty inexorably, displacing conventional realistic literary fiction as “the abstract and brief chronicle of our time.” In a recent interview, William Gibson gives a partial explanation for this displacement:

Well, when I started writing in my late 20s, I knew that I was a native of science fiction. It was my native literary culture. But I also knew that I had been to a lot of other places in literature, other than science fiction. When I started working I had the science fiction writer's specialist toolkit. I used it for my version of what it had been issued for. As I used it, though, and as the world around me changed, because of the impact of contemporary technologies, more than anything else, I found myself looking at the toolkit and thinking, you know, these tools are possibly the best tools we have to describe our inherently fantastic present—to describe it and examine it, and take it down and put it back together and get a handle on it. I think without those tools I don't really know what we could do with it.

Whenever I read a contemporary literary novel that describes the world we're living in, I wait for the science fiction tools to come out. Because they have to — the material demands it. Global warming demands it, and the global AIDS epidemic and 9/11 and everything else — all these things that didn't exist 30 years ago require that toolkit to handle. You need science fiction oven mitts to handle the hot casserole that is 2010.

Tuesday, September 14, 2010

DFW as teacher

The teaching materials now online at the Ransom Center are pretty darn fascinating.

heed the Marxist critique

Kevin Kelly has a book coming out soon called What Technology Wants. Kevin, meet Leo Marx:

We amplify the hazardous character of the concept by investing it with agency — by using the word technology as the subject of active verbs. Take, for example, a stock historical generalization such as: “the cotton-picking machine transformed the southern agricultural economy and set off the Great Migration of black farm workers to northern cities.” Here we tacitly invest a machine with the power to initiate change, as if it were capable of altering the course of events, of history itself. By treating these inanimate objects — machines — as causal agents, we divert attention from the human (especially socioeconomic and political) relations responsible for precipitating this social upheaval. Contemporary discourse, private and public, is filled with hackneyed vignettes of technologically activated social change — pithy accounts of “the direction technology is taking us” or “changing our lives.”

. . . To attribute specific events or social developments to the historical agency of so basic an aspect of human behavior makes little or no sense. Technology, as such, makes nothing happen. By now, however, the concept has been endowed with a thing-like autonomy and a seemingly magical power of historical agency. We have made it an all-purpose agent of change. As compared with other means of reaching our social goals, the technological has come to seem the most feasible, practical, and economically viable. It relieves the citizenry of onerous decision-making obligations and intensifies their gathering sense of political impotence. The popular belief in technology as a — if not the — primary force shaping the future is matched by our increasing reliance on instrumental standards of judgment, and a corresponding neglect of moral and political standards, in making judgments about the direction of society. To expose the hazards embodied in this pivotal concept is a vital responsibility of historians of technology.

Monday, September 13, 2010

lethargie

In my last post about Infinite Jest I mentioned the philosophical-theological-spiritual problem of the interesting. With that in mind, it’s . . . um . . . interesting? — no, let’s say it’s thought-provoking to note this excerpt from The Pale King, the novel Wallace left unfinished at his death. Here Lane Dean, Jr., a worker for the IRS, is thinking about boredom — and I will indicate by ellipsis the many sentences I am leaving out, which (as you will see if you read the excerpt) tell us about all the things that are (of course) distracting Lane Dean, Jr. as he tries to think about boredom:

Donne, of course, called it lethargie, and for a time it seems conjoined somewhat with melancholy, saturninia, otiositas, tristitia; that is, to be confused with sloth and torpor and lassitude and eremia and vexation and distemper and attributed to spleen — for example, see Winchilsea’s “black jaundice,” or, of course, Burton. . . . Quaker Green in, I believe, 1750 called it “spleen-fog.” . . . And then suddenly up it pops. Bore. As if from Athena’s forehead. Noun and verb, participle as adjective, whole nine yards. Origin unknown, really. We do not know. Nothing on it in Johnson. Partridge’s only entry is on “bored” as a subject complement and what preposition it takes, since “bored of,” as opposed to “with,” is a class marker, which is all that ever really concerns Partridge. Class class class. The only Partridge Lane Dean knew was the same TV Partridge everybody else knew. He had no earthly idea what this guy was talking about, but at the same time it unnerved him that he’d been thinking about “bore” as a word as well, the word, many [tax] returns ago. Philologists say it was a neologism, and just at the time of industry’s rise, too, yes? Of the mass man, the automated turbine and drill bit and bore, yes? Hollowed out? Forget Friedkin, have you seen “Metropolis”? . . . Look, for instance, at L. P. Smith’s “English Language,” ’56, I believe, yes? . . . Posits certain neologisms as “arising from their own cultural necessity” — his words, I believe. Yes, he said. When the kind of experience that you’re getting a man-sized taste of becomes possible, the word invents itself. . . . Someone else had also called it that: “soul-murdering.” Which now you will, too, yes? In the nineteenth century, then, suddenly the word’s everywhere; see, for example, Kierkegaard’s “Strange that boredom, in itself so staid and solid, should have such power to set in motion.” . . .

. . . Note, too, that “interesting” first appears just two years after “bore.” 1768. Mark this, two years after. Can this be so? . . . Invents itself, yes? Not all it invents.

Friday, September 10, 2010

defining books down

Hugh McGuire: "A book properly hooked into the Internet is a far more valuable collection of information than a book not properly hooked into the Internet." Okay. But what about books whose purpose is not to be a "collection of information"?

one reader's report

So I recently got an interesting email from my friend and editor Rod Dreher — you do read Big Questions Online, don't you? — who tells a thought-provoking story about the combined effects on a reader, namely him, of (a) an iPad and a (b) sabbatical from blogging. With his permission I share it with you:

So, I burrowed in last night to read an hour of [Jonathan Franzen’s] "Freedom," and ended up staying on the couch for two hours, until I finished the book ... er, novel; I was reading it on my iPad, so it wasn't really a book. This morning, I tried to recall the last time I had finished a novel, or finished any book (I've always got several going at any given moment). I couldn't. Partly this is because Franzen's novel is such a good read, but I think mostly it's because I was in the habit of stopping whatever I was doing to blog about a compelling insight, or even simply to blog a moving passage of whatever I was reading. It occurred to me this morning that this way of reading worked hard against allowing a narrative to sink its hook into me. I was never able to give myself over completely to the narrative, fictional or non-fictional, because I was always standing outside of it, ready to talk about it online — and I would stop reading cold to go do that. It made for good blogging, I think, but a book never was able to cast its spell. Being away from blogging for three weeks may — may — have given me back my ability to experience a book as it ought to be experienced. It was kind of, I dunno, exhilarating.

Thursday, September 9, 2010

Lehrer on "the future of reading"

Here’s a terrific post from Jonah Lehrer, with this sparkful idea:

The act of reading observes a gradient of awareness. Familiar sentences printed in Helvetica and rendered on lucid e-ink screens are read quickly and effortlessly. Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.

So here’s my wish for e-readers. I’d love them to include a feature that allows us to undo their ease, to make the act of reading just a little bit more difficult. Perhaps we need to alter the fonts, or reduce the contrast, or invert the monochrome color scheme. Our eyes will need to struggle, and we’ll certainly read slower, but that’s the point: Only then will we process the text a little less unconsciously, with less reliance on the ventral pathway. We won’t just scan the words – we will contemplate their meaning.

See also this post by Tim Carmody. The technologies associated with reading — and punctuation and printing styles are technologies — have always been in flux, but now they’re fluxing faster than they used to. But what if developing technologies allow us to situate our reading environment at any of the previous points in the history of reading? What if the future of reading can also be the past of reading? Now that would be cool.

Wednesday, September 8, 2010

tweet news

Dear readers, if you look to the right side of the screen you'll see a change in the Twitter feed: instead of tweets coming from the @textpatterns account, you'll see tweets tagged #textpatterns. I expect that most of these will be from me, but it would be a great way for my readers to contribute to the conversation of this blog. So if you come across a story or post online that seems relevant to the concerns of this blog — or you have a thought of your own that seems relevant — tag it #textpatterns and it'll show up in the box on the right. And it may well give me (and others) some new and interesting things to think about.

Rortyan hacking

Cathy Davidson:

A “hack” is a reconfiguration or reprogramming of a system to function in a way different than that built into it by its owner, designer, or administrator. The term can run the gamut from a clever or quick fix to a messy (kludgy) temporary solution that no one’s happy with. It can refer to ingenuity and innovation — or sinister practices that border on the criminal. We hope to avoid the kludge and don’t plan on breaking any laws. But reprogramming traditional learning institutions so they function in a different, more original, and more efficient way than is intended by current owners and administrators? Sign me up!

When David Theo Goldberg and I came up with our incendiary definition of “institution” as a “mobilizing network,” deconstructing the very solidity and uniformity of “institution” by emphasizing the potential for unruliness among its constituent members, we were hacking the institution.

Cathy Davidson has some good ideas at times, but heavens! — the self-regard is pretty thick. Saying that you’re going to define “institution” as a “mobilizing network” — not actually doing anything, but just choosing in your own conversations with people you already know to redefine a term — is “incendiary”? And is “hacking”?

I blame Richard Rorty, because it was Rorty who argued (in Contingency, Irony, and Solidarity and elsewhere) that the chief task of philosophy is not to make iron-clad arguments but to redescribe the world. “The method is to redescribe lots and lots of things in new ways, until you have created a pattern of linguistic behavior which will tempt the rising generation to adopt it, thereby causing them to look for appropriate forms of nonlinguistic behavior.”

Nice work if you can get it, because this “method” never asks you to change how you live one iota. All you have to do is talk, and leave “appropriate forms of nonlinguistic behavior” to the “rising generation.” So all Cathy Davidson and Theo Goldberg have to do is to say “an institution is a mobilizing network,” and Shazam! — the university is incendiarily hacked.

It’s always good in this context to note Umberto Eco’s account, in Kant and the Platypus, of a debate he had with Rorty on these matters:

Rorty also alluded to the right we would have to interpret a screwdriver as something useful to scratch our ears with. . . . A screwdriver can serve also to open a parcel (given that it is an instrument with a cutting point, easy to use in order to exert force on something resistant); but it is inadvisable to use it for rummaging about in your ear, precisely because it is sharp and too long to allow the hand to control the action required for such a delicate operation; and so it would be better to use not a screwdriver but a light stick with a wad of cotton at its tip.

Words may not be particularly resistant to redescription, especially if you’re among like-minded people; but screwdrivers and institutions (such as the university) and other things are much more recalcitrant. Genuinely hacking them is harder and riskier, which makes it tempting to follow the safer route of redescription. Leave the hard labor of tangible change for the “rising generation.”

As for me, I'm putting more trust in the alt-ac crew to actually, you know, do things differently.

Tuesday, September 7, 2010

let the reader understand

Via BoingBoing. You're welcome.

read, mark, learn

Over the years I have developed a personal vocabulary of book annotation: I underline, I star, I write marginal comments, I use arrowing lines to link words that I have circled, I employ a range of punctuation marks that (for me) have distinctly different meanings. When I am reading on my Kindle — and the third-generation Kindle is a major upgrade, by the way — I can really only use underlining and commenting, and only the underlining is visible as I read: to see anything else I have to click on a link and go to a different screen, which makes starring and question-marking and the like pretty useless.

So how will this situation change? Will e-reading software provide more sophisticated means of annotation — as PDF readers have, for instance — or will readers simply adapt their annotative habits to the limitations of the new technology?

Monday, September 6, 2010

pockets and watches

I try to wear jeans or jeans-like trousers whenever possible — “five-pocket style,” it’s sometimes called, the fifth being the watch-pocket that’s tucked just above the right front one. Of course, such a pocket is in one sense as atavistic as an appendix, since no one carries pocket watches anymore, but in most of my jeans it serves as a nearly ideal receptacle for an iPhone. It was utterly ideal when I carried a smaller phone, but the iPhone sticks out a little too much, usually, and I am hoping that future trouser designers will take us smartphone users into account and (a) keep the fifth pocket where it is while (b) making it a little deeper.

I love pocket watches, though, and mechanical watches and clocks in general. Whenever I’m in London I have to visit the tiny museum of the Worshipful Company of Clockmakers and gaze once more at the John Harrison timepieces at the Royal Observatory in Greenwich. If I were ever filthy rich, the great indulgence tempting me would not be a second home in the Virgin Islands or a Lamborghini but rather a watch made by George Daniels, the greatest living horologist.

My love of pocket watches in particular has a clear source. My paternal grandfather, Elisha Creel Jacobs, was an engineer for the Frisco railroad, and since we lived with my grandparents — we were too poor to have our own house — I was always around when he came home from one of his long runs west. I loved his leather bag, like a doctor’s, which contained his distinctive engineer’s hat and some red kerchiefs and a couple of plain aluminum cans of purified water and a few fusees. The fusees fascinated and terrified me. But more than I loved the bag I loved his engineer’s pocket watch on its gold chain, given to him after some period of service — twenty-five years, I think — by the railroad. He kept that not in his bag but tucked into the watch pocket of his dungarees or overalls, and I suppose he got tired of having to take it out and let me play with it, but if so he never showed it.

When he was dying of lung cancer — in our home, thank God, and not in the hospital — and could no longer get out of bed, I helped my grandmother turn him over to wash his ruined body and dress his bed sores, and several times he told me that while he had put away the watch for safekeeping, he wanted me to have it when he died. I was honored and thrilled by this wish. But after he died no one could find the watch; or so I was told. Only years later did I learn that my father had pawned it to fund an alcoholic binge. It had disappeared and could never be recovered.

On one visit to the Royal Observatory my wife bought me a present: a lovely little pocket watch with a glass face that allows me to watch the works enact their precise and regular dance. I never get tired of watching this performance, but I rarely carry the watch around. Largely that’s because it’s easier to carry the all-in-one iPhone; but it’s also true that as much as I love pocket watches, they tend to make me a little sad.

Thursday, September 2, 2010

the last jest

So Infinite Summer, last year’s online book group for reading Infinite Jest, concluded with some questions for a round-table of readers. I thought I would wrap up my own thoughts about IJ by answering those questions. I would say that spoilers follow, but IJ really isn't that kind of book. After all, the first chapter occurs about a year after the book’s other events.

How about that ending, huh?

People who are disappointed by the ending are people who want closure; but it’s kind of hard to imagine that you could get very far into IJ and expect it to end neatly. I find the ending moving but ambiguous. “And when he came back to, he was flat on his back on the beach in the freezing sand, and it was raining out of a low sky, and the tide was way out.” It doesn't seem to me that this can be interpreted in any strictly literal sense. What would have to happen for a man in a hospital recovering from a gunshot wound to awaken on a freezing cold beach? Who would have deposited him there, and why? And would he survive such treatment? I tend to think that Gately is dying, and he doesn't “come back to” this world but passes into another one.

Many readers will say that this can't be, because we know that later he helps Hal dig up the buried head of JOI, but I don't think we do know that. I tend to believe that that is not an actual event but a shared vision (or dream, or nightmare) that results from the increasingly overlapping consciousnesses of Hal and Gately.

However, the best attempt I have seen to draw together the plot’s loose threads is that of Aaron Swartz, and it’s a very different take than mine.

What do you think happened to Hal?

I blame the mold he ate.

Do you feel bad about Orin’s fate?

Sure. Orin is cynical but also wounded, so his transformation from someone who thinks of women as Subjects to being a Subject himself seems both thematically appropriate and materially excessive. Also, considering DFW’s general resistance to neat endings, it’s rather too appropriate, and the echo of 1984 strangely literal and explicit. So that part of the ending is less like DFW than anything else in the book, I think. (Though if we conclude that Orin was the one who sent the Entertainment to the medical attaché, possibly seeking payback for the man's affair with Avril, and if we conclude that he knew what watching the Entertainment would do to someone, then Orin becomes more deserving of a harsh fate.)
What about the other unanswered questions? Was Joelle truly disfigured? Was the wraith real?

I think that when Joelle tells Gately, “Don, I’m perfect. . . . I am so beautiful I am deformed,” she is telling the exact literal truth. The status of the wraith is necessarily ambiguous, in much the same way as the status of the elder Hamlet’s ghost is ambiguous.

Looking back, do parts of the novel that seemed superfluous at the time now make sense?

Not at the moment. The parts that seemed superfluous as I was reading still seem superfluous now. But I suspect that this is the rare book that changes in one’s memory. Who knows what I will think in a year’s time?

Were the hours (days, weeks…) spent reading the book well spent? Do you regret reading the book at all?

I think the time was well spent. There were many times when I was reading it that I looked longingly over at two other books I contemplated choosing instead of IJ for my end-of-the-summer reading — Mark Helprin’s A Soldier of the Great War and Christina Stead’s The Man Who Loved Children — but those, while they would have probably been more purely enjoyable for me, would also have been less challenging and not as closely connected to my chief interests.

Did Infinite Jest change your life?

I don't think so, but again, we’ll see. I think it’s probably the most incisive exploration of what Kierkegaard called the aesthetic life — the need for, the addiction to, the interesting — that we’ve seen since, well, Kierkegaard. In this context Auden once wrote, “All sin tends to be addictive, and the terminal point of addiction is what is called damnation.” That strikes me as a pretty good one-sentence summary of Infinite Jest. But of course the very idea of a “one-sentence summary of Infinite Jest” is intrinsically laughable. A bad jest.

I still think it would have been a better 600-page book than it is at 1100 pages. But hey, I know people who feel the same way about Black Lamb and Grey Falcon, and they’re wrong, so maybe I’m wrong about this one. Let me just say finally that I have never read a book which so combines multiple varieties of intellectual ambition and sheer big-heartedness, and for that distinctive combination alone Infinite Jest is rightly going to be read and celebrated for a very long time.