Text Patterns - by Alan Jacobs

Saturday, July 31, 2010

judging Twitter

A while back I commented on David Carr's enthusiasm for Twitter: "Twitter helps define what is important," etc. etc. Now Peggy Orenstein comes to the same magazine with fear and trepidation: "Each Twitter post seemed a tacit referendum on who I am, or at least who I believe myself to be."
My verdict? Y'all are seriously overthinking this.
I love Twitter, though I don't think it defines me for myself or for anyone else. The two best things about Twitter: asymmetry (you don't have to follow anyone else just because they're following you, and vice versa) and the ability to have multiple overlapping conversations with different sets of friends.

Friday, July 30, 2010

iPad update

I returned it.

three days of the iPad

1) If you're just browsing the web, and don't need to look at any Flash sites, it’s hard to beat. Fast and smoothly intuitive. You can't switch pages as quickly as you can switch tabs on a standard web browser, though, which can be a little awkward at times. But the double-tap-to-zoom-text thing, borrowed from the iPhone, is brilliant. It makes the wonderful Readability and Safari 5’s Reader feature completely unnecessary.

2) Gmail on the iPad is very cool: a combination of the built-in Mail app’s two-pane system with Gmail’s conversation view.

3) If I end up using this thing much, other than for travel, it will probably be to make Keynote presentations for class. (And for class use more generally.) I’ve tried creating a few sample Keynotes, and moving content around — copying and pasting links and images, for instance — is pretty awkward right now: lots of tapping. But in this case I don't think it’s a design problem so much as a UI I’m not yet used to. I expect it will get better with time.

The question will be whether the portability and usability of the iPad make it worth my trouble to deal with an extra gadget, when I can just make Keynote presentations on my MacBook and carry that to class. Right now the iPad feels like it may be one gadget too many — but if (when?) Apple comes up with a way to transfer files seamlessly and wirelessly between Macs and iPads, via a Dropbox-like system, then we’ll be golden. I think we’re headed towards a time when people can have multiple devices that stay in constant sync with one another, so whichever one you happen to pick up will have the resources you need (adapted to the limitations of the device, of course). That’ll be a good day — at least for those who aren't worried about living in the Cloud.

4) The on-screen keyboard is okay, but awkwardly sized, as can be seen even in Apple’s promotional videos. It’s too big to type thumb-style, too small to type conventionally (and there’s nowhere to rest your palms). So your hands have to hover over the screen and peck gently at the keyboard, like especially delicate little birds. I have an Apple Bluetooth keyboard, and that works fine, but you have to go into the Settings and pair it with the iPad, then unpair it when you’re done to reactivate the on-screen keyboard. That’s cumbersome.

There are no serious text-editing apps for the iPad; I wonder whether there ever will be. This is an issue for me because I detest proprietary text formats and avoid them at all costs. As I have noted before, I write almost everything in plain text with Markdown formatting, and since I wrote that earlier post I have pretty much abandoned Pages and write anything I need to print out in LaTeX. Nothing about my research-and-writing workflow is compatible with the iPad, and I can't see that it ever will be.

5) As a book-reading device it has virtues and vices. I've used the Kindle app, and the size of the screen solves many of the Kindle's typographical nightmares, but the iPad is HEAVY. That becomes a real problem after a while. Reading with one hand is not an option for the long term unless you want to bodybuild as you read. The iPad’s highly reflective screen makes it impossible to read anything on it, not only outdoors but in any place where there’s strong direct light. Marco Arment nails these problems and others, but Marco is also the maker of Instapaper, one of the greatest web services evar, and Instapaper looks utterly magnificent on the iPad.

6) You can't use an iPad without hooking it up to another computer first, which is just plain stupid.

Overall: it’s cool. It’s magnetically attractive. But it won't help me do much of what I need to do, and I kinda wish I didn't have it.

Thursday, July 29, 2010

one more reason to deplore the rise of e-readers, I guess

David Barnett:

Well, is it just me, or … look, does anyone else have an unhealthy obsession not just with what people have on their bookshelves but what they're actually reading right there and then? Does anyone else stare unashamedly at the paperback that is tucked under someone's arm while they sort through their purse for change in the queue at Boots? Does anyone else have a better memory for the novel poking out of a new acquaintance's pocket than that person's face or name?

And is anyone else facing up to the prospect of summer with a slight feeling of nameless dread, because they know they'll be walking through a park or by a pool or along a beach with their head at an angle, craning to see the spine or cover of whatever the nearest person is reading?

Ah well, perhaps it just is me, then. But if there are others like me, they'll understand why summer can be problematic. Have you ever tried to explain to someone in a pair of Speedos or a tiny bikini that, no, actually, you were looking at the book they had balanced on their tummy? Me neither. But that day will surely come.

The context for these reflections is praise for The Book Depository, especially this little feature. And yes, it’s pretty darn cool.

Wednesday, July 28, 2010

assignment bleg

So, in the next few days I'm going to be working on my syllabi for the upcoming semester. One of my classes is called "Classical and Early British Literature": it's one of those big surveys of Western Culture's Greatest Hits that runs, in this case, from Homer to Shakespeare. Most of the people taking the class are likely English majors; most of them are freshmen. I am required to assign them one big research paper, and I often assign two — but this time I want the second chief assignment to be something different, something that helps them to be more thoughtful and creative users of the online resources that they otherwise will draw on pretty unreflectively. I have some ideas, but I'd like to hear from my readers. Can y'all come up with some creative assignments for me?
(I should add that while for other classes I sometimes create class blogs, I probably won't do that this time.)

structured and unstructured

One of the coolest applications for the Mac is Notational Velocity, an extraordinarily simple yet also innovative note-taking program. I’ve been using it for the past year or so and really digging its UI: when I want to make a note about something, I use a hotkey combination to activate the program, and then I just start typing. I can keep on typing until I’m done, or I can type a title, hit return, and then just continue, because what I type after that will be the body of the note. To find something, I just type a word into the same box and NV runs an instantaneous search. I can have my notes in rich text or plain text: I choose plain, but even in plain text NV recognizes links and makes them clickable. Also, it saves everything you type automatically and instantaneously, and can be synchronized with Simplenote or WriteRoom for the iPad and iPhone. It’s fantastic. It’s also free.

However, I recently stopped using it. Weird, huh? But here’s my reason: what’s great about NV is that it’s totally simple and unstructured and makes text entry utterly frictionless. But, oddly, that has become a problem for me. I can get things into NV with an absolute minimum of effort and delay — but then I tend to forget what’s in there. Yes, I could search, but I don't often think to search. I forget that NV is there until I need to put something in it — but we put stuff into apps like NV because we want to get it out at some point, right?

Let me try to make this more clear. In 2005 I started using Backpack and have used it off and on ever since — and that’s what I just went back to. Entering information in Backpack is much clunkier than entering it in NV: you have to click a link to create a list or a note or a new entry in a list, then you type it in, then you click a button to save it. If you want to edit it you have to click an “edit” link before you can type anything. You have to decide whether something is going to be a note or a list, and if you have multiple pages, which pretty much every Backpack user does, you have to decide what page you want to put it on. A lot of trouble.

But see, trouble in this case equals structure: Backpack effectively forces you to impose a structure on your data, to organize it to some degree before you even enter it. And I have found that for this very reason, when I enter data in Backpack I remember it better and can find it more readily later. Or maybe it would be better to say that I am (internally) prompted to find it because of the structure I have had to impose before entering it. I also find myself scanning my Backpack pages from time to time, which reminds me what I’ve put in them.

Beyond that, I don't really understand why Backpack works with my brain, but it does, and Notational Velocity — even though it’s manifestly cooler and more innovative and features my beloved plain text — really doesn't. That’s just the way it is.

And let me just take this opportunity to say that still, after several years, nothing has come close to matching the combination of simplicity and structure of Stikkit. I would probably have been using Stikkit for the rest of my life if its developers hadn’t abandoned it and refused even to put it up for sale. Bizarre, and immensely frustrating. The product was effectively abandoned by mid-2007 and shut down completely in late 2008, and I still haven't gotten over it.

Tuesday, July 27, 2010

where "we" are

Peggy Nelson:

We’ve moved from the etiquette of the individual to the etiquette of the flow.

Question: Who are “we”?

This is not mob rule, nor is it the fearsome hive mind, the sound of six billion vuvuzelas buzzing. This is not individuals giving up their autonomy or their rational agency. This is individuals choosing to be in touch with each other constantly, exchanging stories and striving for greater connection. The network does not replace the individual, but augments it. We have become individuals-plus-networks, and our ideas immediately have somewhere to go. As a result we’re always having all of our conversations now, flexible geometries of nodes and strands, with links and laughing and gossip and facts flying back and forth. But the real message is movement. . . .

Eventually I learned to stop worrying and love the flow. The pervasiveness of the new multiplicity, and my participation in it, altered my perspective. Altered my Self. The transition was gradual, but eventually I realized I was on the other side. I was traveling with friends, and one of them took a call. Suddenly, instead of feeling less connected to the people I was with, I felt more connected, both to them and to their friends on the other end of the line (whom I did not know). My perspective had shifted from seeing the call as an interruption to seeing it as an expansion. And I realized that the story I had been telling myself about who I was had widened to include additional narratives, some not “mine,” but which could be felt, at least potentially and in part, personally. A small piece of the global had become, for the moment, local. And once that has happened, it can happen again. The end of the world as we know it? No — it’s the end of the world as I know it, the end of the world as YOU know it — but the beginning of the world as WE know it. The networked self is a verb.

Question: In the Flow, is there any reason not to text one person while you’re having sex with another one?

How might this apply to storytelling? It does not necessarily mean that every story must be, or will become, hopelessly fragmented, or that a game mentality can or should replace analysis. It does mean that everyone is potentially a participant in the conversation, instead of just an audience member or consumer at the receiving end. I think the shift in perspective from point to connection enables a wider and more participatory storytelling environment, rather than dictating the shape of stories that flow in the spaces.

Ah, it’s consumption vs. creation again. Question: In the Flow, is there ever any value to listening? Or, to put it another way: In the Flow, are “listening” and “consuming” distinguishable activities?

Monday, July 26, 2010

technologies of the seminar room

There’s a really interesting conversation at Brian Croxall’s site on integrating digital technology into English classes — more particularly, into graduate seminars. Turns out that it’s hard to think of ways to do this well. And while Brian is anything but a thoughtless technophile, there is something rather telling in how he poses the problem: “We had a lively discussion, but at the end we felt a bit stumped. What was getting in our way was the format in which the English graduate seminar tends to be taught. . . . Any way you take it, the result is that much of the seminar’s time ends up being devoted to discussion that is centered around a couple of texts.”

The implicit assumptions here — which Brian in the comments says he does not endorse — are that (a) we have these technologies available, (b) available technologies need to be used, and (c) a method of teaching that doesn't invite these technologies is “getting in the way.” In the post’s comments Amanda French picks up on these assumptions and responds to them very helpfully:

I think you’re bang-on that “The integration of technology into an English graduate seminar classroom, in other words, poses questions about how we’re training the next group of scholars, about our pedagogy, and about how we’ve done things for the last X-number of years.” “How to do grad education better” is a very different conversation from “What’s the role of technology in the classroom,” though, and I think the former is the conversation you should be having, because I have to say that this post is a bit tech-foisty. To write that “it will be difficult for faculty to integrate the new tools into their graduate seminars” implies that faculty are under some kind of mandate to integrate the new tools, which I for one would emphatically deny. It wouldn’t be difficult for some people to integrate new tools, but they don’t want to, and are right not to, given that their goal is teaching “the methodology of literary studies.” Which is another way of saying that the point of most grad seminars is to professionalize the grad student.

Amanda goes on — I’m using first names as if I know these people, though I don’t; hope that’s okay — Amanda goes on to describe the problems with this model of socializing the graduate students into a profession that is rapidly becoming unrecognizable, so this post and its comments give us a twofer: a thoughtful discussion of technology in the classroom and a thoughtful discussion of how my profession is changing. And of course those two themes are linked in ways that we scarcely yet understand.

(By the way, Brian just talks about “Technology in the Graduate English Seminar” — I added the adjective “digital.” I wish people would think more of all classroom experiences as experiments in technologies: after all, the traditional English grad. seminar is deeply invested, though often unconsciously, in exploring the widely varying technologies of the book and of print culture more generally. My own inclination, which I should probably spell out in detail some time, is to say that digital technologies are most helpful to the student of the humanities in her work outside the classroom: blogs, wikis, multimedia assignments, etc. All of those are great additions to the pedagogical toolbox. But in the classroom the best use of our time is usually to investigate books together.)

Friday, July 23, 2010

a new endeavor

Folks, just posted is the first of my monthly columns for Big Questions Online. It deals with topics congruent with those of this blog. Please check it out.

reply to a (s)critic

In a comment to an earlier post that I should have replied to long ago, scritic wrote:

But most people don't have the kind of tastes you do, they don't want to read Tolstoy and then blog about it; but they do have other interests. So they participate in discussion forums about TV shows, they post pictures to Lolcats or Flickr, etc etc. I'm just saying that someone posting to Lolcats is still doing something more productive (for society) than someone who reads Proust and keeps it to himself.

That last sentence got a good reply from Michael Straight:

1) Someone who spends 99% of his time reading to himself and 1% writing about it might be contributing more to society than the person who makes a LOLcat everyday.

2) Someone who only reads without ever blogging about it or otherwise producing anything directly related to their reading might, as a result of being formed by their reading, become the sort of person who contributes more to society than the LOLcat artist.

3) I value the intrinsic worth of some reading more than I value making a personal “contribution to society” in the sense you are talking about.

I endorse all three of those points.

I also want to add this: I don't think Tolstoy vs. lolcats is just a matter of taste. To be sure, not everyone needs to read Tolstoy; most people don't need to read Tolstoy. It would be nice if more people did, but it’s not socially or personally necessary.

What is necessary, I think, is for all of us to be engaged in some activity that challenges us, that tests our intellectual limits. For some people that might be reading Tolstoy, while for others it might involve writing code or learning Klingon. But as Lanier says, “You have to be somebody before you can share yourself,” and being somebody is an achievement. It requires intentional labor, and a degree of personal ambition — and anyone can work and strive, though some have farther to go than others. But a lot of fooling around on the internet is just that, fooling around: it doesn't test our resources or stretch our capacities. In many cases that’s fine, because we shouldn't be working all the time: but even if fooling around on the internet really does somehow increase social creative capital — which I have no reason to believe — it doesn't achieve a damned thing for the person doing it.

Wednesday, July 21, 2010

all that said . . .

. . . aren't we debating all kinds of really fascinating things these days? In that sense, and in many others, it's a great time to be alive. As someone who grew up torn between an interest in science — throughout my childhood I was sure I would grow up to be an astronomer and when I started college I was still pretty sure that was my path — and a love of stories and poems, I am especially excited by all the ways the sciences and humanities are converging. Oh to be young again!

sentenced to read

Come on, I’m an English professor, you think I’m not going to link to this?

Rouse is one of thousands of offenders across the US who, as an alternative to prison, are placed on a rehabilitation programme called Changing Lives Through Literature (CLTL). Repeat offenders of serious crimes such as armed robbery, assault or drug dealing are made to attend a reading group where they discuss literary classics such as To Kill a Mockingbird, The Bell Jar and Of Mice and Men.

Rouse's group was run by part-time lecturer in liberal studies at Rice University in Houston, Larry Jablecki, who uses the texts of Plato, Mill and Socrates to explore themes of fate, love, anger, liberty, tolerance and empathy. "I particularly liked some of the ideas in John Stuart Mill's On Liberty," says Mitchell, who now wants to do a PhD in philosophy.

Groups are single sex and the books chosen resonate with some of the issues the offenders may be facing. A male group, for example, may read books with a theme of male identity. A judge, a probation officer and an academic join a session of 30 offenders to talk about issues as equals.

Of the 597 who have completed the course in Brazoria County, Texas, between 1997 and 2008, only 36 (6%) had their probations revoked and were sent to jail.

A year-long study of the first cohort that went through the programme, which was founded in Massachusetts in 1991, found that only 19% had reoffended compared with 42% in a control group. And those from the programme who did reoffend committed less serious crimes.

Seems like a lot of trouble when you could just link them to some lolcats pages or show them how to post abusive comments on blogs. . . . okay, okay, just kidding. That horse I’m beating is so dead.

I will say, though, that that one guy should be sent back to jail until he promises not to add to the ranks of philosophy PhDs looking for jobs.

last round with Shirky

So to get to the heart of the matter that I’ve been discussing in the previous two posts: I doubt that Clay Shirky writes lolcat captions. It would be a waste of his time, wouldn't it? He has better things to do, doesn't he? After all, as his Wikipedia bio shows, this is a guy who has spent his whole adult life in culturally elite institutions — which, let me add, is just where he ought to be, given his sheer smarts, his lively imagination, and his intellectual ambition.

But when a guy like that says to millions of other people, “You folks just go ahead and make your lolcats and add stuff to your MySpace pages; we all have our own contributions to make, however small they might be, to the collective knowledge” — isn't there something deeply condescending about that? Isn't the implication quite strong that people should content themselves with their jokes and status updates because they really aren't capable of anything more demanding?

Which, I think, accounts for my excessive annoyance at Shirky’s line of thought: I come from the lolcats-and-MySpace classes. But because lolcats and MySpace didn't exist when I was growing up, and because my parents happened to be readers, I was able to assemble — largely by myself, because my education up through high school was poor at best — a framework of intellectual possibility that I was ultimately able to pursue and inherit. Though not without a great deal of work and many blind stumbles and detours along the way. (And, I might add, I acquired this vision largely through reading the kinds of books that many, perhaps most, people in my profession dismiss as trash. But that’s a topic for another post, or essay, or book.)

So I suppose I’m a little touchy about Shirky’s arguments because they diminish, or perhaps dismiss altogether, the value of my own early aspirations, and the labor I put in to achieve them. But there’s another, less personal and perhaps less subjective, way to resist Shirky’s model, and that’s the one that Jaron Lanier offers in You Are Not a Gadget: for Lanier, the celebration of the crowd or the hive comes at an unacceptably high price when it leads to the diminishment of the person — indeed, of the very idea of personhood. I’ll return to these ideas at some point, but for now I just want to note and endorse something Lanier says at the outset of his book, in a sentence that incisively undermines Shirky’s blithe confidence that every form of online participation is both generous and creative: “You have to be somebody before you can share yourself.” And the process of becoming somebody takes time, effort, discipline, and study. It doesn't happen through posting lolcats.

Tuesday, July 20, 2010

Shirky and me, continued

So, again, how do those exciting autobiographical revelations from my last post relate to Clay Shirky’s ideas about cognitive surplus? Let me explain. No, there is too much. Let me sum up. . . .

So Shirky’s key idea — which is expressed fully enough in a talk he gave in 2008; the new book doesn't add all that much, as far as I can tell — goes something like this:

1) Thanks primarily to a series of technological developments, many millions of the people in the developed nations have more free time than they used to have;
2) Many of them are using this time to think, make, and do, especially online (which is a lot better than sitting around consuming TV programs and maybe books also);
3) The activities generated by this “cognitive surplus” have already had some wonderful consequences, best exemplified in the great hive-mind achievement that is Wikipedia;
4) And these activities will produce greater achievements in the future, thanks to the power of the wisdom of crowds: eventually quantity simply produces quality. So — to use the example from Shirky’s book that has drawn the most attention — go ahead and play with your lolcats, because by so doing you are making your tiny but fundamentally "generous" and "creative" contribution to the hive mind.

It’s the last point that’s the key. An example often given to illustrate the wisdom of crowds is this: bring an ox into the village square, get everyone in the village to guess its weight, average their guesses, and you’ll end up with something closer to the correct number than any one expert is likely to produce. (Sometimes the illustration involves guessing the number of marbles in a jar, or something like that — you get the idea.) Shirky is banking heavily on this principle being operative in every aspect of human culture, in such a way that while any one person’s contribution to any particular endeavor may have infinitesimal value, economies of scale mean that the total achievements of the crowd taken as a whole will be vast.
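(For anyone who wants to see the arithmetic behind the ox story, here is a minimal sketch in Python. The weight, the crowd size, and the spread of the guesses are all invented for illustration; the point is only that averaging many independent, unbiased guesses tends to land far closer to the true value than a typical individual guess does.)

    import random

    # Minimal sketch of the ox-weighing illustration (all numbers invented):
    # many noisy individual guesses, averaged together, land closer to the
    # true value than a typical single guess does.

    TRUE_WEIGHT = 1198   # pounds; the "real" weight of the ox
    CROWD_SIZE = 800     # number of villagers guessing

    random.seed(42)
    guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(CROWD_SIZE)]

    crowd_average = sum(guesses) / len(guesses)
    typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

    print(f"crowd average is off by {abs(crowd_average - TRUE_WEIGHT):.0f} lbs")
    print(f"a typical individual guess is off by {typical_error:.0f} lbs")

The trick only works, of course, when the guesses are independent and roughly centered on the truth, which is exactly the condition at issue in what follows.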

But is this true? Are there really no limits to the wisdom of crowds? And will economies of scale really take care of everything? Well, Jaron Lanier — whose book You Are Not a Gadget, while it came out before Shirky’s, is a kind of refutation of Shirky’s key claims — isn't buying it. There are some problems that crowds just don't have the wisdom to solve, because some problems call for expertise. “It seems to me,” Lanier writes, “that even if we could network all the potential aliens in the galaxy — quadrillions of them, perhaps — and get each of them to contribute some seconds to a physics wiki, we would not replicate the achievements of even one mediocre physicist, let alone a great one.”

Lanier points out that the go-to example for celebrants of such wisdom is Wikipedia, but (this is my point, not Lanier's) the only area in which Wikipedia provides anything that can't be found elsewhere is popular culture, and even in that case all it gives us is lists and summaries. Really cool lists and summaries, in many cases — how about the names of all the spaceships in Iain M. Banks's Culture novels? — but nothing that creates or innovates. (In fact, creation, innovation, and "original research" are forbidden on Wikipedia.) In the sciences, Wikipedia simply copies and pastes what has been discovered and published elsewhere; and in the humanities, the less said the better. Much is taken verbatim from public-domain print dictionaries.

There's a funny moment in You Are Not a Gadget when Lanier imagines his 1980s self looking into the future to discover the great achievements of the hive mind, only to find that they are a variant of Unix (Linux) and an encyclopedia. That's it? That's the best you got?

And in the meantime, what becomes of the person who is devoting most of her cognitive surplus to making lolcat captions, because Clay Shirky told her that by doing so she’s doing her part for the hive mind? That’s the question that leads us back to my own story — but that will require at least one more post.

Monday, July 19, 2010

Are you talking to me??

Here, via Slashdot.

Shirky and me

My suspicions of Clay Shirky’s techno-optimism — detailed in several recent posts on this blog — have a genuine intellectual foundation, but I’m also aware that Shirky’s arguments annoy me more than they ought to. I’ve been trying to figure out what’s bugging me, and I think I may have isolated it. Bear with me while I descend through the mists of memory. . . .

I have always disliked it when people use stories of their “humble origins” for rhetorical and moral leverage, which has led me to keep generally silent about my own history. But briefly: I grew up in a working-class family in Birmingham, Alabama. Through most of my childhood my father was in prison; my mother worked to pay the bills, which meant that I was effectively raised by my paternal grandmother, with whom we lived. My grandfather was an engineer for the Frisco railroad, whose frequent long freight hauls meant that we rarely saw him; and when my father got out of prison he worked as a night dispatcher for a trucking company, so he was a ghostly presence as well. (I mainly remember him sleeping on the sofa.) No one in my family had ever attended college, nor did anyone see the value in it: when I decided I wanted to go, my parents had no objection as long as I paid for it myself — which, in the end, I did; every penny. I never worked less than 24 hours a week when I was a full-time student, until my senior year, when I took out loans so I could take an overload to finish my degree. (Even so, it took me five years to graduate, because twice I had to sit out semesters to work full-time to save for tuition.)

But one great gift I received from my family: reading. My mother, grandmother, and father all read constantly. We had televisions, and we used them; my father never turned off the TV: in his house it remained on 24 hours a day once full-time programming became the norm, and he was known to leave it on even when he left town on vacation. But what happened on the tube was largely background noise for readers.

It was all genre fiction: romances for my mom, mysteries for my grandmother, science fiction and Westerns for my dad. I learned to read when I was three, and after a brief period with Dr. Seuss I moved on: I wanted to read what others in my family were reading — well, except for mom’s romances (yuck). So by the time I was six or seven I was deep into the collected works of Erle Stanley Gardner, Louis L’Amour, and Robert A. Heinlein. And that kind of reading would be all that I aspired to until I was about sixteen, at which point certain other vistas opened up for me.

What does all this have to do with Clay Shirky? I’ll explain in a later post.

Friday, July 16, 2010

man in an iFugue

Gary Shteyngart:

“This right here,” said the curly-haired, 20-something Apple Store glam-nerd who sold me my latest iPhone, “is the most important purchase you will ever make in your life.” He looked at me, trying to gauge whether the holiness of this moment had registered as he passed me the Eucharist with two firm, unblemished hands. “For real?” I said, trying to sound like a teenager, trying to mimic what all these devices and social media are trying to do, which is to restore in us the feelings of youth and control.

“For real,” he said. And he was right. The device came out of the box and my world was transformed. I walked outside my book-ridden apartment. The first thing that happened was that New York fell away around me. It disappeared. Poof. The city I had tried to set to the page in three novels and counting, the hideously outmoded boulevardier aspect of noticing societal change in the gray asphalt prism of Manhattan’s eye, noticing how the clothes are draping the leg this season, how backsides are getting smaller above 59th Street and larger east of the Bowery, how the singsong of the city is turning slightly less Albanian on this corner and slightly more Fujianese on this one — all of it, finished. Now, an arrow threads its way up my colorful screen. The taco I hunger for is 1.3 miles away, 32 minutes of walking or 14 minutes if I manage to catch the F train. I follow the arrow taco-ward, staring at my iPhone the way I once glanced at humanity, with interest and anticipation.

Wayfaring

My new collection, Wayfaring: Essays Pleasant and Unpleasant, is now available. Get 'em while they're hot! (Because, you know, what's hotter than a collection of literary-cultural-theological essays? I sure can't think of anything.)

slow reading

I hate it when this happens: I recently sent off the O-Lord-I-hope-it's-finished manuscript of my book on reading to Cynthia Read, my editor at Oxford UP, and just today, in reading this excellent article by Patrick Kingsley, I found several links to books and articles that I did not know about but wish I had. My book has a whole section about the value of reading more slowly, which I'm sure could have been better if I had known about this book (note the sample chapter available on the site). I will probably have a chance to incorporate a reference to Miedema's book, but not much more.
This kind of thing happens all the time, of course; most writers have nightmares about books that come out just before their own volumes do and ruin everything. Nobody wants to review your new book on X because everyone just reviewed that other new book on X. Or your new book on X looks shabbily outdated because that other author discovered something or claims to have discovered something that you didn't know about — or worse, that you knew about but had scholarly or ethical scruples that kept you from exploiting that knowledge. Maybe that other book on X makes yours seem like a frivolous popularization, or, conversely, like the work of a lifeless pedant. And many of these nightmares come true!
Can't be helped. If you are deeply interested in something, it's likely that someone else in the world will be also. You just have to hope that your book doesn't get lost in the shuffle — a particular danger if you're writing on a trendy topic, and my own topic, reading, is getting a great deal of attention these days. I have this sneaking suspicion that by the time my book comes out, sometime in 2011, several other books on the subject will have hit the shelves. Will reviewers say, "Ho-hum, yet another book on reading"? Or might they say — oh please let it be so — "Finally, among all these mediocre books on reading, one that we can whole-heartedly recommend"?

Thursday, July 15, 2010

the bookshelf as memory theater

From a wonderful essay by Nathan Schneider:

What concerns me about the literary apocalypse that everybody now expects—the at least partial elimination of paper books in favor of digital alternatives—is not chiefly the books themselves, but the bookshelf. My fear is for the eclectic, personal collections that we bookish people assemble over the course of our lives, as well as for their grander, public step-siblings. I fear for our memory theaters. . . .

So far, for all the wonders they offer, the digital alternatives to a bookshelf fail to serve its basic purposes. The space of memory and thinking must not be an essentially controlled, homogenous one. Amazon’s Kindle and Apple’s iPad are noxious ruses that must be creatively resisted—not simply because they are electronic but because they propose to commandeer our bookshelves. I will defend the spirit of mine tooth and nail.

You’d do well to read it all.

our de-browsered future

I think Michael Hirschorn may be right:

After 15 years of fruitless experimentation, media companies are realizing that an advertising-supported model is not the way to succeed on the Web and they are, at last, seeking to get consumers to pay for their content. They are operating on the largely correct assumption that people will be more likely to pay for consumer-friendly apps via the iPad, and a multitude of competing devices due out this year, than they are to subscribe to the same old kludgy Web site they have been using freely for years. As a result, media companies will soon be pushing their best and most timely content through their apps instead of their Web sites. Meanwhile, video-content services are finding that they don’t even need to bother with the Web and the browser. Netflix, for one, is well on its way to sending movies and TV shows directly to TV sets, making their customers’ experience virtually indistinguishable from ordering up on-demand shows by remote control. It’s far from a given that this shift will generate the kinds of revenue media companies are used to: for under-30s whelped on free content, the prospect of paying hundreds or thousands of dollars yearly for print, audio, and video (on expensive new devices that require paying AT&T $30 a month) is not going to be an easy sell. . . .

All of this suggests that the era of browser dominance is coming to a close. . . . Years from now, we may look back at these past 15 years as a discrete (and thrillingly indiscreet) chapter in the history of digital media, not the beginning of a new and enlightened dispensation.

And while people will pay for entertainment if they have to, will they pay for news and other substantial information? I have my doubts about that.

Wednesday, July 14, 2010

Acts of Mercy

These extraordinary paintings by an artist previously unknown to me, Frederick Cayley Robinson, once hung in the entrance hall of London's Middlesex Hospital. They are currently on view at the National Gallery. Click on them for larger versions, which I got here. The stillness of the compositions and the subtlety of the palette strike me very forcefully.

algorithmic culture

I’ve written here about my interest in Amazon’s recently implemented “Popular Highlights” feature, which lets Kindle readers know what passages other Kindle readers are taking note of. But Ted Striphas points to a rather worrisome aspect of this technology:

When people read, on a Kindle or elsewhere, there’s context. For example, I may highlight a passage because I find it to be provocative or insightful. By the same token, I may find it to be objectionable, or boring, or grammatically troublesome, or confusing, or…you get the point. When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost. What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.

I’m not against the crowd, and let me add that I’m not even against this type of cultural work per se. I don’t fear the machine. What I do fear, though, is the black box of algorithmic culture. We have virtually no idea of how Amazon’s Popular Highlights algorithm works, let alone who made it. All that information is proprietary, and given Amazon’s penchant for secrecy, the company is unlikely to open up about it anytime soon.

In the old cultural paradigm, you could question authorities about their reasons for selecting particular cultural artifacts as worthy, while dismissing or neglecting others. Not so with algorithmic culture, which wraps abstraction inside of secrecy and sells it back to you as, “the people have spoken.”

Tuesday, July 13, 2010

the Ark

By Rintala Eggertsson, at the Victoria and Albert Museum in London. More pictures here; story here.

creativity in crisis

Well, this does not seem to be good news:

With intelligence, there is a phenomenon called the Flynn effect — each generation, scores go up about 10 points. Enriched environments are making kids smarter. With creativity, a reverse trend has just been identified and is being reported for the first time here: American creativity scores are falling.

Kyung Hee Kim at the College of William & Mary discovered this in May, after analyzing almost 300,000 Torrance scores of children and adults. Kim found creativity scores had been steadily rising, just like IQ scores, until 1990. Since then, creativity scores have consistently inched downward. “It’s very clear, and the decrease is very significant,” Kim says. It is the scores of younger children in America — from kindergarten through sixth grade — for whom the decline is “most serious.”

On the surface, this seems to run counter to Clay Shirky’s thesis that the internet and related technologies are yielding a “cognitive surplus” that allows us greater scope for creativity. It will therefore be interesting to hear how Shirky responds to these findings. Presumably he won't reconsider his thesis; it’s possible that he will find flaws in the research, or in the definition of “creativity” the studies use.

But my bet is that he’ll say something like this: These studies identify a decline in creativity that begins before the digital era, which means that the blame cannot be placed on use of the internet, but rather on the preceding dominant technology, television; therefore, as our attention shifts more and more completely to the interactive media enabled by the internet, the decline in creativity will be arrested and then reversed. I don't think such a response is adequate to the facts on the ground, but I’m guessing this is what we’ll hear from Shirky and other congenital optimists.

I’m finding typing too laborious to give my own response in any detail, but I’m inclined to blame not the internet but rather our culture of managerial parenting, in which children are given almost no opportunity, from toddlerhood through late adolescence, to engage in unstructured play. Which would not be the worst news in the world: it’s more likely that parents learn to back off a bit than that we abandon online life.

Monday, July 12, 2010

whoops

Well, I may be renewing my hiatus for a day or two more: I dislocated a finger this morning playing basketball, and typing is rather painful right now. Also slow. I'll be back soon, though!

Britannica recycled

Saturday, July 10, 2010

a Kierkegaardian thought from Walker Percy

Not only should connoisseurs of Bourbon not read this article, neither should persons preoccupied with the perils of alcoholism, cirrhosis, esophageal hemorrhage, cancer of the palate, and so forth — all real dangers. I, too, deplore these afflictions. But, as between these evils and the aesthetic of Bourbon drinking, that is, the use of Bourbon to warm the heart, to reduce the anomie of the late twentieth century, to cut the cold phlegm of Wednesday afternoons, I choose the aesthetic. What, after all, is the use of not having cancer, cirrhosis, and such, if a man comes home from work every day at five-thirty to the exurbs of Montclair or Memphis and there is the grass growing and the little family looking not quite at him but just past the side of his head, and there’s Cronkite on the tube and the smell of pot roast in the living room, and inside the house and outside in the pretty exurb has settled the noxious particles and the sadness of the old dying Western world, and him thinking: ‘Jesus, is this it? Listening to Cronkite and the grass growing?’

— Walker Percy, “Bourbon”

Friday, July 2, 2010

notification

Friends, I'll be taking next week off. I expect to be back on the 12th or thereabouts. Happy Independence Day to my fellow Americans!

Jonathan Franzen and the family novel

Reading Jonathan Franzen's commendation of Christina Stead’s novel The Man Who Loved Children, I see that it is also an attempt to rescue a particular kind of novel, the realistic family-centered novel, like his own The Corrections, from what he fears is a permanent dismissal. "Haven't we had enough of that?" — and you know, I think we have, for now anyway. Why is Franzen so much less interesting than, say, George Eliot or Trollope? — not, please note, why isn't he as good as they are, for few novelists are, but why is he not as interesting?

I am inclined to think that that kind of novel depends on a certain kind of society, a society with elaborate explicit and implicit rules, and that without the necessity of characters navigating those rules it just isn't worth writing. In our society people can be whatever they want in relation to any other people and in relation to any branch of society, or that's what we think anyway, and so there just aren't enough structures of resistance to make the realistic social-familial novel work. We're all internal, or (again) think we are, and the proper media for that kind of experience are the essay, the memoir, the blog, and maybe the lyric poem. The novel isn't dead, but the kind of novel Franzen wants to commend will not have the kind of resonance he wants it to have until society becomes more formally and thoroughly structured. Which at some point will happen.

Thursday, July 1, 2010

imagined interlocutors

James Sturm's experiment continues.

bodies and minds revisited

Scott Adams:

My wife and I designed our new house as a brain supplement, although we never spoke of it in those words. Every element of the home is designed to reprogram the brains that enter it to feel relaxed in some of its spaces and inspired in others. The language I used at the time of the design was that every space should be an invitation. (I'll talk more on that topic in an upcoming post.) When guests walk through the house for the first time, we can watch the house change people's attitudes and emotions in real time. It's fascinating.

How about your guests’ bodies? Do they have bodies? Do you and your wife have bodies? Do houses have anything to do with bodies, or just brains? Also, you say you see people's attitudes and emotions change "in real time" — is there some other kind of time in which they might change?