Text Patterns - by Alan Jacobs

Thursday, December 30, 2010

posts unwritten, end-of-year edition, part 2

How interesting would it be to have a writer’s every keystroke recorded and played back? Pretty interesting, perhaps, but I don't want it to happen to me.

Though I think Pynchon’s Mason & Dixon is one of the greatest novels of the twentieth century, I somehow never got around to reading his next, Against the Day — but Dale Peck makes me think I should. I could blog my way through it right here. . . .

Carlin Romano worries, intelligently, about whether professors will retain the strength of will to assign whole books, given shortening attention spans. This here professor will, but that’s just one data point. Reading tough books is a challenge, but a rewarding one.

More than a year ago Matthew Battles warned us against un-historical invocations of Gutenberg.
C. W. Anderson has a really cool annotated syllabus for Print Culture 101.

I think Heart of Darkness is a really bad choice for a graphic novel retelling — too much of its power lies in the magnificent narrative voice, e.g.:

I looked at him, lost in astonishment. There he was before me, in motley, as though he had absconded from a troupe of mimes, enthusiastic, fabulous. His very existence was improbable, inexplicable, and altogether bewildering. He was an insoluble problem. It was inconceivable how he had existed, how he had succeeded in getting so far, how he had managed to remain -- why he did not instantly disappear. 'I went a little farther,' he said, 'then still a little farther -- till I had gone so far that I don't know how I'll ever get back. Never mind. Plenty time. I can manage. You take Kurtz away quick -- quick -- I tell you.' The glamour of youth enveloped his parti-coloured rags, his destitution, his loneliness, the essential desolation of his futile wanderings. For months -- for years -- his life hadn't been worth a day's purchase; and there he was gallantly, thoughtlessly alive, to all appearances indestructible solely by the virtue of his few years and of his unreflecting audacity. I was seduced into something like admiration -- like envy. Glamour urged him on, glamour kept him unscathed. He surely wanted nothing from the wilderness but space to breathe in and to push on through. His need was to exist, and to move onwards at the greatest possible risk, and with a maximum of privation. If the absolutely pure, uncalculating, unpractical spirit of adventure had ever ruled a human being, it ruled this bepatched youth. I almost envied him the possession of this modest and clear flame. It seemed to have consumed all thought of self so completely, that even while he was talking to you, you forgot that it was he -- the man before your eyes -- who had gone through these things. I did not envy him his devotion to Kurtz, though. He had not meditated over it. It came to him, and he accepted it with a sort of eager fatalism. I must say that to me it appeared about the most dangerous thing in every way he had come upon so far.

This can't be represented graphically any more than a Picasso can be represented textually.

Wednesday, December 29, 2010

does anything change anything?

Marshall Poe says that “the Internet changes nothing”:

The media experts, however, tell us that there really is something new and transformative about the Internet. It goes under various names, but it amounts to “collaboration.” The Internet makes it much easier for people to do things together. Look, they say, at email discussion lists, community blogs, auction sites, product rating pages, gaming portals, wikis, and file trading services. Collaboration abounds online. That’s a fair point. But “easier” is not new or transformative. There is nothing new about any of the activities that take place on the aforementioned sites. We did them all in the Old World of Old Media. As for transformative, the evidence is thin. The basic institutions of modern society in the developed world—representative democracy, regulated capitalism, the welfare net, cultural liberalism—have not changed much since the introduction of the Internet. The big picture now looks a lot like the big picture then. . . .

Following this logic, let me also affirm that the printing press changed nothing: sure, it made making books easier, but “easier” is not new or transformative. People wrote and read books before the printing press, and they continued to write and read them afterwards. What’s the big deal?

Similarly, the internal combustion engine changed nothing. Before it was invented, we went to Grandma’s house, and even traveled from New York to Chicago — it just took a little longer. And “faster” is not new or transformative, you know.

I could go on for a while. . . . But in all seriousness, Poe makes some good points along the way. He’s just generating page views with an outrageous thesis. I bet he also advocates using federal municipal bonds to forcibly bus known Communists into your homes to kill your puppies!

Tuesday, December 28, 2010

posts unwritten, end-of-year edition

Looking at my Pinboard and Instapaper pages — how I love those tools — I see so many stories I want to blog about but will probably not find time to. There’s no strict reason why there should be a statute of limitations on such things, and there remains a chance that I’ll come back to some of these stories later, but the end of the year just feels like a time for closing the books on some options and turning to others. So let me take note of a few worthwhile pursuits that I didn't manage to . . . pursue:

Robert Darnton offered “Three Jeremiads” about research libraries, concluding with a plea:

Would a Digital Public Library of America solve all the other problems—the inflation of journal prices, the economics of scholarly publishing, the unbalanced budgets of libraries, and the barriers to the careers of young scholars? No. Instead, it would open the way to a general transformation of the landscape in what we now call the information society. Rather than better business plans (not that they don’t matter), we need a new ecology, one based on the public good instead of private gain. This may not be a satisfactory conclusion. It’s not an answer to the problem of sustainability. It’s an appeal to change the system.

Natalie Binder has a really smart series of posts about the powers and limits of Google’s Ngrams. On the same subject, Geoffrey Nunberg is smart, sobering, and sardonic:

It's unlikely that "the whole field" of literary studies—or any other field—will take up these methods, though the data will probably figure in the literature the way observations about origins and etymology do now. But I think Trumpener is quite right to predict that second-rate scholars will use the Google Books corpus to churn out gigabytes of uninformative graphs and insignificant conclusions. But it isn't as if those scholars would be doing more valuable work if they were approaching literature from some other point of view.

This should reassure humanists about the immutably nonscientific status of their fields. Theories of what makes science science come and go, but one constant is that it proceeds by the aggregation of increments great and small, so that even the dullards have something to contribute. As William Whewell, who coined the word "scientist," put it, "Nothing which was done was useless or unessential." Humanists produce reams of work that is precisely that: useless because it's merely adequate. And the humanities resist the standardizations of method that make possible the structured collaborations of science, with the inevitable loss of individual voice. Whatever precedents yesterday's article in Science may establish for the humanities, the 12-author paper won't be one of them.

I can't decide how much of Jaron Lanier’s warning against “The Hazards of Nerd Supremacy” I agree with, if any, but as he always does, here Lanier provokes a great deal of thought.

I may list a few more of these stories-I-didn’t-write-about in the coming days.

opting out, revisited

Regular readers, if I have any regular readers, will know that this is the kind of thing I strongly disagree with:

Overwhelmed by all the noise, some have simply chosen to block it out — to opt out, say, of social networks and microblog platforms like Twitter. Alternatively, others have hewn close to these social networks, counting on them to sort through all the information coming at us.

But to be informed in the distributed world we live in, opting out isn’t really an option. For better or worse, we are watching a C-Span version of our lives trying to fast-forward to the good parts.

I love this almost-always-on connected life, Lord knows I do, but of course opting out is an option even for those who want to be “informed,” at least for now. I could subscribe to and read only print magazines — even just monthly and quarterly magazines — and be fully informed about everything I need to be informed about.

We tell ourselves, by way of self-justification, that we need Twitter, need our RSS feeds, need Facebook. But no, we don't. We just like them very much. And as far as I’m concerned that’s good enough. It’s just necessary always to remember that we’re making choices and could, if we wished, make different ones about how we’re informed and what we’re informed about.

In this light it’s good to be reminded of a passage from John Ruskin’s Modern Painters that I recently quoted on my tumblelog:

To watch the corn grow, and the blossoms set; to draw hard breath over ploughshare or spade; to read, to think, to love, to hope, to pray — these are the things that make men happy; they have always had the power of doing these, they never will have the power to do more. The world’s prosperity or adversity depends upon our knowing and teaching these few things: but upon iron, or glass, or electricity, or steam, in no wise.

Thursday, December 23, 2010

decision time

Here’s a fascinating little essay by Cory Doctorow on . . . well, it’s complicated. He’s explaining why he’s happy with his decision to self-publish his new collection of stories, but he’s using that situation to explore the problem — or the “problem” — of having too much information and too many options:

I'm not sorry I decided to become a publisher. For one thing, it's been incredibly lucrative thus far: I've made more in two days' worth of the experiment than I made off both of my previous short story collections' entire commercial lives (full profit/loss statements will appear as monthly appendices in the book). And I'm learning things about readers' relationship to writers in the 21st century.

But more than ever, I'm realising that the old problem of overcoming constraints to action has been replaced by the new problem of deciding what to do when the constraints fall away. The former world demanded relentless fixity of purpose and quick-handed snatching at opportunity; the new world demands the kind of self-knowledge that comes from quiet, mindful introspection.

That last sentence is great, and worthy of much reflection. When opportunities for acquiring and disseminating knowledge were fewer, we had to act quickly to seize them: who knew when another would come by? But now, with so much we can know and so many ways to get our ideas out into the world, we need to seek time and space to filter through the options. We need, as never before, the virtues of discernment.

There’s something to think about in the holiday season. I’ll be back in a few days. In the meantime, a Merry Christmas to all, and God bless us every one!

Wednesday, December 22, 2010

portability policies

A typically thoughtful and thought-provoking essay by Jonathan Zittrain, emphasizing the need for internet users — especially those reliant on the cloud for storage of their data — to think about portability as much as they think about privacy:

We enjoy access to massive archives of our digital trail in the form of emails, chats, comments, and other bits of personal ephemera, all stored conveniently out in the cloud, ready to be called up or shared in a moment, from wherever we happen to be, on whatever device we choose. The services stowing that data owe a commitment of privacy defined by a specific policy—one that we can review before we commit. Yet if any of the cloud services we use restrict our ability to extract our data, it can become stuck—and we can become locked into those services. The solution there is for such services to offer data portability policies to complement their privacy policies before we begin to patronize them, to help preserve our freedom to choose services over the long term. By dismissing the principle of net neutrality, however, we endanger that ability not just by one cloud service provider but across the board: ISPs can perform deep packet inspection to glean whatever they can about us as we correspond with different sites across the Internet, and our data can become stranded in places as the shifting sands of our ISPs’ access policies constrict access to places they disfavor. Just as international diplomacy depends on the principles of the inviolable embassy, la valise diplomatique, and mutual reciprocity to operate in the ultimate best interests of all involved, so does net neutrality depend on maintaining an online environment that preserves those aspects that made it such a valuable and central part of modern life in the first place.

The analogy to international diplomatic law is especially interesting. My view of cloud storage for my own data seems to change day by day. . . .

Tuesday, December 21, 2010

the inconsistent relativist

Martin Rundkvist writes:

I'm a cultural and aesthetic relativist. This means that I acknowledge no objective standards for the evaluation of works of art. There are no definitive aesthetic judgements, there is only reception history. There is no objective way of deciding whether Elvis Presley is better than Swedish Elvis impersonator Eilert Pilarm. It is possible, and in fact rather common, to prefer Lady Gaga to Johann Sebastian Bach. De gustibus non est disputandum.

And he continues,

Conan the Barbarian, or Aragorn son of Arathorn, or Ronia the Robber's Daughter all represent something of central importance to the heritage sector and to the humanities in general. At the same time, on the one hand they embody something we must always seek to achieve, that is the wondersome excitement of discovering a fantastic past - and on the other something we must avoid if we are to fill any independent purpose at all, as these characters and the worlds they inhabit are fictional. Historical humanities, excepting the aesthetic disciplines, deal with reality. This is our unique competitive selling point that we must never lose sight of.

So here’s my question: if we have no grounds on which to say that one thing is better than another, on what grounds can we say that a particular story or character is “of central importance to the heritage sector and to the humanities in general”? That is, if it’s impossible to make “definitive aesthetic judgments,” what makes it possible to make such definitive judgments about what’s important and what isn't? I’m not sure you can consistently be a relativist about the one and a dogmatist about the other.

Rundkvist, like a lot of people, allows the word “objective” to get him off track. “Universal” is problematic in the same way. Aesthetic judgments, like moral and historical ones, are never made from nowhere and by no one; they’re made by real people in concrete situations, and the needs of both people and situations vary. But such judgments have to be made, and they can be made reliably. It’s really not that hard to make the case that Bach’s music is better than Lady Gaga’s, though there will be situations in which Bach’s music won't be the thing called for. And in much the same way one can make a case for the cultural value and historical importance of pulp fiction, though that importance will be rather different than the kind of importance that attaches itself to Bach's music. We make these kinds of judgments all the time, and only paralyze ourselves when we start invoking terms like "objective" and "universal."

Maybe more about this later. . . .

Monday, December 20, 2010

metadata and our discontents

See that? Huge spike on the word "internet" in . . . 1903. Natalie Binder explains why Google's really bad metadata is going to limit the usefulness of the word-hoard it is assembling.
But hey: it's going to get better.
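For anyone who wants to poke at the data behind such spikes, Google released the raw ngram counts as plain tab-separated files. Here’s a minimal sketch of tallying per-year counts for a single word; the column layout (ngram, year, match_count, page_count, volume_count) is my reading of the format notes that shipped with the 2010 download, so check those notes against whichever version you fetch:

```python
# Minimal sketch: tally per-year counts for one word from a Google
# Books 1-gram file (tab-separated; assumed columns: ngram, year,
# match_count, page_count, volume_count -- verify against the
# format notes that accompany the download).
import csv
import io

def yearly_counts(lines, word):
    """Sum match_count per year for the given 1-gram."""
    counts = {}
    reader = csv.reader(lines, delimiter="\t")
    for row in reader:
        ngram, year, match_count = row[0], int(row[1]), int(row[2])
        if ngram == word:
            counts[year] = counts.get(year, 0) + match_count
    return counts

# Tiny inline stand-in for a downloaded file:
sample = io.StringIO(
    "internet\t1903\t12\t10\t8\n"   # a metadata-error "spike"
    "internet\t1995\t52341\t40000\t9000\n"
    "internets\t1995\t3\t3\t3\n"
)
print(yearly_counts(sample, "internet"))  # → {1903: 12, 1995: 52341}
```

Even this toy example shows the metadata problem: the 1903 row counts toward “internet” exactly as confidently as the 1995 row does, because the file records dates, not whether the dates are right.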

getting started with Ngrams

Ben Schmidt writes the smartest thing I’ve yet seen about Google’s Ngram project:

But for now: it's disconnected from the texts. This severely compromises its usefulness in most humanities applications. I can't track evolutionary language in any subset of books or any sentence/paragraph context; a literary scholar can't separate out pulp fiction from literary presses, much less Henry James from Mark Twain. It was created by linguists, and treats texts fundamentally syntactically--as bags of words linked only by very short-term connections--two or three words. The wider network of connections that happen in texts is missing.

Don't doubt that it's coming, though. My fear right now is that all of the work is proceeding without the expertise that humanists have developed in understanding how to carefully assess our cultural heritage. The current study casually tosses out pronouncements about the changing nature of 'fame' in 'culture' without, at a first skim, at least, acknowledging any gap at all between print culture and the Zeitgeist. I know I've done the same thing sometimes, but I'm trying to be aware of it, at least. An article in Science promising the "Quantitative Analysis of Culture" is several bridges too far.

So is it possible to a) convince humanists they have something to gain by joining these projects; b) convincing the projects that they're better off starting within conversations, not treating this as an opportunity to reboot the entire study of culture? I think so. It's already happening, and the CHNM–Google collaboration is a good chance. I think most scholars see the opportunities in this sort of work as clearly as they see the problems, and this can be a good spur to talk about just what we want to get out of all the new forms of reading coming down the pike. So let's get started.

Yes, yes, yes. Let the traditional humanists stop sneering; let those on the digital frontier shun the language of “reinvention” and avoid suggesting that they have rendered other approaches to the humanities obsolete. (There aren't many that arrogant, but there are a few.) Also, this project does not create a new field. “Let’s get started” is just the right note to strike.

(The article from Science that Schmidt mentions is here.)

UPDATE: If you want to get some thoughts from someone who, unlike yours truly, actually knows what he's talking about, check out Dan Cohen.

Thursday, December 16, 2010

my philosophy of life

Posting will be light to nonexistent for a few days, as I travel to Alabama to visit my family and try to wrap up the end-of-semester . . . stuff, so let me just leave you to meditate on these words of wisdom, words worthy of governing a wise man's life: You have to play the ball where the monkey drops it.

Wednesday, December 15, 2010

paginating, embedding

Two cool posts from if:book: first, a defense of the value of pagination, even in digital texts:

It's important to realise what you're doing when you're scrolling. You're gazing at the line you were reading as you draw it up the screen, to near the top. When it gets to the top, you can continue reading. You do this very quickly, so it doesn't really register as hard work. Except that it changes your behaviour -- because a misfire sucks. A misfire occurs when you scroll too far too rapidly, and the line you were reading disappears off the top of the screen. In this case, you have to scroll in the other direction and try to recognise your line -- but how well do you remember it? Not necessarily by sight, so immediately you have to start reading again, just to find where you were. . . .

Beyond this, even if you have startling accuracy, still you are doing a lot of work, because your eyes must track your current line as it animates across the screen. For sustained reading, this quickly gets physically tiring.

Pagination works for long text, not because it has a real-world analogy to printed books or whatever, but because it maximises your interface: you read the entire screenful of text, then with a single command, you request an entirely new screenful of text. There's very little wastage of attention or effort. You can safely blink as you turn.

A strong argument.

Second, I didn't realize that the Internet Archive has created a cool tool for embedding whole books in webpages. I’m still trying to decide how useful this is, and what its uses might be, but anyway, it is cool.

Tuesday, December 14, 2010

reading personally

In support of Sarah Werner’s thoughtful comment on my previous post, I’d like to cite this wonderful passage from Edward Mendelson’s book The Things That Matter:

Anyone, I think, who reads a novel for pleasure or instruction takes an interest both in the closed fictional world of that novel and in the ways the book provides models or examples of the kinds of life that a reader might or might not choose to live. Most novels of the past two centuries that are still worth reading were written to respond to both of those interests. They were not written to be read objectively or dispassionately, as if by some nonhuman intelligence, and they can be understood most fully if they are interpreted and understood from a personal point of view, not only from historical, thematic, or analytical perspectives. A reader who identifies with the characters in a novel is not reacting in a naïve way that ought to be outgrown or transcended, but is performing one of the central acts of literary understanding.

Can “identifying” with characters, or reading in order to learn more about yourself, be done badly? Of course. But that would be a poor reason for repudiating such reading altogether. Academic criticism can be done badly too. Or so I hear.

Oprah's Dickens

Hillary Kelly at The New Republic is experiencing considerable agita about Oprah’s selection of two Dickens novels for her book club:

But what can Oprah really bring to the table with these books? Oprah has said that, together, the novels will “double your reading pleasure.” But is that even true? And do the novels even complement each other? Can you connect Miss Havisham’s treatment of time to Carton’s misuse of his “youthful promise”? Well, don’t ask Oprah herself, as she “shamefully” admits she has “never read Dickens.” . . .

Even more confusingly, Oprah’s comments about Dickens making for cozy reading in front of a winter fire misinterprets the large-scale social realism of his work. It stands to reason that her sentimentalized view of Dickens might stem from A Christmas Carol — probably his most family-friendly read and one of his most frequently recounted tales. But her quaint view of Victoriana, as she’s expressed it, belies an ignorance of Dickens’s authorial intentions. Indeed, both A Tale of Two Cities and Great Expectations are dark and disturbing, with elaborate ventures into the seedy underbelly of London and the bloody streets of Paris. How can we trust a literary guide who, ignorant of the terrain ahead, promises us it will be light and easy? . . .

Indeed, Oprah’s readers have been left in the dark. They must now scramble about to decipher Dickens’s obscure dialectical styling and his long-lost euphemisms. . . .

And so on — at considerable length. Yes, we certainly can't countenance such a thing — masterpieces of literature being read by naïve people lacking certified professional instruction! Oh, the humanity! What if someone got caught up in Pip’s love for Estella, or Sydney Carton’s noble self-sacrifice, but failed to parse Dickens’s incisive critique of the Victorian social order? Can you imagine the consequences?

Kelly’s core concern is summed up here: “the sad truth is that, with no real guidance, readers cannot grow into lovers of the canon.” Cannot? Actually, that isn't a sad truth — it’s not a truth at all — though it is quite sad that someone thinks the world’s greatest works of art are so powerless to reach an audience without academic assistance. As a corrective to such dark thoughts she should read another Dickens novel, David Copperfield, especially this passage:

My father had left a small collection of books in a little room upstairs, to which I had access (for it adjoined my own) and which nobody else in our house ever troubled. From that blessed little room, Roderick Random, Peregrine Pickle, Humphrey Clinker, Tom Jones, the Vicar of Wakefield, Don Quixote, Gil Blas, and Robinson Crusoe, came out, a glorious host, to keep me company. They kept alive my fancy, and my hope of something beyond that place and time, — they, and the Arabian Nights, and the Tales of the Genii, — and did me no harm; for whatever harm was in some of them was not there for me; I knew nothing of it. . . . It is curious to me how I could ever have consoled myself under my small troubles (which were great troubles to me), by impersonating my favourite characters in them — as I did — and by putting Mr. and Miss Murdstone into all the bad ones — which I did too. I have been Tom Jones (a child's Tom Jones, a harmless creature) for a week together. I have sustained my own idea of Roderick Random for a month at a stretch, I verily believe.

“This was my only and my constant comfort,” David concludes. “When I think of it, the picture always rises in my mind, of a summer evening, the boys at play in the churchyard, and I sitting on my bed, reading as if for life.”

Reading as if for life. And with no teachers in sight. A miracle indeed; but one repeated every day. Oprah is giving many, many people an incentive to have experiences like David Copperfield’s, and by my lights that’s not a bad thing.

Francesco Franchi

For the chart lover, much to love in Francesco Franchi's Flickr photostream. Via Ministry of Type. (Click on the photo for a much larger and awesomer version.)

Monday, December 13, 2010

closed minds

Peter Conrad:

Hillier compares Chesterton to Dr Johnson, whom he physically resembled thanks to his dropsical belly and rolling gait, and whom he often impersonated in pageants. But Johnson's gruff dismissals – of Scotland, of opera, of Sterne's Tristram Shandy, and of anything or anyone who irritated him – were the expression of quirky prejudice; unlike Chesterton, he never pretended to papal infallibility. Johnson prevailed by bullying Boswell, but Chesterton threatened anathema, as when he disposed of the Enlightenment by announcing: "I know of no question that Voltaire asked which St Thomas Aquinas did not ask before him – only St Thomas not only asked, but answered the questions." There speaks a man with a closed mind, a neo-medievalist who abhorred Jews and pined for the return of an agrarian feudal economy in which every man would be allocated "two acres and a cow".

Once someone says that Johnson — a man who by his own admission “talked for victory,” and was labeled "The Great Cham [Khan] of Literature" (by Oliver Goldsmith) for good reason — “never pretended to papal infallibility,” you need to be on your guard when he says anything else. Though Johnson is the incomparably greater writer, he and Chesterton manifested a similarly complex balance of confidence and vulnerability. And only a very closed mind — or a very ill-informed one — could deny that Aquinas did indeed anticipate and respond to the key questions posed against the Deity by Voltaire. One may not agree with the Angelic Doctor’s answers to questions Voltaire and his admirers thought unanswerable, but they are there.

Chesterton in that passage is merely trying to point out that our ancestors did not believe as they did merely out of ignorance. They thought about many of the same issues Whiggishly self-congratulatory late-moderns think about, but often came to different conclusions. And it’s actually rather instructive to discover what those conclusions are, and how they reached them. Aidan Nichols’s Discovering Aquinas is quite helpful in this regard.

Friday, December 10, 2010

information considered and reconsidered

Here's a wonderful little post by James Gleick about the meaning of the word "information," according to the OED. A palace indeed. This reminds me of one of my favorite books, Jeremy Campbell's Grammatical Man: Information, Entropy, Language, and Life — probably the first book I read that suggested serious connections among my own work (the interpretation of texts), cognitive science, and computers. It was this book that told me who Claude Shannon is and why he matters so very, very much to the world we now inhabit.

Grammatical Man is almost thirty years old now and much of the science is clearly outdated, but it's still fascinating, and I wish someone brilliant would tell the same story again, in light of present knowledge. Maybe a job for Steven Johnson?

Thursday, December 9, 2010

adventurousness and its enemies, part 3

Nick Carr's post on Craig Mod's brief for interactive storytelling is more incisive and cogent than mine. Not that that's any great achievement in itself . . . but just read Nick's post.


Susan Orlean has written a beautiful, melancholy post about the challenges of dealing with her mother's physical and mental decline — and having to deal with it from hundreds of miles away. She writes,

Sometimes I’m dazzled by how modern and fabulous we are, and how easy everything can be for us; that’s the gilded glow of technology, and I marvel at it all the time. And then my mom will call, and in the course of the conversation she’ll say something disjointed that disturbs me and reminds me of her frailty, and then she’ll mention that it’s snowing hard in Ohio and I’ll wonder how she’s going to get to the grocery store, and I look at my gadgets and gizmos, and I realize none of them will help me. If anything, they’ve filled me with the unreal idea that everything is possible; that virtual is actual; that you can delete things you don’t like; that you can find and have whatever it is you want whenever you want it; but instead I’m learning that the truest, immutable facts of life are a lot harder and slower and sometimes sadder, and always mystifying.

Please do read the whole little essay, which is touching and true.

The first commenter on the post responds in this way: "Susan, why does your note seem a notch too precious to me? We're all amateurs, but we all muddle through. Perhaps it's the Manhattan lifestyle, but most of us expect to have to do these things, take care of children and parents."

Now, there are answers to this comment. One might note that it's one thing to expect to deal with suffering, another thing altogether to be thrown into the midst of it. Theory and practice, you know. One might note that this comment could be equally directed towards someone who wrote a post about being diagnosed with cancer: "Perhaps it's the Manhattan lifestyle, but most of us expect that we will suffer and die." (So quit your whining.) One might also ask what "the Manhattan lifestyle" has to do with anything.

But nobody is likely to bother, because we all know that a person who writes something like this is one of two kinds of sociopath: the simple kind, who genuinely has no compassion for someone else's pain, or the complex kind, who suppresses any compassion in order to try to hurt someone he doesn't know, just for kicks. Obliviousness or intentional cruelty, those are the options. And in either case a critique is futile: the first kind of sociopath wouldn't understand that there's a problem, and the second would just smirk with satisfaction at having gotten a rise out of someone.

Yes, I know, I come back to this over and over again. But I think it matters. This kind of response is so common in online discourse that it forces, or should force, all of us to ask just what kind of people surround us, just how many of these sociopaths there are, and what variety they tend to be. And why they're like that.

Just after I read the post by Orlean and (unfortunately) allowed my eyes to drift down to the comments, I read at Letters of Note the fifteen-year-old John Updike's commendation of the Little Orphan Annie comic strip. Among other things, Updike wrote:

I admire the magnificent plotting of Annie’s adventures. They are just as adventure strips should be—fast moving, slightly macabre (witness Mr. Am), occasionally humorous, and above all, they show a great deal of the viciousness of human nature. I am very fond of the gossip-in-the-street scenes you frequently use. Contrary to comic-strip tradition, the people are not pleasantly benign, but gossiping, sadistic, and stupid, which is just as it really is.

That about sums it up.

Wednesday, December 8, 2010

The Pleasures of Reading are coming at you! (also in a good way)

Not on Amazon yet, but at OUP's site.

The Age of Anxiety is coming at you! (in a good way)

Pre-order here.

class blogs

On Twitter this morning I asked for thoughts on how best to run a class blog, and replies are coming in. People are reminding me of Mark Sample's excellent post on "blog audits," and are tossing around other ideas too.
When I set up blogs for class I tell students that there are five kinds of participation they can engage in:
    • Offering an interpretation of something we've read;
    • Asking a question about something we've read;
    • Linking to, quoting from, and responding to online articles or essays about what we're reading;
    • Providing contextual information — biographical, historical, whatever — about the authors we're reading and their cultural and intellectual worlds;
    • Commenting on the posts of their fellow students.
    One question I have is whether I should value some of these kinds of post more highly than others, and reward them accordingly. Any thoughts about that? Any other suggestions?

    adventurousness and its enemies, part 2

    It's not just in writing that the social can militate against innovation: it happens in teaching too. Some administrators want teachers to be willing to tweak their assignments, their syllabi, and their use of class time on a weekly, or even daily, basis, in response to student feedback — and then simultaneously insist that they want teachers to be imaginative and innovative.
    But these imperatives are inconsistent with one another, because students tend to be quite conservative in such matters; and the more academically successful they are, the more they will demand the familiar and become agitated by anything unfamiliar and therefore unpredictable. It is possible for a good teacher to manage this agitation, but it's not easy, and it requires you to have the courage of your convictions.
    You get this courage, I think, by being willing to persist in choices that make students uncomfortable. Now, some student discomfort results from pedagogical errors, but some of it is quite salutary; the problem is that you can't usually tell the one from the other until the semester is over — and sometimes not even then. I have made more than my share of boneheaded mistakes in my teaching, but often, over the years, I have had students tell me, "I hated reading that book, but now that I look back on it I'm really glad that you made us read it." Or, "That assignment terrified me because I had never done anything like it, but it turned out to be one of the best things I ever wrote." But if I had been forced to confront, and respond to, and alter my syllabus in light of, in-term opposition to my assignments, I don't know how many of them I would have persisted in. It would have been difficult, that's for sure.
    The belief that constant feedback in the midst of an intellectual project is always, or even usually, good neglects one of the central truths of the life of the mind: that the owl of Minerva flies only, or at least usually, at night.

    Tuesday, December 7, 2010

    adventurousness and its enemies

    Yesterday I wrote that insofar as writing becomes social, it will become less, not more, adventurous. Here’s why: imagine that James Joyce drafts the first episode of Ulysses and posts it online. What sort of feedback will he receive, especially from people who had read his earlier work? Nothing very commendatory, I assure you. By the time he posts the notoriously impenetrable third episode, with its full immersion in the philosophical meditations of a neurotic hyperintellectual near-Jesuit atheist artist-manqué, the few readers who haven't jumped ship already will surely be drawing out, and employing, their long knives. Then how will they handle the introduction of Leopold Bloom, and all the attention given to the inner life of this seemingly unremarkable and coarse-minded man? And, much later, the nightmare-fantasia in Nighttown? It doesn't bear thinking of.

    Would Joyce be able to resist the immense pressures from readers to give them something they recognize? Of course he would; he’s James Joyce. He doesn't give a rip about their incomprehension. (Which is why he wouldn't post drafts online in the first place, but never mind.) But how many other writers could maintain their commitment to experimentation and innovation amidst a cacophony of voices demanding the familiar? — which is, after all, what the great majority of voices always demand.

    Monday, December 6, 2010


    Umberto Eco always makes me think:
    I once had occasion to observe that technology now advances crabwise, i.e. backwards. A century after the wireless telegraph revolutionised communications, the Internet has re-established a telegraph that runs on (telephone) wires. (Analog) video cassettes enabled film buffs to peruse a movie frame by frame, by fast-forwarding and rewinding to lay bare all the secrets of the editing process, but (digital) CDs now only allow us quantum leaps from one chapter to another. High-speed trains take us from Rome to Milan in three hours, but flying there, if you include transfers to and from the airports, takes three and a half hours. So it wouldn’t be extraordinary if politics and communications technologies were to revert to the horse-drawn carriage.

    brave new digital world (number 3,782 in a series)

    Craig Mod on how the digital world changes books:

    The biggest change is not in the form stories take but in the writing process. Digital media changes books by changing the nature of authorship. Stories no longer have to arrive fully actualised. On the simplest level, books can be pushed to e-readers in a Dickensian chapter-by-chapter format — as author Max Barry did with his latest book, Machine Man. Beyond that, authorship becomes a collaboration between writers and readers. Readers can edit and update stories, either passively in comments on blogs or actively via wiki-style interfaces. Authors can gauge reader interest as the story unfolds and decide whether certain narrative threads are worth exploring.

    For better or for worse, live iteration frees authors from their private writing cells; the audience becomes directly engaged with the process. Writers no longer have to hold their breath for years as their work is written, edited and finally published to find out if their book has legs. In turn they can be more brazen and spelunk down literary caves that would have hitherto been too risky. The vast exposure brought by digital media also means that those risky niche topics can find their niche audiences.

    “Dickensian” in the first paragraph above gives away too much of the game; maybe all of it. In the Victorian era, books were pushed to magazines in a chapter-by-chapter format, and indeed, in that era and in all others allowing for serial publication, “Authors [could] gauge reader interest as the story [unfolded] and decide whether certain narrative threads [were] worth exploring.”

    As for readers editing and updating stories, that was certainly a major feature of the publishing world before copyright laws became clear and enforceable: consider, for instance, the unknown writer who altered and continued Don Quixote, pausing only to mock Cervantes for his poverty and his war injuries (Cervantes had lost the use of an arm in the Battle of Lepanto).

    There are many literary activities that digital technology makes easier; I’m not sure there are any that it makes possible. And not all the things it makes easier are worth doing. We need to consider these matters on a case-by-case basis.

    As for the claim that digital technologies will make writers bolder, I think just the reverse is true, but I’ll explain that in a later post.

    Friday, December 3, 2010

    opting out of the monopolies

    At the Technology Liberation Front, Adam Thierer has been reviewing, in installments, Tim Wu’s new book The Master Switch, and has received interesting pushback from Wu. One point of debate has been about the definition of “monopoly”: Wu wants an expansive one, according to which a company can have plenty of competition, and consumers multiple alternatives, and yet that company can still be said to have a monopoly. (Thierer responds here.)

    I think Wu’s definition is problematic and not, ultimately, sustainable, but I see and sympathize with his major point. I can have alternatives to a particular service/product/company, and yet find it almost impossible to escape it because of what I’ve already invested in it. When I read stories like this, or talk to friends who work for small presses, I tell myself that I should never deal with Amazon again — and yet I do, in part because buying stuff from Amazon is so frictionless, but also because I have a significant number of Kindle books now, and all those annotations that I can access on the website. . . . I don't want to lose all that. I can feel my principles slipping away, just as they did when I tried to escape the clutches of Google.

    Amazon is not, technically speaking, a monopoly, and neither is Google. But they have monopoly-like power over me — at least for now. And I need to figure out just how problematic that is, and whether I should opt out of their services, and (if so) how to opt out of them, and what to replace them with. . . . Man, modern life is complicated. These are going to be some of the major moral issues of the coming decades: ones revolving around how to deal with services that have a monopolistic role in a given person’s life. Philip K. Dick saw it all coming. . . .

    Thursday, December 2, 2010

    less than singular

    Cosma Shalizi (click through to the original for important links):

    The Singularity has happened; we call it "the industrial revolution" or "the long nineteenth century". It was over by the close of 1918.

    Exponential yet basically unpredictable growth of technology, rendering long-term extrapolation impossible (even when attempted by geniuses)? Check.

    Massive, profoundly dis-orienting transformation in the life of humanity, extending to our ecology, mentality and social organization? Check.

    Annihilation of the age-old constraints of space and time? Check.

    Embrace of the fusion of humanity and machines? Check.

    Creation of vast, inhuman distributed systems of information-processing, communication and control, "the coldest of all cold monsters"? Check; we call them "the self-regulating market system" and "modern bureaucracies" (public or private), and they treat men and women, even those whose minds and bodies instantiate them, like straw dogs.

    An implacable drive on the part of those networks to expand, to entrain more and more of the world within their own sphere? Check. ("Drive" is the best I can do; words like "agenda" or "purpose" are too anthropomorphic, and fail to acknowledge the radical novelty and strangeness of these assemblages, which are not even intelligent, as we experience intelligence, yet ceaselessly calculating.)

    Why, then, since the Singularity is so plainly, even intrusively, visible in our past, does science fiction persist in placing a pale mirage of it in our future? Perhaps: the owl of Minerva flies at dusk; and we are in the late afternoon, fitfully dreaming of the half-glimpsed events of the day, waiting for the stars to come out.

    Beautifully put. But I would argue that any Singularity that can happen without our noticing, or whose happening is a matter for debate, cannot be the Singularity that its worshippers hope for. The unrecognized eschaton is no eschaton at all.

    Wednesday, December 1, 2010

    Daily Lit

    Here's the place to go if you would like to have books emailed to you in installments, at a frequency you set yourself. At first I strongly disliked this idea: I thought, "But what if I get to the end of an installment and still have the time and the inclination to read further?" On further reflection, though, I remembered that many of the books one might choose to read through this service — the novels of Dickens most notably — came originally on the installment plan. So in one sense this returns fiction to an earlier mode of delivery.
    The pleasures of reading plot-driven books are often increased by such periodicity of encounter. During the Harry Potter Years I took delight in the periods between books when we fans could speculate about what was coming next. Conversely, when I recently read the massive cartoon epic Bone I found it thoroughly mediocre — but then suspected that I would have enjoyed it a good deal more if I had read it in its original installments and therefore had that time for speculation. So maybe the Daily Lit can exploit those pleasures.
    Only for plot-driven books, though. Character studies require something more like immersion in their intellectual environment. Can you imagine trying to read Mrs. Dalloway at the rate of a few pages a week?

    Tuesday, November 30, 2010

    novelty, once more

    There have been some interesting reflections recently on the advantages and disadvantages of the blog as a medium for literary criticism and reflection: see here, here, and here.
    I have mixed feelings on these points. On the one hand, since blogs tend to be personal, non-professional, and unpaid, they ought to be ideal venues for people to reflect on whatever they happen to be reading, whether it's brand-new or only new to them — or not even new to them: over at Tor.com there is a long-running blog series on re-reading Robert Jordan's Wheel of Time series, which I think I have mentioned before in these pages. I'm pretty sure I've also mentioned group reading/blogging projects like Crooked Timber's Miéville Seminar and the Valve's Book Events, like the one on Theory's Empire.
    But there is not nearly enough of this kind of thing online, and I blame, as I have so often blamed in the past, blog architecture itself, with its relentless emphasis on novelty and (relative) brevity. We need to fight against this — I need to fight against it. Why shouldn't I spend a month blogging my way through a big old book? Maybe someday I will. . . .

    Monday, November 29, 2010

    email, we hardly knew ye

    Cringely is sad about the decline and fall of email. Me? Not so much. I like the lightweight minimalism of text/IM/Twitter, and use them when I can in preference to email.
    That said, there's one very important way in which email is superior to those other technologies: it is completely asynchronous. People may send emails hoping for a quick reply, but they generally know better than to expect one. But if you've tweeted recently, people expect quick responses to replies and direct messages, and of course, nothing says "Interrupt me!" like that green light next to your name in someone's IM client. (Whether texting is similarly always-on depends on how old you are, I suppose.)
    I haven't figured out quite how to manage all this, and maybe I never will. Typically I set my IM status to "invisible," but I don't want my friends to do the same — if they did, how would I know when to send them a message? So I fall short of the categorical imperative there. Basically, I am coming to realize, I want a medium of communication which allows me to interrupt friends whenever I want to without ever allowing them to interrupt me. I ain't asking for much.

    Wednesday, November 24, 2010


    If I think of Copia as a standard, everyday way of reading, it seems like a nightmare to me. But if I think of it as a way to conduct a focused, purposeful conversation about a book — social reading and commentary for the classroom, or for scholarly collaboration — it seems like a dream.
    Via the always-provocative Matthew Battles.

    material developments

    So what should e-readers be made of? How about, let’s see — yes: paper.

    This article reports on the use of paper as the substrate for the formation of displays based on the effect of electric fields on the wetting of solids, the so-called electrowetting (EW) effect. One of the main goals of e-paper is to replicate the look-and-feel of actual ink-on-paper. We have, therefore, investigated the use of paper as the perfect substrate for EW devices to accomplish e-paper on paper. The motion of liquids caused by the EW effect was recognized early on by Beni and Hackwood to have very attractive characteristics for display applications: fast response time, operation with low voltage, and low power consumption. More recent EW structures utilize the voltage-induced contact angle (CA) change of an aqueous electrolyte droplet placed on the surface of a hydrophobic fluoropolymer layer and surrounded by oil. Insulating, nonpolar oils (usually alkanes) are used for this purpose because they (unlike water) do not respond directly to the applied electric field. EW technology is used in many applications, including reflective and emissive displays, liquid lenses, liquid-state transistors, and bio/medical assays.

    Seriously? I’m imagining letters re-forming on the page as on the Marauder’s Map.

    Via tech.blorge.

    Monday, November 22, 2010

    university presses

    After reading yet another story this morning about the problems university presses find themselves in, with all-too-brief suggestions about the ways that digital publishing could help rectify these problems, I thought, "I need to write a post on this. After all, scholarly writing is tailor-made, more than any other kind of writing, for digital publication" — and then I remembered that someone has already said all that I might say on this subject.

    reader's report: Jane Smiley

    Well, the recent traveling and busyness may have kept me from posting, but it didn't keep me from reading. Nothing keeps me from reading. So here's what's been going on:

    I've been working my way through Tony Judt's magisterial Postwar, but it's a very large book — exactly the kind of thing the Kindle was made for, by the way — and I've been pausing for other tastes. For instance, I read Jane Smiley's brief and brisk The Man Who Invented the Computer: The Biography of John Atanasoff, Digital Pioneer, the title of which is either misleading or ironic or, probably, both. Smiley begins her narrative with a straightforward claim:

    The inventor of the computer was a thirty-four-year-old associate professor of physics at Iowa State College named John Vincent Atanasoff. There is no doubt that he invented the computer (his claim was affirmed in court in 1978) and there is no doubt that the computer was the most important (though not the most deadly) invention of the twentieth century.

    But by the end of the narrative, less than half of which is about Atanasoff, she writes, more realistically and less definitively,

    The computer I am typing on came to me in a certain way. The seed was planted and its shoot was cultivated by John Vincent Atanasoff and Clifford Berry, but because Iowa State was a land-grant college, it was far from the mainstream. Because the administration at Iowa State did not understand the significance of the machine in the basement of the physics building, John Mauchly was as essential to my computer as Atanasoff was — it was Mauchly who transplanted the shoot from the basement nursery to the luxurious greenhouse of the Moore School. It was Mauchly who in spite of his later testimony was enthusiastic, did know enough to see what Atanasoff had done, was interested enough to pursue it. Other than Clifford Berry and a handful of graduate students, no one else was. Without Mauchly, Atanasoff would have been in the same position as Konrad Zuse and Tommy Flowers — his machine just a rumor or a distant memory.

    Each person named in that paragraph gets a good deal of attention in Smiley’s narrative, along with Alan Turing and John von Neumann, and by the time I finished the book I could only come up with one explanation for Smiley’s title and for her ringing affirmation of Atanasoff’s role as the inventor of the computer: she too teaches at Iowa State, and wants to bring it to the center of a narrative to which it has previously been peripheral. A commendable endeavor, but not one that warrants the book’s title.

    In the end, Smiley’s narrative is a smoothly readable introduction to a vexed question, but it left me wanting a good deal more.

    Friday, November 19, 2010

    busyness continues . . .

    . . . which means that I haven't been able to blog here. I have managed, though, to post some quotes to my tumblelog. A poor substitute, I know, but hey, they're pretty cool quotes.

    Saturday, November 13, 2010

    brief hiatus

    I'll be traveling for a few days, folks, so no new posts until late next week.

    Friday, November 12, 2010

    education as a public good

    In a typically smart column about online education, my friend Reihan Salam quotes Anya Kamenetz:

    The only way to restore the concept of higher education as a public good is to reinvent it as a truly public good: not subject to antiquated notions of scarcity and hierarchical expertise, but adapted to the current reality of free, open, and immediate sharing of knowledge.

    Reihan says, “That sounds right to me,” but I can't say that because I have no idea what it means. I see the same problem here that I saw in Kamenetz’s book: enthusiasm misted over by terminal vagueness. To wit:

    1) We restore something as a public good by reinventing it as a public good? Seems close to tautological, but beyond that: who is “we”? Who is going to go about the task of reinvention? College presidents working at the institutional level? Faculty reconfiguring their classes and intellectual activities whether they have administrative support or not? Congress passing new laws?

    2) About “the current reality of free, open, and immediate sharing of knowledge”: certainly a great deal of knowledge is free, open, and easily shared. On the other hand, a great deal of knowledge is proprietary and controlled by patents, trademarks, copyrights, and various forms of institutional secrecy. Is the proportion of information that is free greater than it used to be? (I have no idea. Free information is more easily accessed than it used to be, but that’s not the same thing.) In any case, how will “reinventing the concept of higher education as a public good” change the current regime of knowledge control and regulation? How could it do so?

    3) What does Kamenetz mean by “hierarchical expertise”? Obviously, expertise itself can't be hierarchical, so she probably (?) means something like, “a system in which people receive official rewards — jobs, promotions, accreditations, certifications, etc. — for demonstrated expertise.” But there’s a lot to be said for such a system. I like being able to choose a doctor by learning, among other things, where she got her medical degree and what board certifications she has earned. Even in Kamenetz’s book the people whom she celebrates for spreading their knowledge are people connected to, drawing funding from, and accredited by elite institutions. Is that a bad thing? Whether it is or not, it ain’t DIY education.

    Frankly, I’d love to see a system — or rather (this is the point) a non-system — in which the circulation of knowledge through informal and fluid networks plays a much greater role than it does now. A non-system which circumvents much of the bureaucratic sclerosis of the modern university, perhaps with the help of universities that are willing to reconfigure themselves. ("Reinvention" is too Utopian a term for me.) It all sounds very cool, in the abstract. I’d just like someone to tell me how we're going to get there.

    Thursday, November 11, 2010

    making connections

    One of Tim Burke’s colleagues is a little concerned about the breadth of interests represented by Tim’s syllabi:

    My colleague suggested to me that I had to be responsible first (and last) to my discipline and my specialization in my teaching, that there was something unseemly about the heavy admixture of literature and popular culture and journalistic reportage and anthropology that populates some of my syllabi. I’ve heard similar sentiments expressed as an overall view of higher education in some recent meetings. At a small liberal-arts college and maybe even at a large research university, this strikes me as substantially off the mark. Or at least we need some faculty who are irresponsible to their disciplines and responsible first to integrating and connecting knowledge.

    Let me repeat that for you: We need some faculty who are irresponsible to their disciplines and responsible first to integrating and connecting knowledge. This is a precise and concise summation of what I’ve tried to do for many years now. There’s a price to be paid for this kind of thing, of course: expanded interests do not yield expanded time. The day’s number of hours remains constant, and then there's the matter of sleep. So the more I explore topics, themes, books, films — whatever — outside the usual boundaries of my official specialization, the less likely it is that I will read every new article, or even every new book, in “my field.” But, to rephrase Tim’s point as a series of questions, Is the unswerving focus on a specifically bounded area of specialization the sine qua non of scholarship? Is it even intrinsic to scholarship? Is there not another model of scholarship whose primary activity is “integrating and connecting knowledge”?

    I think there is such a model, and I think it deserves to be called scholarship, but I’m not going to fight about the point. Call it what you want, it’s what I love to do, and God willing, I’ll be looking for new and interesting connections for the rest of my life. That’s how my mind works, in any event, but it’s also what makes sense given my institutional situation. Tim and I both teach at liberal arts colleges where we are asked to teach a variety of courses; to try to maintain a narrow specialization in such an environment is to set one’s teaching at odds with one’s research. I prefer to seek ways to make my teaching and my research feed each other, and since I can't do that by narrowing the range of courses I teach, I will do it by expanding the range of topics I research and write about.

    And I love it this way. Had I ended up at a big research university, I seriously doubt I would have had the luxury of developing some of the major interests that I’ve pursued in the past decade (e.g., the issues pursued on this blog). And from my point of view, that would be a shame.

    plusses and minuses

    . . . of the iPad as a reading device. The minuses: Terrible screen glare, even indoors. Fingerprints on the screen are a major problem: they're more visible than on the iPhone, especially when reading (because your eyes are on the screen for a long time without a break). It's awfully heavy in comparison to the Kindle, especially the Kindle 3. Also, major distractions are one click away.
    The plusses: the Kindle app is good, especially with two-column layout in landscape orientation. Quite lovely, really. Annotating is much quicker and easier than on the Kindle (though productive of those annoying fingerprints). Also, it's nice sometimes to read without a lamp on.
    Summary judgment: while I'll probably be doing most of my web reading on the iPad, any long-form reading I'll save for the Kindle. And for actual codex books.

    Tuesday, November 9, 2010

    the commodification of intimacy

    This sobering post from Nick Carr suggests that we ought to be worried, or at least seriously reflective, about “web revolutionaries” who are pushing the commercialism and commodification of human intimacy:

    What most characterizes today's web revolutionaries is their rigorously apolitical and ahistorical perspectives — their fear of actually being revolutionary. To them, the technological upheaval of the web ends in a reinforcement of the status quo. There's nothing wrong with that view, I suppose — these are all writers who court business audiences — but their writings do testify to just how far we've come from the idealism of the early days of cyberspace, when online communities were proudly uncommercial and the free exchanges of the web stood in opposition to what John Perry Barlow dismissively termed "the Industrial World." By encouraging us to think of sharing as "collaborative consumption" and of our intellectual capacities as "cognitive surplus," the technologies of the web now look like they will have, as their ultimate legacy, the spread of market forces into the most intimate spheres of human activity.

    I think Nick is right about this — as is Jaron Lanier when he sounds a similar note — and I say that as someone generally enthusiastic about the entrepreneurial possibilities of online culture.

    On some level we all know this commodification of intimacy is happening: no thoughtful person can possibly believe that Mark Zuckerberg’s crusade for “radical transparency” is a genuine Utopian ethic; we know that he’s articulating a position that, if widely accepted, yields maximum revenue for Facebook. But we are just beginning to think about how radically transparent we are becoming, and if Nick Carr is right, we very much need some “web revolutionaries” who really are revolutionary in their repudiation of these trends.

    In other words, the problem isn't the businessmen who want to dig around in our brains — of course the business world wants to dig around in our brains: haven't you seen “Mad Men”? — the problem is the failure of influential wired intellectuals to provide the necessary corrective pushback.

    more changes

    Speaking of personal vacillation and changeableness, remember how I returned my iPad? Yeah, well, I got another one, and basically for one purpose: teaching.

    The iPad, it turns out, is a great tool for teachers. I don't rely heavily on presentation software (Keynote is of course my software of choice, though one of these days I’m going to figure out Beamer): most days I don't use it at all, and when I do use it, it tends to be for just a few minutes, after which I want to return to conversation. If I had to build a massive Keynote deck, I’d want to use my MacBook for that, but for brief (under a dozen slides) presentations, the iPad version of Keynote works great.

    So I create my presentation and then, on my MacBook probably, write or update my class notes. These are always in plain text, and are in folders that are automagically and instantly backed up via Dropbox. Now Dropbox has a very nice iOS client, which means that I can write up those notes on my MacBook, then pick up the iPad and head to class, plug it in, make the Keynote presentation, unplug it, open the Dropbox client and lead discussion from those just-updated text notes.
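    There's no plumbing to speak of here, but for the curious, a minimal sketch of the layout (the folder names and file below are hypothetical; the only real requirement is that the notes live somewhere under the Dropbox folder, so the desktop client backs them up and the iOS app can see them):

```shell
# Hypothetical example: plain-text class notes kept in a Dropbox-synced folder.
# Anything saved under ~/Dropbox is backed up automatically by the desktop
# client and shows up in the Dropbox iOS app with no extra steps.
NOTES="$HOME/Dropbox/teaching/modern-novel"
mkdir -p "$NOTES"

# Write (or update) today's discussion notes as plain text.
cat > "$NOTES/2010-11-09.txt" <<'EOF'
Ulysses, episode 3: Stephen on the strand.
- How does the narration track Stephen's thought?
- Compare the opening pages of episode 1.
EOF

ls "$NOTES"
```

    That's the whole system: no export step, no sync button. Saving the file on the MacBook is what puts it on the iPad.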

    I think this system is going to work for me.

    Monday, November 8, 2010

    changes to the System

    It’s just not in my nature, I guess, to stick with one organizational method for very long. A few months ago, I wrote about my return to Backpack. Well, it only took about two weeks for me to leave Backpack again. There were a couple of reasons. First, I realized that while Backpack works well for me on my Mac — aside from some awkwardness of text entry — it didn’t work so well on my iPhone. The people at 37signals have done almost nothing to accommodate mobile devices, and the third-party Backpack client, Satchel, is usable but awkward.

    And then, second, I discovered something I should have known already: that the app I had abandoned, Notational Velocity, has a fabulous Service: “New Note with Selection.” And even cooler, when you select a passage on a webpage and then invoke the service — which I do with a simple keystroke combination I assigned for the purpose — NV adds the URL also. Fabulous!* And then, because I sync my NV notes with Simplenote, I have these clippings available to me on my iPod — and any other iOS device I might have.** (More about that later.) Once NV can read and sync Simplenote’s tags I’ll be nearing my own private Singularity.

    Also, I have been using Remember the Milk as my to-do list and calendar, largely because of its excellent iPhone app — the iPhone seems to me the natural place to manage to-do’s.

    This has been my system for about three months now, which for me is a long time. Might I even stick with it? Stay tuned. . . .

    * The URL-addition works in Safari and OmniWeb, but not currently in Chrome. Chrome adds the selected text only.

    ** I can't find it now, but just a few days ago I read a blog post about how iOS devices need their own Services menu. This is very, very true.

    Friday, November 5, 2010

    I resemble that remark

    Keith Gessen: I want to move us into life choices. Does anybody regret the profession they have chosen?

    Mark Greif: I have no profession. Whatever profession I do, I regret it.

    Benjamin Kunkel: What do you . . . mean? What are you talking about?

    Mark Greif: I regret it!

    Benjamin Kunkel: What?

    Mark Greif: Whatever it is that I’ve become.

    Keith Gessen: You’ve become a philosopher.

    Mark Greif: No philosopher would think so.

    Keith Gessen: You’ve become an editor.

    Mark Greif: But that’s something to be ashamed of.

    Benjamin Kunkel: An essayist? A critic?

    Mark Greif: Essayist! That’s interesting. You know, you go through life not really knowing who you are, and one day, somebody calls you an essayist. Out of all the pathetic categories that I read growing up, I knew there was no bigger joke than an essayist. Someone who couldn’t write something long enough to actually grab hold of anyone, someone without the imagination to write fiction, someone without the romantic inspiration to write poetry, and someone who would never make any money or be published. I’m an essayist!

    — from n+1’s fabulous What We Should Have Known: Two Discussions.

    Thursday, November 4, 2010

    the book in the browser

    The Booki.sh reader from Inventive Labs on Vimeo.

    Via if:book.

    visual storytelling

    I just finished reading Bone — all 1300 pages of it. It was okay, I guess. The usual cod-Tolkienian stuff, with the slight twist that the hobbits are Pogo characters. Well-enough told, but . . . meh.

    I’ve been reading a number of comics-slash-graphic novels, and too many of them are trying to do in comic form what word-only forms (novels, essays) do better. There’s really no point to a graphic story in which the visual element isn't pulling a heavy load of meaning and mood. Too often you have art that isn't doing a lot except, perhaps, to conceal how drearily familiar the story is.

    But a fine example of art that contributes mightily to the character and power of the story is Asterios Polyp, which would be a fantastic book were it not marred by an utterly ridiculous ending. I get annoyed every time I think about it. But until that ending, the book is a great example of visual storytelling.

    Wednesday, November 3, 2010

    open letters and closed ones

    So far three friends of mine have signed up for letter.ly, and are producing newsletters that I can sign up for. I have very mixed feelings about this. On the one hand, I want to support my friends, and I know that it’s hard to write (or do any other skilled labor) for free. Heck, I’ve even thought about signing up for letter.ly myself.

    But on the other hand, projects like this are more nails in the coffin of the open web, and I don't like seeing that happen, even though it’s probably inevitable. Almost any of the letter.ly newsletters would have been blogs as recently as a few months ago — most of them probably were blogs a few months ago — and thus part of the most public conversational space yet invented. Now they’ll be the property of a select few — which is pretty much how things used to be before the internet. Bill Gates’s famous “Open Letter to Hobbyists” appeared in a computer club newsletter; the great Bill James’s Baseball Abstract began life as a newsletter: people found out about it through ads in The Sporting News. Letter.ly marks an attempt to renew the newsletter as a genre for the digital age. (There are a number of free newsletters out there — e.g., Jason Calacanis’s — but I’m talking about newsletters as a means of revenue for their writers.)

    Then there are experiments like the Times of London’s paywall: results are mixed so far, but even if the Times site ends up making money, a formerly major player has been taken out of the general conversation of the Web. Similarly, as more and more people encounter newspapers and magazines not in web browsers but in purpose-built iPad apps, it may get harder to do the copying, pasting, and commenting that have been intrinsic to the blogging enterprise since its inception.

    But again, if that does happen it will be a return to the Normal of twenty years ago. Then I bought magazines and newspapers individually, and if I wanted to keep items in them I literally cut them out and filed them — rarely did I paste. If now I buy them individually as iPad apps, with in-app purchase of single issues or subscriptions (which is what Wired, among others, wants me to do) then I have largely returned to old habits, though any copying and pasting I do will be digital and there won't be any cutting at all.

    I remember when I had to think hard about how many magazines and newspapers I was subscribing to, and whether I could afford a new one without canceling something else. Maybe I’ll soon be making those decisions again. I understand the necessity of such changes, but I don't have to like them. I especially don't like the thought that I might hurt someone’s feelings by not subscribing to — or, worse, canceling my subscription to — his or her newsletter. And I am deeply uncomfortable with the thought that that One Great Conversation may be breaking up again. It’s starting to look like I’ll soon be nostalgic for those few years when I had a single-payee system — once a month to an ISP — after which the whole world came to my screen.

    Monday, November 1, 2010

    ad (non)sense

    Micah White is upset:

    The vast library that is the internet is flooded with so many advertisements that many people claim not to notice them anymore. Ads line the top and right of the search results page, are displayed next to emails in Gmail, on our favourite blog, and beside reportage of anti-corporate struggles. As evidenced by the tragic reality that most people can't tell the difference between ads and content any more, this commercial barrage is having a cultural impact.

    The danger of allowing an advertising company to control the index of human knowledge is too obvious to ignore. The universal index is the shared heritage of humanity. It ought to be owned by us all. No corporation or nation has the right to privatise the index, commercialise the index, censor what they do not like or auction search ranking to the highest bidder. We have public libraries. We need a public search engine.

    Well . . . if advertising is the problem, then “a public search engine” won't solve the problem, will it? We wouldn't see ads while searching, but we would see them as soon as we arrived at the pages we were searching for. Moreover, if it’s wrong to have ads next to reportage online, then presumably it’s wrong to have ads in the paper version of the Guardian, in magazines, and on television as well.

    What exactly is White asking for? A universal prohibition on internet advertising, brokered by the U.N.? An international tribunal to prosecute Google for unauthorized indexing? Yes, it would have been wonderful, as Robert Darnton has pointed out, if universities and libraries had banded together to do the information-indexing and book-digitizing that Google has done — but they didn’t.

    So here we are, with an unprecedented and astonishing amount of information at our fingertips, and we’re going to complain about ads? — the same ads that give us television, newspapers, and magazines? Please. Why not just come right out and say “I want everything and I want it for free”?

    Google gives us plenty to complain about; I have deeply mixed feelings about the company myself, as I have often articulated. But the presence of online ads ought to be the least of our worries.

    (Update: here's Darnton on the possibility of creating a national digital library.)

    Friday, October 29, 2010


    Jonathan Safran Foer made his forthcoming story Tree of Codes by cutting words and phrases out of Bruno Schulz's story "The Street of Crocodiles." A very three-dimensional object results. Take that, e-readers!

    Thursday, October 28, 2010

    what digital humanities is and isn't

    On Twitter, Tim Carmody says that "Digital Humanities can be a methodology for doing history, but it can also be a methodology in other disciplines too." My first response was to say that DH is not a methodology but a set of questions and concerns in search of a method, but now that I've thought about it, I would revise that: DH is a set of tools which tends to generate certain questions, but neither the tools nor the questions have yet coalesced into a method.

    If there is one overriding theme in the DH conversations I've seen, it's "What should we do with all these fabulous toys?" The answers to that question that I've seen are speculative and even tentative: I think DH is still waiting for someone (someones) to come along with really distinctive, powerful, and useful ways to slice and serve the data we now have available to us.

    (By the way, Tim Carmody is one of the sharpest critics of and thinkers about the brave new world of DH, and someone needs to give him a job — or a fat book contract — so the rest of us can benefit further from a first-class mind.)

    nature as information

    Via Clay Shirky, some really important thoughts from Mike at The Aporetic:

    A woman in a farm kitchen had a LOT to consider – just making a cooking fire took constant attention, and information about the kind and quality of the wood, the specific characteristics of the cook stove, the nature of the thing being cooked.

    The modern cook flips on the burner, and his or her attention, freed up, diverts to other things. She or he has much less information to deal with.

    So what appears to us as “too much information” could just be the freedom from necessity. I don’t have to worry about finding and cutting and storing firewood: I don’t even have to manage a coal furnace. That attention has been freed up for other things. What we see as “too much information” is probably something more like “a surplus of free attention.”

    Read the whole thing: it’s an excellent reminder of the value, density, and richness of what Albert Borgmann calls “natural signs,” a form of “information about reality.”

    Wednesday, October 27, 2010

    amplified authorship

    Chris Meade writes — and please forgive the length of the quotation —

    The amplified author doesn’t wait for a publisher to decide if his or her work deserves a readership or not. Before considering sending a manuscript to a traditional publisher, the writer may have tested out their ideas on a circle of readers via a blog, drawn new readers in through Twitter and a variety of online networks. Acceptance from a quality publisher gives a boost to profile and reputation, but the amplified author doesn’t need to cede control to any one gatekeeper.

    A writer who has one book bought by a conventional publisher might want to self publish the next one, freed from the constraints of editors and marketing departments who have a view on what kind of book they think they can sell most effectively. And this approach can be adopted by writers at all levels, from emerging writers to global bestsellers.

    Amplified authors aren’t prey to vanity presses selling them a pretence of publication; they study the analytics and comments to find out who actually reads their work and what they make of it. Few ‘conventional’ authors make anything like a living wage from the books they publish, yet labour under the belief that they should do. Amplified authors know they don’t need cash up front to put their work into the world, and can develop techniques to expand their readership and market their wares if they wish, buying in design, editorial and promotional skills when they choose. Amplified authors drive their own careers forward.

    Meade paints a pretty rosy picture here, and I think he ought to concede that many people likely to make a success of “amplified authorship” — Seth Godin, for instance — have such hopes because they have built careers via traditional publishing. In the same way that DIY models of education are parasitic on established educational institutions, amplified authorship may, for some time anyway, need traditional publishing to make itself viable.

    Still, I find myself thinking along these lines. On Twitter I’ve been posting a series of Theses for Disputation, mostly about technology, and when I have 96 of them — one more than Martin Luther — I’m thinking of writing commentary on each of them and turning the whole thing into a little book. But should I ask my agent (I have a fantastic agent) to try to sell it to a publisher? Or should I try one of the many varieties of self-publishing, just for the sake of fun and experimentation? Or maybe turn the commentary into a series of letters that people can subscribe to?

    Don't know. But it’s fun to think about.

    Monday, October 25, 2010

    the relative value of innovation

    Steven Johnson’s Where Good Ideas Come From is primarily about innovation — about the circumstances that favor innovation. Thus, for instance, his praise of cities, because cities enable people who are interested in something to have regular encounters with other people who are interested in the same thing. Proximity means stimulation, friction. Iron sharpens iron, as the Bible says.

    All very true, and Johnson makes his case well. But as I read and enjoyed the book, I sometimes found myself asking questions that Johnson doesn't raise. This is not a criticism of his book — given his subject, he had no obligation to raise these questions — but just an indication of what can happen when you take a step back from a book’s core assumptions. So:

    1) Almost all of the innovations Johnson describes are scientific and technological. How many of these are “good” not in the sense of being new and powerful, but in the sense of contributing to general human flourishing? That is, what percentage of genuine innovations would we be better off without?

    2) A related question: Can a society be overly innovative? Is it possible to produce more new ideas, discoveries, and technologies than we can healthily incorporate?

    3) Under what circumstances does a given society need strategies of conservation and preservation more than it needs innovation?

    4) Do the habits of mind (personal and social) that promote innovation consort harmoniously with those that promote conservation and preservation? Can a person, or a society, reconcile these two impulses, or will one dominate at the expense of the other?

    Just wondering.

    Friday, October 22, 2010

    oh, for the good old days

    You know, the good old days when I could safely sneer at people who hadn't read the Officially Approved Books of my social cohort:

    I lived through a time when it was great to read. There were so many books that you just had to read, which would have been read by everyone you knew. Not merely read, though, but digested and discussed. We formed not merely our opinions but ourselves on them. There was a common culture — or, more accurately, a common counter-culture — which included music, art and film. If there was some faddishness in this, and a concomitant homogenisation of taste, there was the palpable upside of having plenty of people with whom to share one's enthusiasms. . . .

    Of course I realise that what we read in Ivy League colleges and at Oxford was not representative of the general population. But the point still stands: within our middle-class, educated world there was a canon, which wasn't limited to Shakespeare, Jane Austen and Scott Fitzgerald. You could assume people had read the hot contemporary books; when they hadn't, it occasioned not merely puzzlement, but disapproval.

    Wednesday, October 20, 2010

    what I would blog about if I had time

    Robert Pippin defends naïve reading. Kathleen Fitzpatrick and Kevin Dettmar dissent, sort of. How I would love to weigh in.

    Tuesday, October 19, 2010

    Steven Johnson and the connected mind

    Folks, I’m still way busy, so posting will continue to be light for a while. I’m hoping at some point to have substantive responses to Steven Johnson’s new book Where Good Ideas Come From, which I read last week. For now, check out Jason B. Jones’s review, and consider one important question, which will take me a while to elaborate.

    Johnson’s great theme is the virtue and power of connectedness — “Fortune favors the connected mind” will end up being the tagline for the book — but he acknowledges that too much connection can be a bad thing:

    The idea, of course, is to strike the right balance between order and chaos. Inspired by the early hype about telecommuting, the advertising agency TBWA/Chiat/Day experimented with a “nonterritorial” office where desks and cubicles were jettisoned, along with the private offices: employees had no fixed location in the office and were encouraged to cluster in new, ad hoc configurations with their colleagues depending on that day’s projects. By all accounts, it was a colossal failure, precisely because it traded excessive order for excessive chaos. . . . Slightly less ambitious open-office plans have grown increasingly unfashionable in recent years, for one compelling reason: people don’t like to work in them. To work in an open office is to work exclusively in public, which turns out to have just as many drawbacks as working entirely in your private lab.

    Elsewhere he argues that “Michelangelo, Brunelleschi, and da Vinci were emerging from a medieval culture that suffered from too much order. If dispersed tribes of hunter-gatherers are the cultural equivalent of a chaotic, gaseous state, a culture where the information is largely passed down by monastic scribes stands at the opposite extreme. A cloister is a solid. By breaking up those information bonds and allowing ideas to circulate more freely through a wider, connected population, the great Italian innovators brought new life to the European mind.”

    This buys too easily into a very familiar but now largely discredited narrative of the Renaissance as emancipation from the Dark Ages, and ignores the massive intellectual contributions of monastic culture, but the general point is surely right: there can be overly ordered, closed, and private intellectual environments, and there can be overly open and chaotic ones. The fact that Johnson celebrates “the connected mind” so strenuously in this book, in chapter after chapter, suggests that he thinks we need more openness. But here’s my question (at last):

    Is that true? Is it really our problem today that we’re not sufficiently connected?

    Monday, October 18, 2010

    unpopular highlights

    Virginia Heffernan:

    Amazon is quick to point out that you can always disable the [Popular Highlights] feature. But there’s a genie-in-the-bottle problem here. As with many things on the Web, once you’ve glimpsed popular highlights, it’s hard to unglimpse them. You get curious about what other readers think, especially with a book like “Freedom,” which bookstore windows and airplane waiting lounges would have you believe everyone is thinking about. Reading, after all, is only superficially solitary; in fact, it’s a form of intensive participation in language and the building of common culture.

    Well . . . I disabled it immediately and have never considered re-enabling it. I am not in the least bit curious about what other people underline. Does that make me arrogant? A misanthrope? Both? . . . Cool.

    Thursday, October 14, 2010

    and while I'm being grumpy

    Let’s fact-check our meditations on the past and future of reading, okay?

    No one knows where all this will end up, but it will be nowhere near as revolutionary as the change from reading scrolls to reading books in the middle ages. The e-reader revolution merely lures the same people to read books in a different format. The move from scrolls to books turned an immobile activity enjoyed by a tiny minority of educated people into a mobile phenomenon that would eventually be enjoyed by all. The unanswered question remains: who will control this revolution in knowledge, them or us? The answer, literally and metaphorically, is in our hands.

    Christians had fully embraced the codex and abandoned the scroll by the end of the second century A.D. (Pagans and Jews would follow soon thereafter.) This is why virtually all surviving Christian literature is in codex form. The scroll disappeared long before the Roman Empire did.