A tradition, however firmly rooted, if it is never watered, though it dies hard, yet in the end it dies. And today a great number — perhaps the majority — of the men and women who handle our affairs, write our books and our newspapers, carry out research, present our plays and our films, speak from our platforms and pulpits — yes, and who educate our young people — have never, even in a lingering traditional memory, undergone the scholastic discipline. Less and less do the children who come to be educated bring any of that tradition with them. We have lost the tools of learning — the axe and the wedge, the hammer and the saw, the chisel and the plane — that were so adaptable to all tasks. Instead of them, we have merely a set of complicated jigs, each of which will do but one task and no more, and in using which eye and hand receive no training, so that no man ever sees the work as a whole or looks to the end of the work. What use is it to pile task on task and prolong the days of labor, if at the close the chief object is left unattained? It is not the fault of the teachers — they work only too hard already. The combined folly of a civilization that has forgotten its own roots is forcing them to shore up the tottering weight of an educational structure that is built upon sand. They are doing for their pupils the work which the pupils themselves ought to do. For the sole true end of education is simply this: to teach men how to learn for themselves; and whatever instruction fails to do this is effort spent in vain.
Thursday, October 29, 2009
- Fleming Rutledge, The Battle for Middle-earth (surprisingly — to me — interesting and convincing)
- Neal Stephenson, Quicksilver
- Chris Wickham, The Inheritance of Rome
- James Gleick, Isaac Newton
- Neil Shubin, Your Inner Fish
- Eric Havelock, Preface to Plato
- Gregory Dix, The Shape of the Liturgy
- Jean Leclercq, The Love of Learning and the Desire for God
- Johan Huizinga, The Autumn of the Middle Ages
- Frances Yates, The Art of Memory
- Mikhail Bakhtin, Rabelais and His World
- Diarmaid MacCulloch, Thomas Cranmer
- Maynard Mack, Alexander Pope
- Roy Porter, Flesh in the Age of Reason
- Walker Percy, The Message in the Bottle
Wednesday, October 28, 2009
Tiff plowed through more than 20 books on the Kindle. At one point in the middle, she read a book on paper (because it wasn’t available on the Kindle) and absolutely hated it. Her commentary was priceless: she couldn’t easily look up word definitions, she couldn’t change the font size, it was awkward and lopsided to hold near the beginning and end, and it would lose her place if she fell asleep while reading.
Friday, October 23, 2009
I tweeted a while back my sense that I should post something about this conversation about e-books and the future of reading, but all I have time for right now is a quote from David Gelernter’s contribution:
I assume that technology will soon start moving in the natural direction: integrating chips into books, not vice versa. I might like to make a book beep when I can’t find it, search its text online, download updates and keep an eye on reviews and discussion. This would all be easily handled by electronics worked into the binding. Such upgraded books acquire some of the bad traits of computer text — but at least, if the circuitry breaks or the battery runs out, I’ve still got a book.
Of course, onscreen text will change and improve. But the physical side of reading depends not on the bad aspects of computer screens but on the brilliance of the traditional book — sheets bound on end, the “codex” — which remains the most brilliant design of the last several thousand years. Technologists have (as usual) decreed its disappearance without bothering to understand it. They make the same mistake clever planners have made for half a century in forecasting the death of cars and their replacement by spiffier technology. The problem is, people like cars.
Gelernter is always interesting, even when he’s off-base — and about this I think he’s sort of off-base. He’s absolutely right that it would be really cool if codexes were augmented by electronic features rather than being replaced by electronic gadgets. But I think he’s probably wrong to “assume” that that’s what’ll happen. We’ll see, of course.
(As an aside, I remember reading about Gelernter’s Lifestreams project years ago — in this article, as it happens — and I wish it had panned out. At this point I fear it never will. Apple’s Time Machine software sort of looks like the Lifestreams model, and I suspect it’s a tip of the hat to Gelernter, but it has a very different function.)
Wednesday, October 21, 2009
Tyler Cowen in the Wilson Quarterly:
Many critics charge that multitasking makes us less efficient. Researchers say that periodically checking your e-mail lowers your cognitive performance level to that of a drunk. If such claims were broadly correct, multitasking would pretty rapidly disappear simply because people would find that it didn’t make sense to do it. Multitasking is flourishing, and so are we.
Right, because human beings don't ever do things that don't make sense. We’re rational actors through and through. Addictive online behavior a problem? Impossible. The power of the variable-interval reinforcement schedule of email? Hogwash.
All of which means that any study which says that we engage in unproductive or damaging behavior can simply be dismissed out of hand. “Multitasking is flourishing, and so are we” — the 21st-century version of “Every day in every way I am getting better and better.”
From a working library:
The first definition is the most familiar: one who reads, or one who is fond of reading. A young girl tucked under a tree with a book in hand; an old man waiting for the bus, nose pressed into the spine; three little boys sitting on the curb sharing a newspaper, ink smudged on their knees.
The second definition harks back to the single-room schoolhouse: an anthology of texts used for teaching. Here the term passes from the person doing the reading to the object being read, from reading for its own sake to reading with intent. The image of reading remains, but it becomes focused, purposeful; it becomes work.
The third definition shifts from the object to the machine: a device for reading data. No longer human, the reader becomes mechanical, the texts reduced to ones and zeros. There are no stories, only limitless information, each digit as insignificant as the next.
Monday, October 19, 2009
In the world of Legos, what I did discover is that my kids were taking these beautiful, gorgeous, incredibly restrictive predetermined Legos Star Wars play sets — and yeah, they really wanted it to be put together just the way the box showed it. I don't think it occurred to them you'd want to do anything else with it. But inevitably, over time, the things kind of crumble and get destroyed and fall apart and then, once they do, the kids take all those pieces, and they create these bizarre, freak hybrids — of pirates and Indians and Star Wars and Spider-Man. Lego-things all getting mashed up together into this post-modern Lego stew. They figure out a way, despite the best efforts of corporate retail marketing.
From Susan Hill’s new book Howards End Is on the Landing:
It began like this. I went to the shelves on the landing to look for a book I knew was there. It was not. But plenty of others were and among them I noticed at least a dozen I realised I had never read.
I pursued the elusive book through several rooms and did not find it in any of them, but each time I did find at least a dozen, perhaps two dozen, perhaps two hundred, that I had never read.
And then I picked out a book I had read but had forgotten I owned. And another and another. After that came the books I had read, knew I owned and realised that I wanted to read again.
I found the book I was looking for in the end, but by then it had become far more than a book. It marked the start of a journey through my own library.
I so want to do this.
Friday, October 16, 2009
In response to my recent post on the decline and fall of myth, I got a fascinating email from Matt Sterenberg, a historian currently teaching at Northwestern University. With his permission I’m posting it here.
A couple years ago I wrote my dissertation on a related topic, namely, mythic thinking in twentieth-century Britain. . . . In the dissertation, I approach the profusion of mythic thinking in twentieth-century Britain as, generally speaking, a response to what the ever-perceptive Auden called “the modern problem” of:
...living in a society in which men are no longer supported by tradition without being aware of it, and in which, therefore, every individual who wishes to bring order and coherence into the stream of sensations, emotions, and ideas entering his consciousness, from without and within, is forced to do deliberately for himself what in previous ages has been done for him by family, custom, church, and state, namely the choice of the principles and presuppositions in terms of which he can make sense of his experience.
I agree with your assertion that hardly anyone in the humanities talks about myth and folktale these days. But in researching myth and literary criticism in Britain, I was surprised by how long myth held the interest of literary critics — into the 1970s in some cases. The interesting thing is that interest in myth among literary critics began to peter out just as theory from the continent began to trickle in. “Minding the myth-kitty,” as Frank Kermode put it, was big business and for a brief while looked like the wave of the future... until literary critics realized that continental theory might serve as a better foundation for their discipline.
. . . I think the rise and fall of interest in myth among literary critics in postwar Britain can in large part be explained in terms of disciplinary struggles within an expanding university system. Lots of academics began to realize that ‘myth’ was a potent rhetorical weapon that could be used in disciplinary struggles within the university. Literary critics were desperate to stake a claim for their emerging discipline in the context of an expanding university system in which the sciences were ascendant. They could not plausibly associate their discipline with the authority of science. Nevertheless, they were still in need of a justification for their work and in their search for one they turned to myth. The “myth-kitty-minding” literary critics used myth to construct cultural authority for their discipline by positioning themselves as the authorized interpreters of the mythic significance of literature, and by claiming they were uniquely equipped to elucidate that significance and therefore give access to truths that were somehow more real, and more relevant, than the deliverances of science. But when theory arrived on the scene, I think many, if not most, decided that it was a better wagon to hitch their horses to.
But Google is not digitizing these books so it can sell copies of them. They are out of print for a reason. There is no market for them as whole books. Their value lies in cutting them up into snippets and relevant excerpts, and showing those snippets along with search ads to people looking for related information. The reason they are valuable to Google is because they are a rich source of high quality information that will improve its search results, and in fact give them an information advantage over other search engines without equal access to the world’s books.
Exactly. I have mixed feelings about this, because I can see the great value — to me, as a scholar and writer — of more easily finding “snippets and relevant excerpts.” That’s the kind of thing Google already does very well, so much so that we will soon have whole generations of researchers, academics, and other intellectuals who don't even remember the needle-in-a-haystack experience that looking for data used to be.
Of course, it should also be noted that the sources for some kinds of research — like the kind that Keith Thomas did for his new book The Ends of Life — are unlikely to be digitized anytime soon. Maybe never. And one wonders whether in a scanned-and-digitized world the immense patience that Thomas has exhibited will become even rarer than it is now.
Thursday, October 15, 2009
Okay, so, long post here. Stop tweeting and pay attention. Jessica Vascellaro has an essay in the WSJ in which she says,
Email has had a good run as king of communications. But its reign is over.
In its place, a new generation of services is starting to take hold—services like Twitter and Facebook and countless others vying for a piece of the new world. And just as email did more than a decade ago, this shift promises to profoundly rewrite the way we communicate—in ways we can only begin to imagine.
We all still use email, of course. But email was better suited to the way we used to use the Internet—logging off and on, checking our messages in bursts. Now, we are always connected, whether we are sitting at a desk or on a mobile phone. The always-on connection, in turn, has created a host of new ways to communicate that are much faster than email, and more fun.
And then she goes on to do the usual thing, which is to say, in effect, “this new technology speeds everything up and increases our connectivity, and that’s good, but what are we giving up? What are we losing? Whatever happened to meaningful in-person face-to-face human-to-human communication?”
Lev Grossman’s essay in Time about Google Wave hits many of the same notes:
Google Wave is, in short, a remarkably full-featured collaboration and communication tool, powerful enough for enterprise customers and easy enough for civilians. It's also a warning shot across the bow of pretty much every software company anywhere. It's amazing how many people's grills Google is getting up into with this single product. It's real time like AIM and Twitter (and it can talk to Twitter by importing and exporting tweets). It's social and shares media, like Facebook. Anybody who makes an e-mail client or collaboration software should be paying attention to Wave. This is vintage Google: give away a product that does stuff your competitors charge money for, thereby burnishing your public image and, at the same time, sapping your competitors' will to live.
But Wave isn't actually an e-mail killer. In practice, it's more like an insanely rich IM client. E-mail is asynchronous; you can wait an hour or (if you are, like me, a bad person) a week to answer it. But because Wave operates in real time, it demands immediate attention like an IM or a phone call or, for that matter, a crying baby. When Wave is up, it's hard to focus on anything else. That isn't a defect, but it does narrow the scope of its usefulness. Getting more information right away isn't always the most efficient way to work.
This is how these essays usually go: this is really cool, but is it tethering us more closely to our computers? (Interestingly, Wave doesn't seem, at the moment, to be reckoning with the way more and more people are using smartphones to connect to the world.)
Nicholas Carr is refreshingly unambiguous on these points:
The flaw of synchronous communication has been repackaged as the boon of realtime communication. Asynchrony, once our friend, is now our enemy. The transaction costs of interpersonal communication have fallen below zero: It costs more to leave the stream than to stay in it. The approaching Wave promises us the best of both worlds: the realtime immediacy of the phone call with the easy broadcasting capacity of email. Which is also, as we'll no doubt come to discover, the worst of both worlds. Welcome to the conference call that never ends. Welcome to Wave hell.
In this particular case I’m with Carr. I’ve only been playing around with Wave for a week or so, but I don't like the demands it makes — or will make, once enough people are using it to make it worthwhile. (Right now it’s like Union Station at 3 A.M.)
Why do I like Twitter and despise Facebook? Because Facebook is symmetrical — if you friend me, I friend you — while Twitter is asymmetrical — I can follow you, but you don't have to follow me. Why do I like email better than the telephone or IM or Wave? Because it’s asynchronous: I catch up on email when I can, not when you write, and I expect you to do the same. I can't do my work unless I have long periods away from the computer and the iPhone. Asynchrony is my friend. My best friend. My BFF.
Wednesday, October 14, 2009
Grafton shows that in the republic’s early centuries the bracketing of religious differences tended to confuse those who did not understand, or did not follow, the community’s distinctive practices. When Isaac Casaubon failed to employ his vast knowledge of Scripture and the Church Fathers to refute Catholicism, many observers assumed that this meant he was sympathetic to the Catholic cause and ripe for conversion. They could not understand that he was simply trying to assess the historical evidence fairly, which in his case meant that he could not fully sympathize with a French Catholicism that was increasingly Ultramontane or with the hard-line Calvinists within whose orbit he was educated. His loyalties to the Republic of Letters would not allow him to place his learning at the service of partisanship.

This refusal tended to make life difficult for Casaubon, and eventually he left France for England. He did not find England’s communities all that they should have been, but while at Oxford he did become fascinated with the recently opened Bodleian Library and, Grafton explains, was especially pleased that the books in the library did not circulate. “The library is open for scholars seven or eight hours a day,” he wrote to a friend in France. “You would see many scholars there, eagerly enjoying the feasts spread before them. This gave me no little pleasure.”

Three hundred and fifty years later, a scholar sat in that same library — Duke Humfrey’s Library, as the oldest part of the Bodleian had come to be called — and over a period of several years read every volume from the sixteenth century that the library contained. Eventually he wrote a book about what he had read, a book supposedly about the nondramatic literature of that period but, in fact, a sweeping intellectual history of the whole century. He managed the extraordinary feat of admiring and celebrating — within the limits set by scholarly honesty — some of the great enemies of that period, notably Sir Thomas More and William Tyndale, who were, he argued, far closer to each other in theology and ethics than they had been able to discern.

The scholar’s name was C.S. Lewis, and Isaac Casaubon would have loved both his learning and his charity. Just after finishing that book, Lewis was named to a chair at Cambridge University, and in his inaugural address he referred to himself as one of the last examples of Old Western Man — a “dinosaur,” he said, and, we may add, remarkably like the other dinosaurs that roam Pedantic Park. I would say “May their tribe increase,” but that seems unlikely, as I think Anthony Grafton would agree. May it at least not die out.
Tuesday, October 13, 2009
I don't think this post by Alex Reid is on the right track:
Close reading, if you don't know, comes out of the 30s and 40s with New Criticism as a kind of scientific method for literary analysis. It manages to survive the postmodern shift into theory and cultural studies, so that today we continue to advocate "close reading" without perhaps meaning the specific practice the New Critics called for. Btw, I think this is largely the case whether one is in a literature or composition classroom. Needless to say, while literary interpretation suggests a wide degree of openness in the meanings a reader might uncover in a text, close reading serves as a significant limitation on practices of reading and interpretation, and the compositions that might result.
Arguably, close reading is a practice predicated on a scarcity of texts. It's time consuming. Indeed, close reading might be said to follow upon a self-imposed, selective scarcity: the literary canon. Now, of course, we have an explosion of media. Furthermore, the discipline has departed from the selectivity of the canon. In short, there are more texts than ever to study. Yet we continue to cling to close reading because, I think, we have confused method with objective. That is, we have come to a point where we might say that the objective of English Studies is to conduct close readings of texts. There appears to be a sense that intellectual work, at least in the humanities, can only function through close readings, that critical thinking requires close readings, and that other cultural-textual practices are anti-intellectual. Now, let me say that there's nothing "intrinsically" wrong with close reading. It is just simply a limited methodology that literally and explicitly closes reading and, indirectly, the composition practices that we insist must follow upon it.
Reid is not sure what he wants to replace close reading with, but knows he wants to “examine extant writing practices and approach the development of new compositional practices in an open, experimental way.”
Things start going wrong for this post at the outset, when Reid simply identifies close reading with the theories of the New Critics. The New Critics placed a particular kind of close reading at the center of their theories and their pedagogy, but they didn't invent it — what does he think Erasmus, or Augustine for that matter, was doing with the Psalms? — and it lived on — in the work of Jacques Derrida, for instance — long after the New Criticism became a byword for superannuated stodginess.
To repudiate the New Criticism is fine, but to repudiate close reading tout court is to abandon the one essential practice of all study of literature and writing: disciplined attentiveness. Close reading is a necessary — I would even say the necessary — skill in literary study (and in the reading component of composition classes) for one overwhelming reason: it teaches people that easy judgments made on the basis of superficial acquaintance with a text are worthless. What you (or I!) have to say about a text isn't worth hearing unless it is demonstrably based on thorough, attentive, careful reading — close reading.
Reid says that “Open composition, in the absence of close reading, is the situation of the text in an open field of networks and contexts.” This could mean almost anything, of course — he’s working at a level of nearly absolute abstraction — but I would argue that you can’t “situate” anything meaningfully and usefully unless you have a detailed, even a minute, understanding of how it works. College students don’t need much help in making broad connective generalizations — that’s their daily bread — but they do need to learn the hard work of testing those generalizations against what texts (or images) actually say and do. That’s what the discipline of close reading is “predicated on” — not a scarcity of texts. Blake was right when he said that “To Generalize is to be an Idiot.”
Monday, October 12, 2009
The book is an incomparable portrait of the writing life of Dickens. Cumulatively, it is profoundly moving, chronicling the constant restless interaction between the life and the work. Slater quotes to immensely touching effect the account by Forster, Dickens's best friend and first biographer, of a day trip up river, undertaken to furnish him with material for a chapter he needed to write for Great Expectations: "he seemed to have no care, all of that summer day, except to enjoy [his friends' and family's] enjoyment and entertain them with his own in the shape of a thousand whims and fancies; but his sleepless observation was at work all the time, and nothing had escaped his keen vision on either side of the river."
Friday, October 9, 2009
The Ends of Life is one of the most enjoyable, provocative, and instructive works of historical scholarship I have ever read. It is a work I will return to again and again, and I doubt that I will ever exhaust its riches — even though its historical narrative occupies fewer than three hundred pages (followed by a hundred and fifty pages of notes). Keith Thomas has provided as rich and compelling a picture of what early modern people lived for — what they believed gave meaning to their existence — as we could ever hope to have. And few if any historical subjects could be more worthy of our attention.
I wonder if this ever happens to anyone else: surprisingly often, when I read about a new piece of software, I can't figure out what the software actually does, or is supposed to do. I suppose that means that I am not in the target audience for the app, but I’m not the target audience for many apps whose purpose I understand perfectly well. Mathematica, for instance, or for that matter Photoshop.
But take Bento. What the heck is Bento for? It’s a “personal database.” Okay . . . but what does that do? Turns out that with Bento I can “display [my] contacts and calendars in new and exciting ways” — I can't even imagine what that means or why I would want to do it — or “organize contacts, clubs and mailing lists” — what do you mean by organize? — or “track projects, tasks, and deadlines” — so it’s a project manager? — or “manage students, classes, and lecture notes” — what does “manage” mean in this context? — or “store recipes and shopping lists” — wait, why would I want to “store” my shopping lists? And do I need a new application for that? Don't I already have apps on my computer that do all these things quite well? (And I downloaded and tried Bento when it first came out without resolving my confusion.)
I feel the same way about a new online service called AcaWiki, which is “Increasing the Impact of Research Using Web 2.0.” The idea seems to be that you post summaries of academic articles and then have discussions about them. Well . . . okay. But that seems like a great deal of work for a very small reward — at least until there are many thousands of articles that have been summarized. And even then, aren't there already many thriving listservs and online discussion groups for people in every academic field? I just can't see what this is adding to the party.
Note that I am not saying, in either case, that I’m confident that the software is useless — rather, that I can't seem to discern what the utility is supposed to be. It’s, frankly, rather disturbing how often I find myself puzzled in this way. . . .
Monday, October 5, 2009
My English 215 students are beginning serious work on their first essays now. Most of them will write on the Iliad or the Odyssey, though a few will pursue Aeschylus’s Oresteia. As I talk with them about their ideas, I am reminded of something that I’m reminded of every year at this time: that students are primarily interested in writing about characters — which is good; that they very much want to decide whether a given character is “positive” or “negative” — which is simplistic; and that they are absolutely fascinated by the issue of motivation — which is the subject of this blog post.
I would say that well over half my students, when they think about fictional characters, focus mainly on what they think motivates those characters: Why do they do what they do? Sometimes I try to suggest that Homer may not even have had a conception of motive — there’s no evidence in the poems, as far as I can see, that he thinks that way — and that in any case motivation can't be nearly as important in the archaic Greek context as it is in a culture shaped by Christianity’s interest in the inner person. Odysseus, when disguised as a beggar, tries to decide whether to kill that obnoxious braggart Iros with a single blow or give him a “light tap” that just breaks his jaw, but that’s simply a practical internal debate: he never wonders who he really is, after the fashion of the seventh chapter of Paul’s letter to the Romans.
And then there’s the problem of treating fictional characters as though they were real people, people who have traits that the authors never wrote about or, perhaps, even imagined. . . .
But I digress. I think students (people in general) are concerned about motives because they believe that if they know someone’s motives, they know whether that person is a “positive” or “negative” character. But as Rebecca West once said, “There’s no such thing as an unmixed motive.” We all act out of complex and often contradictory motives, and all of us are mysteries to ourselves. Why should we think we can scope out the motives of other people when we’re largely in the dark about our own?
(This topic is one of recurrent interest to me: I wrote about it in relation to political figures here.)
Saturday, October 3, 2009
This essay by Lewis Hyde is typical of almost everything I’m reading about the Google Book Settlement. Here’s the usual structure:
[The GBS] does free the orphans from copyright limbo, but here’s the catch: They will effectively belong only to Google and the other settling parties. It will be almost impossible for any other online player to get the same right to use them. The only way a potential competitor could avoid the threat of statutory damages would be to do what Google did: scan lots of books, attract plaintiffs willing to form a class with an “opt out” feature, negotiate a settlement and get it approved by a judge. Even for those with time and money to spare, that promises to be an insurmountable barrier to entry.
Thus does the settlement portend Google’s unlimited dominion over electronic books. By aggregating the monopoly power latent in each orphan, the proposed agreement doesn’t just get the Brats to work on Google’s farm; it secures for Google a lasting monopoly in this newest of book trades. Talk about making hay!
Okay, great. But what alternative do you propose? That’s what’s usually lacking in these laments. If Google does not get to distribute these books, then they go back into “copyright limbo” and are not accessible to anyone. Is that a better situation than one in which Google makes a zillion bucks and has an effective copyright on the books?
I don't think it is. But I also don't like Google having more power than it already has. My recommendation — which I admit is not going to happen — is that we set aside the GBS until we go back and shorten our ridiculously over-extended copyright period. In other words, I agree with James Boyle. As he wrote in a column for the Financial Times,
I agree with a lot of the criticisms. Privacy protections could be improved, the monopoly point is a real one and the rights of libraries should be expanded. Some of those points might be fixed before the agreement is ratified. Others may need subsequent scrutiny by privacy and antitrust regulators. Google has responded, persuasively, that many of the problems could be resolved if only we had a rational copyright law in the first place with a safe harbor for the use of orphan works. The criticisms continue.

What if the critics prevail and no settlement can be reached? I would prefer us to fix copyright law so these issues disappear. But if we cannot do that, we need a second best solution. Google’s escape module has flaws, lots of them, but it is better than staying in the black hole.
Friday, October 2, 2009
For the past couple of years I have been working on the most challenging — and maybe the most fascinating — project of my scholarly life: a critical edition of W. H. Auden’s immensely difficult long poem The Age of Anxiety. A few years ago Princeton University Press published Arthur Kirsch’s critical edition of the same poet’s The Sea and the Mirror, and that book did pretty well; thus my assignment. The gifted and resourceful Nick Jenkins is also working on an edition, in the same series, of The Double Man.
All of us have had the great pleasure of working with The Literary Executor Than Whom No Greater Can Be Conceived, Edward Mendelson. Edward was just in his mid-twenties when Auden asked him to oversee his literary estate, and has been hard at it ever since. In addition to his own brilliant critical work on Auden — Early Auden and Later Auden are the major texts — he has been for many years editing the poet’s complete works. (I’ve written often about these labors of Edward’s: see, for example, here and here.)
And in the midst of all that Edward has been immensely — staggeringly — helpful to others of us working in the same vineyard. Here’s just one instance among many: a couple of months ago I sent him a complete draft of my Age of Anxiety edition, and after reading it he thought there might be some significant background material I had overlooked. So he went to the magnificent Berg Collection of the New York Public Library — which holds vast tracts of Auden material — and spent all day looking for documents that would make my edition better. Sure enough, I had overlooked some valuable documents, so I’ll be visiting the Berg soon to incorporate their content into my edition’s notes. But I never even would have known what I was missing had it not been for Edward’s acute eye, excellent memory, and — above all — his willingness to take time away from his own work to make my work better.
When I think what my scholarly life would have been like if I had had Stephen Joyce or Valerie Eliot to deal with, I shudder. But instead I have been blessed and honored to work with Edward Mendelson. Thanks, Edward!
In 1957, when he was 69 years old, T. S. Eliot married 32-year-old Valerie Fletcher. When he died in 1965 she took charge of his literary estate and has controlled it ever since, with — from the scholar’s point of view — uneven results. When Peter Ackroyd was writing his biography of Eliot — which eventually appeared in 1984 — Mrs. Eliot first gave him free access to Eliot’s letters and papers, but then denied him permission to quote from them. He had to re-write his biography to remove the quotations.

In 1988 she published the first volume of his collected letters, which covered the period through 1922 — after the publication of The Waste Land but before his conversion to Christianity — and promised that the second volume would come out the following year. Two decades later, we’re still waiting. A story published last March claimed that the long-awaited letters would appear this November, and, you know, it just might happen. But I’m not holding my breath.