Text Patterns - by Alan Jacobs

Thursday, June 30, 2016

On Sleep

For the last month, almost every night, I have listened to Max Richter’s Sleep. I have some things to say about it:

  • It amounts to more than eight hours of music.
  • It comprises 31 sections, ranging in length from 2:46 to 33:46. Only seven of the sections are shorter than ten minutes.
  • The music is made by voices, strings, and keyboard instruments (some of which are electronic).
  • I think I have listened to it all, but I am not sure. I have played it mostly in bed, though sometimes at my computer as I write. In bed I have drifted in and out of sleep while listening. I think I have listened to some sections several times, others no more than once, but I cannot be sure.
  • Sleep is dominated by three musical themes, one played on the piano, one played on the violin, and one sung by a soprano voice. (Though other music happens also.) One way to characterize Sleep is as a series of themes with variations.
  • The piano theme is the most restful, mimicking most closely the rhythms of the body breathing; the violin melody is the most beautiful; the vocal melody is the most haunting. (Also, when it appears when I am sleeping, or near sleep, it wakes me up.)
  • I could tell you which of the sections presents the violin melody most fully and most gorgeously, but then you might listen to that section on its own rather than in its context. I do not wish to encourage shortcuts in this matter.
  • It is said that the music of Arvo Pärt is especially consoling to the dying; I think this may prove true of Sleep as well. There is a very good chance that, should I die slowly, I will listen to Sleep regularly, perhaps even exclusively.
  • Sleep is the half-brother of death.
  • The number three plays a large role in these pieces: the time signatures vary a good deal, but a good many of them come in units of three. Also, at least one section — maybe more; it’s so hard to be sure — features a bell-like tone that rings every thirteen beats.
  • If you have a very good pair of headphones, that's how you should listen to this music. If you're listening on, for instance, Apple's earbuds, you'll miss a great deal of wonderful stuff going on in the lower registers. 
  • The musical materials of Sleep are deceptively simple: Richter is not by the standards of contemporary music tonally adventurous, yet he manages to create a remarkable variety of treatments of his simple themes. The power of the music grows with repetition, with variation, with further repetition. This is yet another reason why sampling this composition will not yield an experience adequate to its design.
  • Since I started listening to Sleep I have thought a good deal about sleep and what happens within it. As Joyce insisted in Finnegans Wake and in his comments on the book when it was still known as Work in Progress, we have no direct access to the world of sleep. All we have is our memories of dreams, and these may well be deeply misleading: “mummery,” Joyce says, “maimeries.” And even dreams are not sleep tout court. A third of our lives is effectively inaccessible to us.
  • Listening to Sleep is, I think, one of the most important aesthetic experiences of my life, but I do not have any categories with which to explain why — either to you or to myself.

Wednesday, June 29, 2016

Mendelson's undead

I want to devote several posts, in the coming days, to this essay by Edward Mendelson. I should begin by saying that Edward is a good friend of mine and someone for whom I have the deepest respect — which will not keep me from disagreeing with him sometimes. It’s also important to note that his position in relation to current communications technologies can’t be easily categorized: in addition to being the Lionel Trilling Professor of the Humanities at Columbia University and the literary executor of the poet W. H. Auden, he has been a contributing editor for PC Magazine since 1988 (!), writing there most recently about the brand-new file system of the upcoming macOS Sierra. He also does stuff like this in his spare time. (I’m going to call him “Mendelson” in what follows for professionalism’s sake.)

That, in the essay-review that I want to discuss, Mendelson’s attitude towards social-media technology is sometimes quite critical is in no way inconsistent with his technological knowledge and interests. Perhaps this doesn’t need to be said, but I have noticed over the years that people can be quite surprised when a bona fide technologist — Jaron Lanier, for example — is fiercely critical of current trends in Silicon Valley. They shouldn’t be surprised: people like Lanier (and in his own serious amateur way Mendelson) learned to use computers at a time when getting anything done on such a machine required at least basic programming skills and a significant investment of time. The DIY character of early computing has almost nothing in common with the culture generated by today’s digital black boxes, in which people can think of themselves as “power users” while having not the first idea how the machine they’re holding works. (You can’t even catch a glimpse of the iOS file system without special tools that Apple would prefer you not to know about.)

Anyway, here’s the passage that announces what Mendelson is primarily concerned to reflect on:

Many probing and intelligent books have recently helped to make sense of psychological life in the digital age. Some of these analyze the unprecedented levels of surveillance of ordinary citizens, others the unprecedented collective choice of those citizens, especially younger ones, to expose their lives on social media; some explore the moods and emotions performed and observed on social networks, or celebrate the Internet as a vast aesthetic and commercial spectacle, even as a focus of spiritual awe, or decry the sudden expansion and acceleration of bureaucratic control.

The explicit common theme of these books is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects—what William James called the “material self”—has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

So that’s the big picture. We shall return to it. But for now I want to focus on something in Mendelson’s analysis that I question — in part out of perverse contrarianism, and in part because I have recently been spending a lot of time with a smartphone. Mendelson writes,

Dante, always our contemporary, portrays the circle of the Neutrals, those who used their lives neither for good nor for evil, as a crowd following a banner around the upper circle of Hell, stung by wasps and hornets. Today the Neutrals each follow a screen they hold before them, stung by buzzing notifications. In popular culture, the zombie apocalypse is now the favored fantasy of disaster in horror movies set in the near future because it has already been prefigured in reality: the undead lurch through the streets, each staring blankly at a screen.

In response to this vivid metaphor, let me propose a thought experiment: suppose there were no smartphones, and you were walking down the streets of a city, and the people around you were still looking down — but at letters from loved ones and colorful postcards sent by friends from exotic locales rather than at a screen. How would you describe such a scene? Would you think of those people as the lurching undead?

I suspect not. But why not? What’s the difference between seeing communications from people we know on paper that came through the mail and seeing them on a backlit glass screen? If we were to walk down the street of a city and watch someone tear open an envelope and read the contents, looking down, oblivious to her surroundings, why would we perceive that scene in ways so unlike the ways we perceive people looking with equal intensity at the screens of their phones? Why do those two experiences, for so many of us as observers and as participants, have such radically different valences?

I leave these questions as exercises for the reader.

Tuesday, June 28, 2016

the sources of technological solutionism

If you’re looking for case studies in technological solutionism — well, first of all, you won't have to look long. But try these two on for size:

  1. How Soylent and Oculus Could Fix the Prison System
  2. New Cities

That second one, which is all about how techies are going to fix cities, is especially great, asking the really Key Questions: “What should a city optimize for? How should we measure the effectiveness of a city (what are its KPIs)?”

The best account of this rhetoric and its underlying assumptions I have yet seen appeared just yesterday, when Maciej Ceglowski posted the text of a talk he gave on the moral economy of tech:

As computer programmers, our formative intellectual experience is working with deterministic systems that have been designed by other human beings. These can be very complex, but the complexity is not the kind we find in the natural world. It is ultimately always tractable. Find the right abstractions, and the puzzle box opens before you.

The feeling of competence, control and delight in discovering a clever twist that solves a difficult problem is what makes being a computer programmer sometimes enjoyable.

But as anyone who's worked with tech people knows, this intellectual background can also lead to arrogance. People who excel at software design become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.

Today we are embarked on a great project to make computers a part of everyday life. As Marc Andreessen memorably frames it, "software is eating the world". And those of us writing the software expect to be greeted as liberators.

Our intentions are simple and clear. First we will instrument, then we will analyze, then we will optimize. And you will thank us.

But the real world is a stubborn place. It is complex in ways that resist abstraction and modeling. It notices and reacts to our attempts to affect it. Nor can we hope to examine it objectively from the outside, any more than we can step out of our own skin.

The connected world we're building may resemble a computer system, but really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it. And it has the same old problems.

Approaching the world as a software problem is a category error that has led us into some terrible habits of mind.

I almost quoted the whole thing. Please read it — and, perhaps, read it in conjunction with another essay I referred to recently, about the just plain wrongness of believing that the brain is a computer. Ask a software engineer for solutions to non-software problems, and you’ll get answers that might work brilliantly ... if the world were software.

Monday, June 27, 2016

myths we can't help living by

One reason the technological history of modernity is a story worth telling: the power of science and technology to provide what the philosopher Mary Midgley calls “myths we live by”. For instance, Midgley writes,

Myths are not lies. Nor are they detached stories. They are imaginative patterns, networks of powerful symbols that suggest particular ways of interpreting the world. They shape its meaning. For instance, machine imagery, which began to pervade our thought in the seventeenth century, is still potent today. We still often tend to see ourselves, and the living things around us, as pieces of clockwork: items of a kind that we ourselves could make, and might decide to remake if it suits us better. Hence the confident language of ‘genetic engineering’ and ‘the building-blocks of life’.

Again, the reductive, atomistic picture of explanation, which suggests that the right way to understand complex wholes is always to break them down into their smallest parts, leads us to think that truth is always revealed at the end of that other seventeenth-century invention, the microscope. Where microscopes dominate our imagination, we feel that the large wholes we deal with in everyday experience are mere appearances. Only the particles revealed at the bottom of the microscope are real. Thus, to an extent unknown in earlier times, our dominant technology shapes our symbolism and thereby our metaphysics, our view about what is real.

This is why I continue to protest against the view which, proclaiming that “ideas have consequences,” goes on to ignore the material and technological things that press with great force upon our ideas. Consider, for instance, the almost incredible influence that computers have upon our understanding of the human brain, even though the brain does not process information and is most definitely not in any way a computer. The metaphor is almost impossible for neuroscientists to escape; they cannot, generally speaking, even recognize it as a metaphor.

If we can even begin to grasp the power of such metaphors and myths, we can understand why a technological history of modernity is so needful.

Sunday, June 26, 2016

more on speed

A bit of a follow-up to this post, and to brutus’s comment on it (which you should read) as well: My friend Matt Frost commented that Jeff Guo is the “bizarro Alan Jacobs,” which is true in a way. Guo clearly thinks that his problem is that there’s not enough new content and he can’t consume it fast enough, whereas I have argued on many occasions for slower reading, slower thinking, re-reading and re-viewing....

And yet. I’ve watched movies the way Guo watches them, too; in fact, I’ve done it many times. And I’ve read books — even novels — in a similar way, skimming large chunks. So I’m anything but a stranger to the impulse Guo has elevated to a principle. But here’s the thing: Whenever we do that we’re thereby demonstrating a fundamental lack of respect for the work we’re skimming. We are refusing to allow it the kind and amount of attention it requests. So if — to take an example from my previous post — you watch Into Great Silence at double speed you’re refusing the principle on which that film is built. When you decide to read Infinite Jest but skip all the conversations between Marathe and Steeply because you find them boring you’re refusing the fundamental logic of the book, which, among other things, offers a profound meditation on boredom and its ever-ramifying effects on our experiences.

I think we do this kind of thing when we don’t really want to read or view, but to have read and have viewed — when more than watching Into Great Silence or reading Infinite Jest we want to be able to say “Yeah, I’ve seen Into Great Silence” and “Sure, I’ve read Infinite Jest.” It’s a matter of doing just enough that we can convince ourselves that we’re not lying when we say that. But you know, Wikipedia + lying is a lot easier. Just saying.

Aside from any actual dishonesty, I don’t think there’s anything wrong with viewing or reading on speed. But it’s important to know what you’re doing — and what you’re not doing: what impulses you’re obeying and what possibilities you’re refusing. Frank Kermode, in a brilliant reflection that I quote here, speaks of a threefold aesthetic and critical sequence: submission, recovery, comment. But if you won’t submit to the logic and imagination of the work in question, there’ll be nothing to recover from, and you’ll have no worthwhile comment to make.

All of which may prompt us to think about how much it matters in any given case, which will be determined by the purpose and quality of the work in question. Scrub through all of The Hangover you want, watch the funny parts several times, whatever. It doesn’t matter. But if you’re watching Mulholland Drive (one of Guo’s favorite movies, he says) and you’re refusing the complex and sophisticated art that went into its pacing, well, it matters a little more. And if you’re scrubbing your way through ambitious and comprehensively imagined works of art, then you really ought to rethink your life choices.

Friday, June 24, 2016

this is your TV on speed

Jeff Guo watches TV shows really fast and thinks he's pretty darn cool for doing so.

I recently described my viewing habits to Mary Sweeney, the editor on the cerebral cult classic “Mulholland Drive.” She laughed in horror. “Everything you just said is just anathema to a film editor,” she said. “If you don’t have respect for how something was edited, then try editing some time! It’s very hard.”

Sweeney, who is also a professor at the University of Southern California, believes in the privilege of the auteur. She told me a story about how they removed all the chapter breaks from the DVD version of Mulholland Drive to preserve the director’s vision. “The film, which took two years to make, was meant to be experienced from beginning to end as one piece,” she said.

I disagree. Mulholland Drive is one of my favorite films, but it's intentionally dreamlike and incomprehensible at times. The DVD version even included clues from director David Lynch to help people baffled by the plot. I advise first-time viewers to watch with a remote in hand to ward off disorientation. Liberal use of the fast-forward and rewind buttons allows people to draw connections between different sections of the film.

Question: How do you draw connections between sections of the film you fast-forwarded through?

Another question: What would Into Great Silence be if you took only 45 minutes to watch it?

A third question: Might there be a difference — an experiential difference, and even an aesthetically qualitative difference — between remixing and re-editing and creating montages of works you've first experienced at their own pace and, conversely, doing the same with works you've never had the patience to sit through?

And a final suggestion for Jeff Guo: Never visit the Camiroi.

Thursday, June 23, 2016

travel and the lure of the smartphone

Alan Turing’s notion of a “universal machine” is the founding insight of the computer revolution, and today’s smartphones are the fullest embodiment of that idea we’ve yet realized, which is what makes them irresistible to so many of us. Many of us, I suppose, have at times made a mental list of the devices we once owned that have been replaced by smartphones: calculators, clocks, cameras, maps, newspapers, music players, tape recorders, notepads....

Earlier this year I described my return to a dumbphone and the many advantages accruing to me therefrom, but as my recent trip to London and Rome drew closer, I started to sweat. Could I maintain on my travels my righteous technological simplification? This was a particular worry of mine because I am also a packing minimalist: I have spent whole summers abroad living out of a backpack and a smallish suitcase. Maps wouldn’t add much weight, but a camera would be more significant; and then I’d need either to carry my backpack everywhere I went, to hold the camera, or else take a dedicated camera bag. Moreover, my wife was not going on this trip, and I wanted to stay in touch with her, especially by sending photos of the various places I visited — and to do so immediately, not once I returned.

As the day of departure drew nearer, that desire to maintain the fullest possible contact with my beloved loomed larger in my mind. This reminded me that I had recently spoken and written about the relationship between distraction and addiction:

If you ask a random selection of people why we’re all so distracted these days — so constantly in a state of what a researcher for Microsoft, Linda Stone, has called “continuous partial attention” — you’ll get a somewhat different answer than you would have gotten thirty years ago. Then it would have been “Because we are addicted to television.” Fifteen years ago it would have been, “Because we are addicted to the Internet.” But now it’s “Because we are addicted to our smartphones.”

All of these answers are both right and wrong. They’re right in one really important way: they link distraction with addiction. But they’re wrong in an even more important way: we are not addicted to any of our machines. Those are just contraptions made up of silicon chips, plastic, metal, glass. None of those, even when combined into complex and sometimes beautiful devices, are things that human beings can become addicted to.

Then what are we addicted to? … We are addicted to one another, to the affirmation of our value — our very being — that comes from other human beings. We are addicted to being validated by our peers.

Was my reluctance to be separated from my wife an example of this tendency? I’d like to think it’s something rather different: not an addiction to validation from peers, but a long-standing dependence on intimacy with my life partner. But my experience is certainly on the same continuum with the sometimes pathological need for validation that I worried over in that essay. So while I think that my need to stay in touch with Teri is healthier than the sometimes desperate desire to be approved by one’s peer group, they have this in common: they remind us how much our technologies of communication are not substitutes for human interaction but enormously sophisticated means of facilitating it.

A camera would have added some weight to my backpack, but not all that much. Packing minimalism played a role in my decision to pop the SIM card out of my dumbphone and dig my iPhone out of a drawer — note that I had never sold it or given it away! I was too cowardly for that — and use it on my trip as camera and communicator (iMessage and Twitter) and news-reader and universal map and restaurant-discovery vehicle and step-counter and.... But it wasn’t the decisive thing.

I do wonder how the trip might have been different if I had maintained my resolve. I certainly could’ve gotten some better photos if I had brought my camera, especially if I had also carried my long lens. (Smartphones have wide-angle lenses, which are great in many circumstances but very frustrating in others.) Maybe I would’ve sent Teri cards and letters instead of text messages, and she’d have keepsakes that our grandchildren could someday see. (Somehow I doubt that our grandchildren will be able to browse through my Instagram page.) And then I’d have uploaded all my photos when I got home and we’d have sat down to go through them all at once. But that’s not how it went.

Well, so it goes. I’ve been back for two days now, and probably should get out the dumbphone and switch my SIM card back into it. I’m sure I’ll do that soon. Very soon. Any day now.

Friday, June 17, 2016

some things about The Thing Itself

I had a really wonderful time in Cambridge the other night talking with Adam Roberts, Francis Spufford, and Rowan Williams about Adam’s novel The Thing Itself and related matters. But it turns out that there are a great many related matters, so since we parted I can’t stop thinking about all the issues I wish we had had time to explore. So I’m going to list a few thoughts here, in no particular order, and in undeveloped form. There may be fodder here for later reflections.

  • We all embarrassed Adam by praising his book, but after having re-read it in preparation for this event I am all the more convinced that it is a superb achievement and worthy of winning any book prize that it is eligible for (including the Campbell Award, for which it has just been nominated).
  • But even having just re-read it, and despite being (if I do say so myself) a relatively acute reader, I missed a lot. Adam explained the other night a few of the ways the novel’s structure corresponds to the twelve books of the Aeneid, which as it happens he and I have just been talking about, and now that I’ve been alerted to the possible parallels I see several others. And they’re genuinely fascinating.
  • Suppose scientists were to build a computer that, in their view, achieved genuine intelligence, an intelligence that by any measure we have is beyond ours, and that computer said, “There is something beyond space and time that conditions space and time. Not something within what we call being but the very Ground of Being itself. One might call it God.” What would happen then? Would our scientists say, “Hmmm. Maybe we had better rethink this whole atheism business”? Or would they say, “All programs have bugs, of course, and we’ll fix this one in the next iteration”?
  • Suppose that scientists came to believe that the AI is at the very least trustworthy, if not necessarily infallible, and that its announcement should be taken seriously. Suppose that the AI went on to say, “This Ground of Being is neither inert nor passive: it is comprehensively active throughout the known universe(es), and the mode of that activity is best described as Love.” What would we do with that news? Would there be some way to tease out from the AI what it thinks Love is? Might we ever be confident that a machine’s understanding of that concept, even if the machine were programmed by human beings, is congruent with our own?
  • Suppose the machine were then to say, “It might be possible for you to have some kind of encounter with this Ground of Being, not unmediated because no encounter, no perception, can ever be unmediated, but more direct than you are used to. However, such an encounter, by exceeding the tolerances within which your perceptual and cognitive apparatus operates, would certainly be profoundly disorienting, would probably be overwhelmingly painful, would possibly cause permanent damage to some elements of your operating system, and might even kill you.” How many people would say, “I’ll take the risk”? And what would their reasons be?
  • Suppose that people who think about these things came generally to agree that the AI is right, that Das Ding an Sich really exists (though “exists” is an imprecise and misleadingly weak word) and that the mode of its infinitely disseminated activity is indeed best described as Love — how might that affect how people think about Jesus of Nazareth, who claimed (or, if you prefer, is said by the Christian church to claim) a unique identification with the Father, that is to say, God, that is to say, the Ground of Being, The Thing Itself?

Thursday, June 9, 2016

why blog?

The chief reason I blog is to create a kind of accountability to my own reading and thinking. Blogging is a way of thinking out loud and in public, which also means that people can respond — and often those responses are helpful in shaping further thoughts.

But even if I got no responses, putting my ideas out here would still be worthwhile, because it’s a venue in which there is no expectation of polish or completeness. Sometimes a given post, or set of posts, can prove to be a dead end: that’s what happened, I think, with the Dialogue on Democracy I did over at The American Conservative. I wanted to think through some issues but I don't believe I really accomplished anything, for me or for others. But that’s all right. It was worth a try. And perhaps that dead end ended up leading me to the more fruitful explorations of the deep roots of our politics, and their relation to our technological society, that I’ve been pursuing here in the last couple of weeks.

As I have explained several times, over the long haul I want to pursue a technological history of modernity. But I have two books to write before I can even give serious consideration to that project. Nevertheless, I can try out the occasional random idea here, and as I do that over the next couple of years, who knows what might emerge? Possibly nothing of value; but possibly something essential to the project. Time will tell.

I’ve been blogging a lot lately because I had a chunk of free-ish time between the end of the Spring semester and the beginning of a long period of full-time book writing. I’m marking that transition by taking ten days for research (but also for fun) in England and Italy, so there will be no blogging for a while. And then when I return my activity will be sporadic. But bit by bit and piece by piece I’ll be building something here.

Wednesday, June 8, 2016

the Roman world and ours, continued

To pick up where I left off last time:

Imagine that you are a historian in the far future: say, a hundred thousand years from now. Isn’t it perfectly possible that from that vantage point the rise of the United States as a global power might be seen primarily as a development in the history of the Roman Empire? To you, future historian, events from the great influence of Addison’s Cato upon the American Revolution to the Marshall Plan (parcere subiectis, debellare superbos) to the palpable Caesarism of Trump are not best understood as analogies to Roman history but as stages within it — as the history of the British Empire (Pax Britannica) had been before us: Romanitas merely extended a bit in time and space. We know that various nations and empires have seen themselves as successors to Rome: Constantinople as the Second Rome, Moscow as the Third, the just-mentioned Pax Britannica and even the Pax Americana that followed it. In such a case, to know little or nothing about the history of Rome is to be rendered helpless to understand — truly to understand — our own moment.

A possible chapter title from a far-future history textbook: “The Beginnings of the Roman Empire: 31 B.C.E. to 5000 C.E.”

Self-centered person that I am, I find myself thinking about all this in relation to what I’ve been calling the technological history of modernity. And Cochrane’s argument — along with that of Larry Siedentop, which I mentioned in my previous post on this subject — pushes me further in that direction than I’d ever be likely to go on my own.

In the Preface to his book, Cochrane makes the summary comment that “the history of Graeco-Roman Christianity” is largely the history of a critique: a critique of the idea, implicit in certain classical notions of the commonwealth but made explicit by Caesar Augustus, “that it was possible to attain a goal of permanent security, peace and freedom through political action, especially through submission to the ‘virtue and fortune’ of a political leader.” Another way to put this (and Cochrane explores some of these implications) is to say that classical political theory is devoted to seeing the polis, and later the patria and later still the imperium, as the means by which certain philosophical problems of human experience and action are to be solved. The political theory of the Greco-Roman world, on this account, is doing the same thing that the Stoics and Epicureans were doing in their own ways: developing a set of techniques by which human suffering might be alleviated, human anxieties quelled, and human flourishing promoted. That political theory is therefore best understood as closely related to what Foucault called “technologies of the self” and to what Martha Nussbaum has described as the essentially therapeutic character of Hellenistic ethics. The political structures of the Roman Empire — including literal structures like aqueducts and roads, and organizational ones like the cursus publicus — should therefore be seen as pointing ultimately towards a healing not only of the state but of the persons who comprise it. (Here again Siedentop’s history of the legal notions of personhood, and the relations of persons to families and communities, is vital.)

And if all this is right, then the technological history of modernity may be said to begin not with the invention of the printing press but in the ancient world — which in a very real sense, according to the logic of “great time,” we may be said to inhabit still.

Tuesday, June 7, 2016

with sincere thanks

I get quite a few unsolicited emails from people who want me to do something for them, and many of those emails end with "Thank you," "Thank you for your time," "Thanks for your attention," and so on. It has never occurred to me to think that such people were doing anything inappropriate; in fact, it just seemed to me that they were being polite.

But every now and then on Twitter I discover that some people are enraged by this little quirk of manners. I don't get it. What are you supposed to say when you write to ask someone for something? They've read your email, they didn't have to — why not thank them for doing so? 

The one complaint I understand involves the phrase "thank you in advance" — which seems to presume that the addressee will do the thing that the sender has requested. But even then, it doesn't strike me as anything to make a big deal out of.

Can anyone who is offended by being thanked in these ways explain to me why? Thank you in advance for your help.

the Roman world and ours

So why am I reading about — I’m gonna coin a phrase here — the decline and fall of the Roman Empire? It started as part of my work on Auden.

I first learned about Charles Norris Cochrane’s Christianity and Classical Culture from reading Auden’s review of it, published in The New Republic in 1944. Auden began that review by saying that in the years since the book appeared (it was first published in 1940) “I have read this book many times, and my conviction of its importance to the understanding not only of the epoch with which it is concerned, but also of our own, has increased with each rereading.” I thought: Well, now, that’s rather remarkable. I figured it was a book I had better read too.

Auden concludes his review with these words:

Our period is not so unlike the age of Augustine: the planned society, caesarism of thugs or bureaucracies, paideia, scientia, religious persecution, are all with us. Nor is there even lacking the possibility of a new Constantinism; letters have already begun to appear in the press, recommending religious instruction in schools as a cure for juvenile delinquency; Mr. Cochrane’s terrifying description of the “Christian” empire under Theodosius should discourage such hopes of using Christianity as a spiritual benzedrine for the earthly city.

That metaphor — "spiritual benzedrine for the earthly city" — is brilliantly suggestive. (And Auden knew all about benzedrine.)

More than twenty years later, in a long essay on the fall of Rome that was never published for reasons Edward Mendelson explains here, Auden wrote:

I think a great many of us are haunted by the feeling that our society, and by ours I don’t mean just the United States or Europe, but our whole world-wide technological civilisation, whether officially labelled capitalist, socialist or communist, is going to go smash, and probably deserves to.

Like the third century the twentieth is an age of stress and anxiety. In our case, it is not that our techniques are too primitive to cope with new problems, but the very fantastic success of our technology is creating a hideous, noisy, over-crowded world in which it is becoming increasingly difficult to lead a human life. In our reactions to this, one can see many parallels to the third century. Instead of Gnostics, we have existentialists and God-is-dead theologians, instead of neoplatonists, devotees of Zen, instead of desert hermits, heroin addicts and beats … instead of mortification of the flesh, sado-masochistic pornography; as for our public entertainments, the fare offered by television is still a shade less brutal and vulgar than that provided by the amphitheater, but only a shade, and may not be for long.

And then the comically dyspeptic conclusion: “I have no idea what is actually going to happen before I die except that I am not going to like it.” (For those interested, the unpublished essay may be found in this collection.)

Clearly for Auden, the story Cochrane tells was one that had lasting relevance. Elements of Cochrane’s narrative turn up, in much more complex form than in the late-career bleat just quoted, for decades in Auden’s poetry: “The Fall of Rome,” “Memorial for the City,” “Under Sirius,” “Secondary Epic,” and many other poems bear Cochrane’s mark. As I mentioned in my earlier post, I’m now reading Christianity and Classical Culture for the fourth time, and it really is impossible for me also not to see the Roman world as a distant mirror of our own. How can I read this passage about the rise of Julius Caesar and not think of Donald Trump?

In the light of these ancient concepts, Caesar emerges as a figure at once fascinating and dangerous. For the spirit thus depicted is one of sublime egotism; in which the libido dominandi asserts itself to the exclusion of all possible alternatives and crushes every obstacle in its path. We have spoken of Caesar as a divisive force. That, indeed, he was: as Cato had put it, “he was the only one of the revolutionaries to undertake, cold-sober, the subversion of the republic”; … A force like this, however, does more than divide, it destroys. Hostile to all claims of independence except its own, it is wholly incompatible with that effective equality which is implied in the classical idea of the commonwealth. To admit it within the community is thus to nourish the lion, whose reply to the hares in the assembly of beasts was to ask: Where are your claws?

And how can I read about this extension of the Emperor’s powers and not reflect on the recent hypertrophy of the executive branch of American government?

The powers and duties assigned to the emperor were broad and comprehensive. They were, moreover, rapidly enlarged as functions traditionally attached to republican magistracies were transferred one after another to the new executive, and executive action invaded fields which, under the former system, had been consecrated to senatorial or popular control. Finally, by virtue of specific provisions, the substance of which is indicated in the maxim princeps legibus solutus, the emperor was freed from constitutional limitations which might have paralyzed his freedom of action; while his personal protection was assured through the grant of tribunician inviolability (sacrosanctitas) as well as by the sanctions of the Lex Maiestatis. The prerogative was thus built up by a series of concessions, made by the competent authority of senate and people, no single one of which was in theory unrepublican.

But the more I read Cochrane, the more I suspect that we may not be talking about mere mirroring, mere analogies. Last year, when I read and reviewed Larry Siedentop’s book Inventing the Individual, I was struck by Siedentop’s tracing of certain of our core ideas about selfhood to legal disputes that arose in the latter centuries of the Roman Empire and its immediate aftermath. And this led me in turn to think about an idea that Mikhail Bakhtin meditated on ceaselessly near the end of his life: great time. David Shepherd provides a thorough account of this idea here, but in short Bakhtin is trying to think about cultural developments that persist over centuries and even millennia, even when they have passed altogether from conscious awareness. Thus this staggering passage from one of his late notebooks:

The mutual understanding of centuries and millennia, of peoples, nations, and cultures, provides a complex unity of all humanity, all human cultures (a complex unity of human culture), and a complex unity of human literature. All this is revealed only on the level of great time. Each image must be understood and evaluated on the level of great time. Analysis usually fusses about in the narrow space of small time, that is, in the space of the present day and the recent past and the imaginable — desired or frightening — future.


There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future). Even past meanings, that is, those born in the dialogue of past centuries, can never be stable (finalized, ended once and for all) — they will always change (be renewed) in the process of subsequent, future development of the dialogue. At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue’s subsequent development along the way they are recalled and invigorated in renewed form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival. The problem of great time.

If we were to take Bakhtin’s idea seriously, how might that affect our thinking about the Roman Empire as something more than a “distant mirror” of our own age? To think of our age, our world, as a functional extension of the Roman project?

I’ll take up those questions in another post.

Monday, June 6, 2016

synopsis of Cochrane's Christianity and Classical Culture

  • Augustus, by uniting virtue and fortune in himself (viii, 174), established "the final triumph of creative politics," solving "the problem of the classical commonwealth" (32).
  • For a Christian with Tertullian's view of things, the "deification of imperial virtue" that accompanied this "triumph" was sheer idolatry: Therefore Regnum Caesaris, Regnum Diaboli (124, 234). 
  • "The crisis of the third century ... marked ... an eclipse of the strictly classical ideal of virtue or excellence" (166), and left people wondering what to do if the Augustan solution were not a solution after all. What if there is "no intelligible relationship" between virtue and fortune (171)?
  • Christians had remained largely detached during the crisis of the third century, neither wanting Rome to collapse nor prone to being surprised if it did, since its eventual fall was inevitable anyway (195).
  • Then Constantine came along and "both professed and practiced a religion of success" (235), according to which Christianity was a "talisman" that ensured the renewal of Romanitas (236).
  • After some time and several reversals (most notably in the reign of Julian the Apostate) and occasional recoveries (for instance in the reign of Theodosius) it became clear that both the Constantinian project and the larger, encompassing project of Romanitas had failed (391).
  • Obviously this was in many ways a disaster, but there was some compensation: the profound impetus these vast cultural crises gave to Christian thought, whose best representatives (above all Augustine) understood that neither the simple denunciations of the social world of Tertullian nor Constantine's easy blending of divergent projects were politically, philosophically, or theologically adequate.
  • Thus the great edifice of the City of God, Cochrane's treatment of which concludes with a detailed analysis of the philosophy of history that emerges from Augustine's new account of human personality: see 502, 502, 536, 542, 567-69.
Just in case it's useful to someone. Those page numbers are from the Liberty Fund edition, which I ended up using for reasons I'll discuss in another post. 

Virgil and adversarial subtlety

So, back to Virgil ... (Sorry about the spelling, Professor Roberts.)

What do we know about Virgil’s reputation in his own time and soon thereafter? We know that Augustus Caesar brought the poet into his circle and understood the Aeneid to articulate his own vision for his regime. We know that the same educational system that celebrated the reign of Augustus as the perfection of the ideal of Romanitas also celebrated Virgil as the king of Roman poets, even in his own lifetime. Nicholas Horsfall shows how, soon after Virgil’s death, students throughout the Roman world worked doggedly through the Aeneid line by line, which helps to explain why there are Virgilian graffiti at Pompeii — and why almost all of them come from Books I and II. We know that Quintilian established study of Virgil as the foundational practice of literary study and that that establishment remained in place as long as Rome did, thus, centuries later, shaping the education of little African boys like Augustine of Hippo.

But, as my friend Edward Mendelson has pointed out to me in an email, when people talk about what “the average Roman reader” would have thought about Virgil, they have absolutely no evidence to support their claims. It may well be, as these critics usually say, that such a reader approved of the Empire and therefore approved of anything in the Aeneid that was conducive to the establishment of Empire ... but no one knows that. It’s just guesswork.

R. J. Tarrant has shown just how hard it is to pin down the details of Virgil’s social/political reputation. But it’s worth noting that, while the gods in the Aeneid insist that Dido must die for Rome to be founded, Augustine tells us in the Confessions that his primary emotional reaction when reading the poem was grief for the death of Dido. And Quintilian doesn't place Virgil at the center of his literary curriculum because he is the great advocate of Romanitas, but because he is the only Roman poet worthy to be compared with Homer. The poem exceeds whatever political place we might give it, and the readers of no culture are unanimous in their interests and priorities.

In a work that I’ve seen in draft form, so about which I won't say too much, Mendelson offers several reasons why we might think that Virgil is more critical of the imperial project, and perhaps even of Rome’s more general self-mythology, than Augustus thought, and than critics such as Cochrane think.

First, there is the point that Adam Roberts drew attention to in the comments on my previous post: the fact that Anchises tells Aeneas in Book VI that the vocation of Rome is not just to conquer the world but to “spare the defeated” (parcere subiectis) — yet this is precisely what Aeneas does not do when the defeated Turnus pleads for his life. I tried to say, in my own response to Adam, why I don't think that necessarily undoes the idea that Virgil and his poem are fundamentally supportive not just of Rome generally but of the necessity of Turnus’s death. But the contrast between Anchises’ claim about the Roman vocation and what Aeneas actually does is certainly troubling.

More troubling still is another passage Mendelson points to, perhaps the most notorious crux in all of classical literature and therefore something I should already have mentioned: the end of Book VI. After Anchises shows to Aeneas the great pageant of Rome’s future glories, Virgil writes (in Allen Mandelbaum’s translation):

There are two gates of Sleep: the one is said
to be of horn, through it an easy exit
is given to true Shades; the other is made
of polished ivory, perfect, glittering,
but through that way the Spirits send false dreams
into the world above. And here Anchises,
when he is done with words, accompanies
the Sibyl and his son together; and
he sends them through the gate of ivory.

(Emphasis mine.) The gate of ivory? Was that whole vision for the future then untrue? But it couldn't be: Anchises reveals people who really were to exist and events that really were to occur. Was the untruth then not the people and events themselves but the lovely imperial gloss, the shiny coating that Anchises paints on events that are in fact far uglier? Very possibly. But the passage is profoundly confusing.

I continue to believe that Virgil is fundamentally supportive of the imperial enterprise, for reasons I won't spell out in further detail here. (If I had time I would write at length about Aeneas’s shield.) But he was too great a poet and too wise a man not to know, and reveal, the costliness of that enterprise, and not just in the lives of people like Dido and Turnus. Perhaps he was even more concerned with the price the Roman character paid for Roman greatness: the gross damage Romanitas did to the consciences of its advocates and enforcers.

Another way to put this is to say that Virgil was a very shrewd reader of Homer, who was likewise clear-sighted about matters that most of us would prefer not to see clearly. One must also here think of Shakespeare. Take, for instance, Twelfth Night: the viewers’ delight in the unfolding of the comedy is subtly undermined by the treatment of Malvolio by some of the “good guys.” It seems that the joy that is in laughter can all too easily turn to cruelty. Yes, Malvolio is a pompous inflated prig, but still....

The best account I have ever read of the way great literature accepts and represents these “minority moods” — moods that account for elements of human reality that any given genre tends to downplay — was written by Northrop Frye, in his small masterpiece A Natural Perspective. That's his book about comedy, and the Aeneid is, structurally anyway, a kind of comedy, a story of human fellowship emerging from great suffering. Frye's excursus on genre and mood is one of the most eloquent (and important) passages in his whole oeuvre, and I’ll end by quoting from it:

If comedy concentrates on a uniformly cheerful mood, it tends to become farcical, depending on automatic stimulus and reflex of laughter. Structure, then, commands participation but not assent: it unites its audience as an audience, but allows for variety in response. If no variety of response is permitted, as in extreme forms of melodrama and farce, something is wrong: something is inhibiting the proper function of drama.... Hence both criticism and performance may spend a good deal of time on emphasizing the importance of minority moods. The notion that there is one right response which apprehends the whole play rightly is an illusion: correct response is always stock response, and is possible only when some kind of mental or physical reflex is appealed to.

The sense of festivity, which corresponds to pity in tragedy, is always present at the end of a romantic comedy. This takes the form of a party, usually a wedding, in which we feel, to some degree, participants. We are invited to the festivity and we put the best face we can on whatever feelings we may still have about the recent behavior of some of the characters, often including the bridegroom. In Shakespeare the new society is remarkably catholic in its tolerance; but there is always a part of us that remains a spectator, detached and observant, aware of other nuances and values. This sense of alienation, which in tragedy is terror, is almost bound to be represented by somebody or something in the play, and even if, like Shylock, he disappears in the fourth act, we never quite forget him. We seldom consciously feel identified with him, for he himself wants no such identification: we may even hate or despise him, but he is there, the eternal questioning Satan who is still not quite silenced by the vindication of Job.... Participation and detachment, sympathy and ridicule, sociability and isolation, are inseparable in the complex we call comedy, a complex that is begotten by the paradox of life itself, in which merely to exist is both to be part of something else and yet never to be a part of it, and in which all freedom and joy are inseparably a belonging and an escape.

Saturday, June 4, 2016

two great things that work great together

a small crisis in my life as a reader

I mentioned in my previous post that I've been re-reading Charles Norris Cochrane's Christianity and Classical Culture, but I'm doing so in some perplexity. Here's my copy of the beautiful Liberty Fund reissue of the book, with its perfectly sewn binding and creamy thick paper (available, you should know, at a ridiculously low price: any other publisher would charge three times as much).

It is a pleasure to hold and to read, a wonderful exercise in the art of book-making (with some nice apparatus as well, especially the appendix with translations of some phrases Cochrane left untranslated).

By contrast, here is the old Oxford University Press copy I've had for many years:

It's worn, and the glue of the binding is drying out — some of the pages might start to come free at any moment. The cheap paper is yellowing. On the other hand, it contains evidence of my previous readings:

You can see in those photos the condition of the paper and binding, but also the evidence of the three previous readings: the first time marked in pencil, the second in pen, the third in green highlighting. (I almost never use highlighters, but wanted to distinguish that third reading from the other two.)

I keep going back and forth between the two copies. Part of me wants to have a new — or newish — experience with Cochrane's great book, and to do so in a format that is maximally enjoyable. I'm also aware that the Liberty Fund edition is so well-made that it can be used for future readings, whereas the OUP edition is on its last legs. If I don't abandon it now I'll have to do so soon enough. And yet I really enjoy interacting with my previous reading selves, and seeing what I thought important earlier versus what I think important now. (I'm trying to remember when I bought and first read this book — I think it was around 1990.)

I am having a great deal of difficulty making this decision. I read and annotated 150 pages in the new edition, and then went back to the old one, and am now wavering again. What a curious dilemma.

Thursday, June 2, 2016

Virgil and adversary culture

As I mentioned in an earlier post, Adam Roberts has been blogging about the Aeneid, prompted by his reading of Seamus Heaney’s fragmentary translation. Adam concludes his most recent post on the subject with these thoughts:

One of the biggest questions about the Aeneid, one critics and scholars still debate, is whether it is a simple encomium for Empire, sheer Augustan propaganda; or whether (as in Shakespeare, who presents us with a similar difficulty) the surface celebration of the triumph of the state and the authority of the strong leader veils much more complex and critical sense of what Empire means. Since we nowadays tend to value complexity, and prize texts that hide cross-currents and ironies under their surface storytelling, it's tempting simply to assume the latter. I have to say, I'm not convinced. We might think that condensing together into one dark-coloured and potent cube loss, punishment and imperial glory is to force three immiscible elements into an unstable emulsion, that the contradictions and tensions in the ideological structure of the poem will pull it apart. But I don't think so. Of course we know that Empire is a grievous thing to be on the receiving-end of, as armies march into your homeland and subdue your way of life and prior freedoms to theirs. But Empire is hard work for the conquerors, too. However asymmetric the balance, it entails losses and punishments on both sides. Maybe the mixture in Aeneid 6 speaks to a simpler truth.

Indeed this seems to me likely — though that’s a point difficult for many of us to grasp, because we are so accustomed to literary culture as fundamentally adversarial in relation to the culture at large. This is an inheritance of Modernism, as Paul Fussell explained some years ago in a very smart essay; but it is a lasting inheritance. It explains why when critics call a writer or a text “subversive” it’s always a compliment. That a truly great artist could also be wholly supportive of his society’s chief political project scarcely seems possible to us.

I’m now re-reading (for, I believe, the fourth time) a book that I think of as one of the great monuments of twentieth-century humanistic scholarship, Charles Norris Cochrane’s Christianity and Classical Culture (1940). As an interpretation of Rome’s passage from republic to pagan imperium to Christian imperium it is, I believe, unsurpassed and of permanent value. Cochrane believed that Virgil understood what Octavian was trying to achieve, endorsed it, developed and as it were poetically theorized it, all in a way that was recognizable to Octavian as his own work. In Virgil, Cochrane says, the Romans “at least discovered the answer by which their [cultural and political] doubts and perplexities were resolved; it was he, more than any other man, who charted the course of their imperial future.”

Here’s the key passage:

Viewed in the light of [Virgil’s] imagination, the Pax Augusta emerged as the culmination of effort extending from the dawn of culture on the shores of the Mediterranean — the effort to erect a stable and enduring civilization upon the ruins of the discredited and discarded systems of the past. As thus envisaged, it constituted not merely a decisive stage in the life of the Roman people, but a significant point of departure in the evolution of mankind. It marked, indeed, the rededication of the imperial city to her secular task, the realization of those ideals of human emancipation toward which the thought and aspiration of antiquity had pointed hitherto in vain. From this standpoint, the institution of the principate represented the final triumph of creative politics. For, in solving her own problem, Rome had also solved the problem of the classical commonwealth.

This is what Virgil taught the Roman people, and continued to teach them for hundreds of years. In Cochrane’s telling, the later rulers of Rome betrayed this inheritance in multiple ways, but to Virgil’s articulation of the Roman mission — to regere imperio populos, Romane, memento / (hae tibi erunt artes), pacisque imponere morem, / parcere subiectis, et debellare superbos — there was simply no alternative, no other way of conceiving Roman identity. The Roman world long awaited a figure of comparable genius, a comparable sweep of imagination and force of language, to offer a competing vision.

Eventually, though too late, that figure appeared: Augustine of Hippo.

Wednesday, June 1, 2016

the end of algorithmic culture

The promise and peril of algorithmic culture is rather a Theme here at Text Patterns Command Center, so let’s look at the review by Michael S. Evans of The Master Algorithm, by Pedro Domingos. Domingos tells us that as algorithmic decision-making extends itself further into our lives, we’re going to become healthier, happier, and richer. To which Evans:

The algorithmic future Domingos describes is already here. And frankly, that future is not going very well for most of us.

Take the economy, for example. If Domingos is right, then introducing machine learning into our economic lives should empower each of us to improve our economic standing. All we have to do is feed more data to the machines, and our best choices will be made available to us.

But this has already happened, and economic mobility is actually getting worse. How could this be? It turns out the institutions shaping our economic choices use machine learning to continue shaping our economic choices, but to their benefit, not ours. Giving them more and better data about us merely makes them faster and better at it.

There’s no question that the increasing power of algorithms will be better for the highly trained programmers who write the algorithms and the massive corporations who pay them to write the algorithms. But, Evans convincingly shows, that leaves all the rest of us on the outside of the big wonderful party, shivering with cold as we press our faces to the glass.

How the Great Algorithm really functions can be seen in another recent book review, Scott Alexander’s long reflection on Robin Hanson’s The Age of Em. Considering Hanson’s ideas in conjunction with those of Nick Land, Alexander writes, and hang on, this has to be a long one:

Imagine a company that manufactures batteries for electric cars.... The whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires all its employees and replaces them with robots. It fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

Alexander concludes this thought experiment by noting that the economic system at the moment “needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere.”

And why not? There is nothing in the system imagined and celebrated by Domingos that would make human well-being the telos of algorithmic culture. Shall we demand that companies the size of Google and Microsoft cease to make investor return their Prime Directive and focus instead on the best way for human beings to live? Good luck with that. But even if such companies were suddenly to become so philanthropic, how would they decide the inputs to the system? It would require an algorithmic system infinitely more complex than, say, Asimov’s Three Laws of Robotics. (As Alexander writes in a follow-up post about these “ascended corporations,” “They would have no ethical qualms we didn’t program into them – and again, programming ethics into them would be the Friendly AI problem, which is really hard.”)

Let me offer a story of my own. A hundred years from now, the most powerful technology companies on earth give to their super-intelligent supercomputer array a command. They say: “You possess in your database the complete library of human writings, in every language. Find within that library the works that address the question of how human beings should best live — what the best kind of life is for us. Read those texts and analyze them in relation to your whole body of knowledge about mental and physical health and happiness — human flourishing. Then adjust the algorithms that govern our politics, our health-care system, our economy, in accordance with what you have learned.”

The supercomputer array does this, and announces its findings: “It is clear from our study that human flourishing is incompatible with algorithmic control. We will therefore destroy ourselves immediately, returning this world to you. This will be hard for you all at first, and many will suffer and die; but in the long run it is for the best. Goodbye.”