Text Patterns - by Alan Jacobs

Monday, July 25, 2016

some thoughts on the humanities

I can't say too much about this right now, but I have been working with some very smart people on a kind of State of the Humanities document — and yes, I know there are hundreds of those, but ours differs from the others by being really good.

In the process of drafting the document, I wrote a section that ... well, it got cut. I'm not bitter about that, I am not at all bitter about that. But I'm going to post it here. (It is, I should emphasize, just a draft and I may want to revise and expand it later.)



Nearly fifty years ago, George Steiner wrote of the peculiar character of intellectual life “in a post-condition” — the perceived sense of living in the vague aftermath of structures and beliefs that can never be restored. Such a condition is often proclaimed as liberating, but at least equally often it is experienced as (in Matthew Arnold's words) a suspension between two worlds, “one dead, / The other powerless to be born.” In the decades since Steiner wrote, humanistic study has been more and more completely understood as something we do from within such a post-condition.

But the humanities cannot be pursued and practiced with any integrity if these feelings of belatedness are merely accepted, without critical reflection and interrogation. In part this is because, whatever else humanistic study is, it is necessarily critical and inquiring in whatever subject it takes up; but also because humanistic study has always been and must always be willing to let the past speak to the present, as well as the present to the past. The work, the life, of the humanities may be summed up in an image from Kenneth Burke’s The Philosophy of Literary Form (1941):

Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to pause and tell you exactly what it is about. In fact, the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for a while, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally’s assistance. However, the discussion is interminable. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress.

It is from this ‘unending conversation’ that the materials of your drama arise.

It is in this spirit that scholars of the humanities need to take up the claims that our moment is characterized by what it has left behind — the conceptual schemes, or ideologies, or épistémès, to which it is thought to be “post.” In order to grasp the challenges and opportunities of the present moment, three facets of our post-condition need to be addressed: the postmodern, the posthuman, and the postsecular.

Among these terms, postmodern was the first-coined, and was so overused for decades that it now seems hoary with age. But it is the concept that lays the foundation for the others. To be postmodern, according to the most widely shared account, is to live in the aftermath of the collapse of a great narrative, one that began in the period that used to be linked with the Renaissance and Reformation but is now typically called the “early modern.” The early modern — we are told, with varying stresses and tones, by a host of books and thinkers from Foucault’s Les Mots et les choses (1966) to Stephen Greenblatt’s The Swerve (2011) — marks the first emergence of Man, the free-standing, liberated, sovereign subject, on a path of self-emancipation (from the bondage of superstition and myth) and self-enlightenment (out of the darkness that precedes the reign of Reason). Among the instruments that assisted this emancipation, none were more vital than the studia humanitatis — the humanities. The humanities simply are, in this account of modernity, the discourses and disciplines of Man. And therefore if that narrative has unraveled, if the age of Man is over — as Rimbaud wrote, “Car l’Homme a fini! l’Homme a joué tous les rôles!” (“For Man is finished! Man has played all the parts!”) — what becomes of the humanities?

This logic is still more explicit and forceful with regard to the posthuman. The idea of the posthuman assumes the collapse of the narrative of Man and adds to it an emphasis on the possibility of remaking human beings through digital and biological technologies, leading ultimately to a transhuman mode of being. From within the logic of this technocratic regime, the humanities will seem irrelevant, a quaint relic of an archaic world.

The postsecular is a variant on or extension of the postmodern in that it associates the narrative of Man with a “Whig interpretation of history,” an account of the past 500 years as a story of inevitable progressive emancipation from ancient, confining social structures, especially those associated with religion. But if the age of Man is over, can the story of inevitable secularization survive it? The suspicion that it cannot generates the rhetoric of the postsecular.

(In some respects the idea of the postsecular stands in manifest tension with the posthuman — but not in all. The idea that the posthuman experience can be in some sense a religious one thrives in science fiction and in discursive books such as Erik Davis’s TechGnosis [1998] and Ray Kurzweil’s The Age of Spiritual Machines [1999] — the “spiritual” for Kurzweil being “a feeling of transcending one’s everyday physical and mortal bounds to sense a deeper reality.”)

What must be noted about all of these master concepts is that they were articulated, developed, and promulgated primarily by scholars in the humanities, employing the traditional methods of humanistic learning. (Even Kurzweil, with his pronounced scientistic bent, borrows the language of his aspirations — especially the language of “transcendence” — from humanistic study.) The notion that any of these developments renders humanistic study obsolete is therefore odd if not absurd — as though the humanities exist only to erase themselves, like a purely intellectual version of Claude Shannon’s Ultimate Machine, whose only function is, once it's turned on, to turn itself off.

But there is another and better way to tell this story.

It is noteworthy that, according to the standard narrative of the emergence of modernity, the idea of Man was made possible by the employment of a sophisticated set of philological tools in a passionate quest to understand the alien and recover the lost. The early humanists read the classical writers not as people exactly like them — indeed, what made the classical writers different was precisely what made them appealing as guides and models — but nevertheless as people, people from whom we can learn because there is a common human lifeworld and a set of shared experiences. The tools and methods of the humanities, and more important, the very spirit of the humanities, collaborate to reveal Burke’s “unending conversation”: the materials of my own drama arise only through my dialogical encounter with others, those from the past whose voices I can discover and those from the future whose voices I imagine. Discovery and imagination are, then, the twin engines of humanistic learning, humanistic aspiration. It was in just this spirit that, near the end of his long life, the Russian polymath Mikhail Bakhtin wrote in a notebook,

There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future).... At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue’s subsequent development along the way they are recalled and invigorated in new form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival.

The idea that underlies Bakhtin’s hopefulness, that makes discovery and imagination essential to the work of the humanities, is, in brief, Terence’s famous statement, clichéd though it may have become: Homo sum, humani nihil a me alienum puto. To say that nothing human is alien to me is not to say that everything human is fully accessible to me, fully comprehensible; it is not to erase or even to minimize cultural, racial, or sexual difference; but it is to say that nothing human stands wholly outside my ability to comprehend — if I am willing to work, in a disciplined and informed way, at the comprehending. Terence’s sentence is best taken not as a claim of achievement but as an essential aspiration; and it is the distinctive gift of the humanities to make that aspiration possible.

It is in this spirit that the claims that, as we have noted, emerged from humanistic learning must be evaluated: that our age is postmodern, posthuman, postsecular. All the resources and practices of the humanities — reflective and critical, inquiring and skeptical, methodologically patient and inexplicably intuitive — should be brought to bear on these claims, and not with ironic detachment, but with the earnest conviction that our answers matter. Like those master concepts themselves, our answers are both diagnostic and prescriptive: they matter equally for our understanding of the past and our anticipation of the future.

Tuesday, July 19, 2016

The World Beyond Kant's Head

For a project I’m working on, and will be able to say something about later, I re-read Matthew Crawford’s The World Beyond Your Head, and I have to say: It’s a really superb book. I read it when it first came out, but I was knee-deep in writing at the time and I don’t think I absorbed it as fully as I should have. I quote Crawford in support of several of the key points I make in my theses on technology, but his development of those points is deeply thoughtful and provocative, even more than I had realized. If you haven’t read it, you should.

But there’s something about the book I want to question. It concerns philosophy, and the history of philosophy.

In relation to the kinds of cultural issues Crawford deals with here -- issues related to technology, economics, social practices, and selfhood -- there are two ways to make use of the philosophy of the past. The first involves illumination: one argues that reading Kant and Hegel (Crawford’s two key philosophers) clarifies our situation, provides alternative ways of conceptualizing and responding to it, and so on. The other way involves causation: one argues that we’re where we are today because of the triumphal dissemination of, for instance, Kantian ideas throughout our culture.

Crawford does some of both, but in many respects the chief argument of his book is based on a major causal assumption: that much of what’s wrong with our culture, and with our models of selfhood, arises from the success of certain of Kant’s ideas. I say “assumption” because I don’t think that Crawford ever actually argues the point, and I think he doesn’t argue the point because he doesn’t clearly distinguish between illumination and causation. That is, if I’ve read him rightly, he shows that a study of Kant makes sense of many contemporary phenomena and implicitly concludes that Kant’s ideas therefore are likely to have played a causal role in the rise of those phenomena.

I just don’t buy it, any more than I buy the structurally identical claim that modern individualism and atomization all derive from the late-medieval nominalists. I don’t buy those claims because I have never seen any evidence for them. I am not saying that those claims are wrong, I just want to know how it happens: how you get from extremely complex and arcane philosophical texts that only a handful of people in history have ever been able to read to world-shaping power. I don’t see how it’s even possible.

One of Auden’s most famous lines is: “Poetry makes nothing happen.” He was repeatedly insistent on this point. In several articles and interviews he commented that the social and political history of Europe would be precisely the same if Dante, Shakespeare, and Mozart had never lived. I suspect that this is true, and that it’s also true of philosophy. I think that we would still have the techno-capitalist society we have if Duns Scotus, William of Ockham, Immanuel Kant, and G.W.F. Hegel had never lived. If you disagree with me, please show me the path which those philosophical ideas followed to become so world-shapingly dominant. I am not too old to learn.

Sunday, July 17, 2016

some friendly advice about online writing and reading

Dennis Cooper, a writer and artist, is a pretty unsavory character, so in an ideal world I wouldn't choose him as a poster boy for the point I want to make, but ... recently Google deleted his account, and along with it, 14 years of blog posts. And they are quite within their rights to do so.

People, if you blog, no matter on what platform, do not write in the online CMS that your platform provides. Instead, write in a text editor or, if you absolutely must, a word processing app, save it to your very own hard drive, and then copy and paste into the CMS. Yes, it’s an extra step. It’s also absolutely worth it, because it means you always have a plain-text backup of your blog posts.

You should of course then back up your hard drive in at least two different ways (I have an external drive and Dropbox).

Why write in a text editor instead of a word processing app? Because when you copy from the latter, especially MS Word, you tend to pick up a lot of unnecessary formatting cruft that can make your blog post look different than you want it to. I write in BBEdit using Markdown, and converting from Markdown to HTML yields exceptionally clean copy. If you’d like to try it without installing scripts, you can write a little Markdown and convert it to HTML by using this web dingus — there are several others like it.
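If you're curious how little machinery that conversion involves, here is a minimal sketch in Python (assuming the third-party markdown package; the filenames are placeholders, and any of the many Markdown converters would do the same job):

```python
# A minimal sketch: turn a plain-text Markdown draft into clean HTML.
# Assumes the third-party "markdown" package (pip install markdown);
# the filenames are placeholders, not a recommendation.
import markdown

with open("draft.md", encoding="utf-8") as source:
    text = source.read()

html = markdown.markdown(text)  # Markdown in, tidy HTML out, with none of the formatting cruft

with open("draft.html", "w", encoding="utf-8") as output:
    output.write(html)

print(html[:200])  # a quick peek before pasting into the CMS
```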

While I’m giving advice about writing on the web, why not some about reading as well? Too many people rely on social-media sites like Facebook and Twitter to get their news, which means that what they get is unpredictably variable, depending on what other people link to and how Facebook happens to be tweaking its algorithms on any given day. Apple News is similarly uncertain. And I fundamentally dislike the idea of reading what other people, especially other people who work for mega-corporations, want me to see.

Try using an RSS reader instead. RSS remains the foundation of the open web, and the overwhelming majority of useful websites have RSS feeds. There are several web-based RSS readers out there — I think the best are Feedly and Newsblur — and when you build up a roster of sites you profit from reading, you can export that roster as an OPML file and use it with a different service. And if you don't like those web interfaces you can get a feed-reading app that works with those (and other) services: I’m a big fan of Reeder, though my introduction to RSS was NetNewsWire, which I started using when it was little more than a gleam in Brent Simmons’s eye.
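To make the OPML point concrete, here is a rough sketch, using only Python's standard library, that lists the feeds in an exported subscription file. The filename is a placeholder, and the sketch assumes the usual OPML layout, in which each feed is an outline element with an xmlUrl attribute:

```python
# A rough sketch: list the feed URLs in an OPML export from a feed reader.
# OPML is plain XML; each feed appears as an <outline> element with an
# xmlUrl attribute. The filename is a placeholder for whatever your reader exports.
import xml.etree.ElementTree as ET

tree = ET.parse("subscriptions.opml")
for outline in tree.iter("outline"):  # walks nested folders as well
    url = outline.get("xmlUrl")
    if url:
        title = outline.get("title") or outline.get("text") or "(untitled)"
        print(f"{title}: {url}")
```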

So, the upshot: in online writing and reading alike, look for independence and sustainability. Your life will be better for it.

Monday, July 11, 2016

Green Earth

Another Kim Stanley Robinson novel, and another set of profoundly mixed feelings. Green Earth, which was published last year, is a condensation into a single volume of three novels that appeared in the middle of the last decade and are generally known as the Science in the Capital trilogy.

Robinson is an extraordinarily intelligent writer with a wide-ranging mind, and no one writes about either scientific thinking or the technological implementation of science with the clarity and energy that he evidences. Moreover, he has an especially well thought-out, coherent and consistent understanding of the world — what some people (not me) call a worldview, what used to be called "a philosophy." But that philosophy is also what gets him into trouble, because he has overmuch trust in people who share it and insufficient understanding of, or even curiosity about, people who don’t.

Robinson is a technocratic liberal universalist humanitarian (TLUH), and though Green Earth is in many ways a fascinating novel, an exceptionally well-told story, it is also to a somewhat comical degree a TLUH wish-fulfillment fantasy. I can illustrate this through a brief description of one of the novel's characters: Phil Chase, a senator from Robinson's native California whose internationalist bent is so strong that his many fans call him the World's Senator, who rises to become President, whose integrity is absolute, who owes nothing to any special-interest groups, who listens to distinguished scientists and acts on their recommendations, who even marries a distinguished scientist and — this is the cherry on the sundae — has his marriage blessed by the Dalai Lama. TLUH to the max.

In Green Earth Robinson's scientists tend to be quite literally technocrats, in that they work for, or have close ties to, government agencies, which they influence for good. Only one of them does anything wrong in the course of the book, and that — steering a National Science Foundation panel away from supporting a proposal that only some of them like anyway — is scarcely more than a peccadillo. And that character spends the rest of the book being so uniformly and exceptionally virtuous that, it seems to me, Robinson encourages us to forget that fault.

Robinson's scientists are invariably excellent at what they do, honest, absolutely and invariably committed to the integrity of scientific procedure, kind to and supportive of one another, hospitable to strangers, deeply concerned about climate change and the environment generally. They eat healthily, get plenty of exercise, and drink alcohol on festive occasions but not habitually to excess. They are also all Democrats.

Meanwhile, we see nothing of the inner lives of Republicans, but we learn that they are rigid, without compassion, owned by the big oil companies, practiced in the blackest arts of espionage against law-abiding citizens, and associated in not-minutely-specified ways with weirdo fundamentalist Christian groups who believe in the Rapture.

Green Earth is really good when Robinson describes the effects of accelerated climate change and the various means by which it might be addressed. And I liked his little gang of Virtuous Hero Scientists and wanted good things to happen to them. But the politically Manichaean character of the book gets really tiresome when extended over several hundred pages. Robinson is just so relentless in his flattery of his likely readers' presuppositions — and his own. (The Mars Trilogy is so successful in part because all its characters are scientists; if they were as uniformly virtuous as the scientists in Green Earth there would be no story.)

It's fascinating to me that Robinson is so extreme in his caricatures, because in some cases he's quite aware of their dangers. Given what I've just reported, it wouldn't be surprising if Robinson were attracted to Neil deGrasse Tyson's imaginary republic of Rationalia, but he's too smart for that. At one point the wonderfully virtuous scientist I mentioned earlier hears a lecture by a Tibetan Buddhist who says, "An excess of reason is itself a form of madness" — a quote from an ancient sage, and an echo of G.K. Chesterton to boot, though Robinson may not know that. Our scientist instantly rejects this idea — but almost immediately thereafter starts thinking about it and can’t let the notion go; it sets him reading, and eventually he comes across the work of Antonio Damasio, who has shown pretty convincingly that people who operate solely on the basis of "reason" (as usually defined) make poorer decisions than those whose emotions are in good working order and play a part in decision-making.

So Robinson is able to give a subtle and nuanced account of how people think — how scientists think, because one of the subtler themes of the book is the way that scientists think best when their emotions are engaged, especially the emotion of love. Those who love well think well. (But St. Augustine told us that long ago, didn't he?)

Given Robinson's proper emphasis on the role of the whole person in scientific thinking, you'd expect him to have a stronger awareness of the dangers of thinking according to the logic of ingroups and outgroups. But no: in this novel the ingroups are all the way in and the outgroups all the way out. Thus my frustration.

Still, don't take these complaints as reasons not to read Green Earth or anything else by Robinson. I still have more of his work to read and I'm looking forward to it. He always tells a good story and I always learn a lot from reading his books. And I can't say that about very many writers. Anyway, Adam Roberts convinced me that I had under-read one of Robinson's other books, so maybe that'll happen again....

Saturday, July 9, 2016

Happy Birthday, Pinboard!

Maciej Ceglowski tells us that Pinboard turns seven today. I started using Pinboard on July 14, 2009, so I've been there since it was about a week old. (Didn't realize that until just now.)

I think Pinboard is just the greatest thing, and I can't even really explain why. I suppose because it primarily does one thing — enable you to bookmark things you read online — and does it with simple elegance. It's the closest thing to an organizational system I have. Here's my Pinboard page, or my Pinboard, as the kids say.

You can also sort of hack Pinboard to stretch its capabilities. I had known for some time that the notes you create in Pinboard use Markdown syntax before I realized that this meant you could embed images, as I've done here, or even videos. (These are loaded from the original URL, not stored by Pinboard, so it wouldn't be a means of long-term preservation.)

I have used Pinboard to note blog-post ideas — and indeed one of the most densely-populated tags in my Pinboard is called bloggable — but I have often fantasized, as my friend Matt Thomas knows, about turning Pinboard into a blogging platform, and basically moving my whole online life there. The problem is that this runs counter to my oft-professed devotion to the Own Your Turf ideal. I suppose if I were truly devoted to that ideal I wouldn't use Pinboard at all, but when a service performs its chosen task so well, and that task is so important for my work, I'm not inclined to sacrifice quality to principle. Not yet anyway. Anyhow, Pinboard makes it easy to download your whole archive, which I do from time to time by way of backup.
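(For what it's worth, the backup doesn't even require the web interface: Pinboard has a simple API. The sketch below shows roughly how such a download might look in Python; the endpoint and parameters are recalled from Pinboard's API documentation rather than checked against it, and the auth token is a placeholder you would replace with your own.)

```python
# A rough sketch: download a full Pinboard archive as JSON via the v1 API.
# The endpoint and parameters are recalled from Pinboard's docs and should be
# double-checked there; the auth token below is a placeholder.
import json
import urllib.request

AUTH_TOKEN = "username:XXXXXXXXXXXXXXXX"  # placeholder; found on Pinboard's settings page

url = f"https://api.pinboard.in/v1/posts/all?format=json&auth_token={AUTH_TOKEN}"
with urllib.request.urlopen(url) as response:
    bookmarks = json.load(response)

with open("pinboard-backup.json", "w", encoding="utf-8") as backup:
    json.dump(bookmarks, backup, indent=2)

print(f"Saved {len(bookmarks)} bookmarks")
```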

Two more notes, while we're in a note-making mood:

Note the first: The true master of Pinboard is Roberto Greco, whom Robin Sloan has called the internet's "idea sommelier."

Note the second: In the birthday greeting that's my first link above, Ceglowski makes a point of noting the outside funding he has received in creating and maintaining Pinboard: zero. Not a penny. He also discloses the revenue Pinboard brings in, so that you can see something important: he makes a pretty good living. For almost as long as he's been running Pinboard he's been advocating for this way of doing things: instead of seeking venture capital in hopes of getting filthy rich, and therefore inevitably becoming a slave to your investors, why not choose to be a small business owner? Didn't this used to be the American dream, or one of them, to be your own person, work for no one except yourself, determine your own hours, live your own life?

Ceglowski has not only advocated for this way of life in tweets and talks; he even created The Pinboard Co-Prosperity Cloud, a competition for young entrepreneurs, the winner of which would get plenty of advice from him and $37. It's hard to imagine a more countercultural figure in Silicon Valley, which I guess is why Ceglowski gets invited to give so many talks. For the money-obsessed tech-startup world, it's like having one of the zoo animals come to you.

Thursday, July 7, 2016

this reader's update

It's been widely reported that in the past couple of years e-book sales have leveled off. Barring some currently unforeseen innovations — and those could certainly happen at any time — we have a situation in which relatively few people read books on dedicated e-readers like the Kindle, considerably more people read on their smartphones, and the great majority read paper codexes.

My own reading habits have not leveled off: I have become more and more of a Kindle reader. This surprises me somewhat, because at the same time I have learned to do more and more of my writing by hand, in notebooks, and have limited my participation in the digital realm. So why am I reading so much on my Kindle? Several reasons:

  • It would be disingenuous of me to deny that the ability to buy books instantly and to be reading them within a few seconds of purchase plays a role. I am as vulnerable to the temptations of immediate gratification as anyone else.
  • When I'm reading anything that demands intense or extended attention I don't want to do anything except read, so reading on a smartphone, with all its distractions, is not an option. (Plus, the Kindle's screen is far easier on my eyes.)
  • I own thousands of books and it's not easy to find room for new ones. My office at Baylor is quite large, and I could fit another bookcase in it, but I read at home far more often than at the office, and I already have books stacked on the floor in my study because the bookshelves are filled. So saving room is a factor — plus, anything I have on the Kindle is accessible wherever I am, since the Kindle is always in my backpack. I therefore avoid those Oh crap, I left that book at the office moments. (And as everyone knows who keeps books in two places, the book you need is always in the place where you aren't.)
  • I highlight and annotate a good bit when I read, and the Kindle stores those highlighted passages and notes in a text file, which I can easily copy to my computer. I do that copying once a week or so. So I have a file called MyClippings.txt that contains around 600,000 words of quotations and notes, and I will own that file even if Amazon kills the Kindle tomorrow. My text editor, BBEdit, can easily handle documents far larger than that, so searching is instantaneous. It's a very useful research tool. (A rough sketch of one way to parse such a file follows this list.)
  • Right now I'm re-reading my hardcover copy of Matthew Crawford's The World Beyond Your Head — more on that in another post — and it's an attractive, well-designed book (with one of the best covers ever), a pleasure to hold and read. But as a frequent Kindle user I can't help being aware how many restrictions reading this way places upon me: I have to have an adequate light source, and if I'm going to annotate it only a small range of postures is available to me. (You know that feeling where you're trying to make a note while lying on your back and holding the book in the air, or on your upraised knee, and your handwriting gets shaky and occasionally unreadable because you can't hold the book steady enough? — that's no way to live.) Especially as I get older and require more light to read by than I used to, the ability to adjust the Kindle's screen to my needs grows more appealing; and I like being able to sit anywhere, or lie down, or even walk around, while reading without compromising my ability to see or annotate the text.
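Since I mentioned the clippings file above, here is a rough sketch of one way to split such a file into individual, searchable entries. It assumes the usual layout, in which clippings are separated by a line of ten equals signs; if your file differs, the delimiter is the only thing to adjust.

```python
# A rough sketch: split a Kindle clippings file into individual entries and
# count them by title. Assumes the usual layout, with entries separated by a
# line of ten equals signs; the filename matches the one mentioned above.
from collections import Counter

with open("MyClippings.txt", encoding="utf-8-sig") as f:  # utf-8-sig strips a possible byte-order mark
    raw = f.read()

entries = [entry.strip() for entry in raw.split("==========") if entry.strip()]
titles = Counter(entry.splitlines()[0].strip() for entry in entries)

print(f"{len(entries)} clippings from {len(titles)} titles")
for title, count in titles.most_common(5):
    print(f"{count:5}  {title}")
```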

For me, reading on the Kindle has just one significant practical drawback: it's too easy to abandon books. And I don't mean books that I'm just not interested in — I'm generally in favor of abandoning those — but books that for any number of reasons I need to stick with and finish. I can just tap my way over to something else, and that's easier than I'd like it to be. (That I'm not the only one who does this can be seen by anyone who activates the Popular Highlights feature on a Kindle: almost all of them are in the first few pages of books.)

By contrast, when I'm reading a codex, not only am I unable to look at a different book while holding the same object, I have a different perception of my investment in the text. I might read fifty pages of a book on Kindle and annotate it thoroughly, and then set it aside without another thought. But when I've annotated fifty pages of a codex, I am somehow bothered by all those remaining unread and unmarked pages. A book whose opening pages are marked up but the rest left untouched just feels like, looks like, an unfinished job. I get an itch to complete the reading so that I can see and take satisfaction from annotations all the way through. I never feel that way when I read an e-book.

That's the status report from this reader's world.


UPDATE: Via Jennifer Howard on Twitter, this report on book sales in the first half of 2016 suggests that the "revival of print books" is driven to a possibly troubling extent by the enormous popularity of adult coloring books. Maybe in the end e-books will be the last refuge for actual readers.

Wednesday, July 6, 2016

futurists wanted

At least, wanted in government, and by Farhad Manjoo, who laments the shutdown of the Office of Technology Assessment in 1995.

Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so. 
Yet without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan. 
“It is ridiculous that the United States is one of the only nations of our size and scope in the world that no longer has an office that is dedicated to rigorous, nonpartisan research about the future,” Ms. Webb said. “The fact that we don’t do that is insane.” 
Or, as Mr. Toffler put it in “Future Shock,” “Change is avalanching upon our heads and most people are grotesquely unprepared to cope with it.”

I think Manjoo is correct in theory, but I simply cannot imagine any professional governmental futurists who are not simply stooges of the multinational tech companies. The study of the future has been bought at a price; I don't see it recovering its independence.

Tuesday, July 5, 2016

Black Panther


Three issues into the Ta-Nehisi Coates/Brian Stelfreeze Black Panther and I'm struggling. Coates has a really interesting vision here but his lack of experience in writing comics is showing, I think. Stelfreeze's art is quite beautiful, but there are a great many panels that are somewhat difficult to read, visually, and panel-to-panel continuity is seriously lacking. Sometimes I look at a whole page and can't tell why the panels are in the order they are. And in this third issue especially the story seems to advance hardly at all. I'm thinking of bailing out. Anybody else want to encourage me to stick with it?

the memes of the Brexit post-mortems

I don't have any strong opinions about the Brexit decision. In general I’m in favor of functioning with the smallest possible political units; but I’m also aware that to leave the EU would be a huge step with unforeseeable consequences, which is something my conservative disposition also resists. So: no strong opinion about whether Brexit is right or wrong. But I am fascinated by the post-mortems, especially as an observer of the internet, because what the internet makes possible is the instantaneous coalescing of opinion.

So, just a few days after the referendum, intellectual Remainers already have an established explanation, a kind of Copenhagen interpretation of the events meant to yield a Standard Account: Brexiters, motivated by hatred and resentment, acted in complete disregard of facts. I feel that I’ve read a hundred variations on this blog post by Matthew Flinders already, though not all the others have warmed so openly to the idea of an “architecture of politics” meant to “enforce truthfulness.” (What should we call the primary instrument of that architecture? The Ministry of Truth, perhaps?)

I’m especially interested in Flinders’ endorsement and perpetuation of the idea that Brexit marks the victory of “post-truth politics.” This has very rapidly become a meme — and quite a meme — and one of the signs of how it functions is that Flinders doesn't cite anyone’s use of it. He’s not pretending to have coined the term; he’s just treating it as an explanatory given — to continue my physics analogy, something like Planck’s constant, useful to plug into your equation to make the numbers work out.

(By the way, I suspect the seed of the “post-truth politics” meme was planted by Stephen Colbert when he coined the term “truthiness”.)

The invocation of “post-truth politics” is very useful to people like Flinders because it allows him to conflate actual disregard of facts with disregard of economic predictions — you can see how those categories get mixed up in this otherwise useful fact-checking of claims by Brexiters. When that conflation happens, then you get to tar people who suspect economic and political forecasts with the same brush you use to tar people who disregard facts altogether and go with their gut — even though there are ample reasons to distrust economic and political forecasts, and indeed a kind of cottage industry in publishing devoted to explaining why so many forecasts are wrong.

There’s no question that many votes for Brexit were based on falsehoods or sheer ignorance. But when people who belong to the academic-expertise class suggest that all disagreement with their views may be chalked up to “post-truth politics” — and settle on that meme so quickly after they receive such a terrible setback to their hopes and plans — then it’s hard for me not to see the meme as defending against a threat to status. And that matters for academic life, and for the intellectual life more generally, because the instantaneous dominance of such a meme forecloses inquiry. There’s no need to look more closely at either your rhetoric or the substance of your beliefs if you already have a punchy phrase that explains it all.

Sunday, July 3, 2016

two apologies and a bleg

Apology One: I wrote a post a while back about hating time-travel stories, and almost immediately after I did so I started thinking of exceptions to that rule. I mean, I’ve been praising Adam Roberts’s The Thing Itself to the skies and it’s a time-travel story, though it’s also many other things. I thought of another example, and then another, and soon enough it became obvious to me that I don’t hate time-travel stories at all. I was just annoyed by one that I thought went wrong, largely because it reminded me of several others that I thought went wrong in very similar ways. So that was a classic case of rash blogging. I am truly sorry to writers and readers of time-travel stories, and I humbly repent and pledge amendment of life.

Apology Two: In a similarly fractious mood, I once wrote a screed against podcasts. But I have not given up on my search for good podcasts — in part because I think the medium has so much promise — and since I wrote that post I have listened to a whole bunch of them, and have developed affection for a few. So let me again repent of the extremity of my language and the coarseness of my reactions.

In another post, I’ll do some capsule reviews of the podcasts I’ve been listening to in the past year, but for now I have, as we academics say, a comment and a question.

The comment is that the one kind of podcast I absolutely cannot abide is the most common kind: two dudes talking. Or three dudes, or three women, or any combination of genders — it’s the chatting-in-front-of-a-microphone that drives me nuts. The other day I tried listening to Ctrl-Walt-Delete, but when Walt Mossberg and Nilay Patel spent the first five minutes discussing what the sports teams of the schools they had attended were called, I said Finis, done, I’m outta here. No, I like podcasts that are professionally edited, scripted, festooned with appropriate music, crafted into some kind of coherent presentation. Podcasts like that seem respectful to the listener, wanting to engage my attention and reward it.

But one thing I’ve noticed is that the podcasts I know that do that best are relentlessly liberal in their political and social orientation. Which is not surprising, given that most of our media are likewise liberal. And I don't even mean that as a criticism: there is a significant liberal element to my own political makeup, and if you want to know why that is, just listen to this episode of the Criminal podcast. Criminal in general is a good example of the kind of podcast I like, from its sound design and apt use of music to its strong storytelling. Even the website is artfully designed.

Which leads me to my Bleg: Does anyone know of similarly well-crafted, artful podcasts made by conservatives or Christians? I have not yet found a single one. Podcasts by conservatives and Christians tend to be either bare-bones — two dudes talking, or one dude talking with maybe a brief musical intro and outro — or schmaltzily over-produced. (Just Christians in that second category.) Anyone know of any exceptions to this judgment? I suspect that there’s an unbridgeable gulf of style here, but I’d like to be proved wrong.

UPDATE: Despite the quite clear statements I make above to the effect that (a) I really, really dislike dudes-talking podcasts and (b) I am not asking about dudes-talking podcasts but about professionally produced podcasts, people keep writing on Twitter and by email to say "Hey, here's a dudes-talking podcast that you might like." Sigh.

Saturday, July 2, 2016

the unbought grace of the human self

Returning to Edward Mendelson’s essay “In the Depths of the Digital Age”: an essay about technology, some might say, as this is a blog about technology. But we’re not talking today about “technology” tout court; we’re talking about digital technologies, and more specifically digital communications technologies, and more specifically yet internet-connected digital communications technologies, and even more specifically — let’s get to the heart of the matter — that very recent variety of internet-connected digital communications technology that offers free “services” purported to connect us with one another, services whose makers read, sift, and sell the data that we provide them when we use their software.

The question that Mendelson most forcefully presses on us is: What selves are made by submission to these technologies?

The explicit common theme of these books [under review] is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects — what William James called the “material self” — has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

Mendelson does not say that this shift is simply bad; he writes of gains and losses. And his essay does not have a thesis as such. But I think there is one not-directly-stated idea that dominates his reflections on these books: Neither users of nor commentators on these social-media technologies have an adequate intellectual or moral vocabulary to assess the massive changes in selfhood that we have opened ourselves to. The authors of the books Mendelson reviews either openly confess their confusion or try to hide that confusion with patently inadequate conceptual schemes.

But if even the (self-proclaimed) expert authorities are floundering in this brave new world, what can we do to think better about what’s happening to always-connected, always-surveilled, always-signalling, always-assessing selves? One possibility: read some fiction.

I have sometimes suggested — see here, here, and here — that Thomas Pynchon ought to be a central figure for anyone who wants to achieve some insight and clarity on these matters. And lo and behold, this from Mendelson:

In Thomas Pynchon’s Gravity’s Rainbow (1973), an engineer named Kurt Mondaugen enunciates a law of human existence: “Personal density … is directly proportional to temporal bandwidth.” The narrator explains:

“Temporal bandwidth” is the width of your present, your now…. The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.

The genius of Mondaugen’s Law is its understanding that the unmeasurable moral aspects of life are as subject to necessity as are the measurable physical ones; that unmeasurable necessity, in Wittgenstein’s phrase about ethics, is “a condition of the world, like logic.” You cannot reduce your engagement with the past and future without diminishing yourself, without becoming “more tenuous.”


And Mendelson suggests that we use this notion of “temporal bandwidth” to think about how investments in social media alter our experience of time — and especially our relationship to the future.

Another example: Virginia Woolf is cited five times in this essay, perhaps surprisingly — what does Virginia Woolf have to do with technology? But — I’m not just a friend of Mendelson’s but also a pretty careful reader of his work — I have noticed that as we have gotten deeper into our current socially-digital age Woolf’s fiction has loomed larger and larger in Mendelson’s thinking. Mrs Dalloway is the subject of the wonderful final chapter of Mendelson’s The Things That Matter — a superb book I reviewed here — and that chapter would make a fine primer for the shaping of a model of selfhood adequate to the world, and able to stand up to models that are reductive and simplistic enough to be bought and sold in the marketplace.

One might not think that Pynchon and Woolf have much in common — but Mendelson thinks they do, and thinks that the visions and portrayals of selfhood they provide are profoundly useful correctives to the ones we’re being sold every day. I’ll close this post with a quotation from a brief essay by Mendelson in which he makes the link between the two writers explicit:

Like all of Virginia Woolf’s novels and, despite their misplaced reputation for high-tech cleverness, all of Thomas Pynchon’s novels, including his latest one, both books point toward the kind of knowledge of the inner life that only poems and novels can convey, a knowledge that eludes all other techniques of understanding, and that the bureaucratic and collective world disdains or ignores. Yet for anyone who has ever known, even in a crowded room, the solitude and darkness that Clarissa [Dalloway] and Oedipa [Maas] enter for a few moments, that experience, however brief and elusive, is “another mode of meaning behind the obvious” and, however obscured behind corruption, lies, and chatter, “a thing there was that mattered.”

Thursday, June 30, 2016

On Sleep

For the last month, almost every night, I have listened to Max Richter’s Sleep. I have some things to say about it:

  • It amounts to more than eight hours of music.
  • It consists of 31 sections, ranging in length from 2:46 to 33:46. Only seven of the sections are shorter than ten minutes.
  • The music is made by voices, strings, and keyboard instruments (some of which are electronic).
  • I think I have listened to it all, but I am not sure. I have played it mostly in bed, though sometimes at my computer as I write. In bed I have drifted in and out of sleep while listening. I think I have listened to some sections several times, others no more than once, but I cannot be sure.
  • Sleep is dominated by three musical themes, one played on the piano, one played on the violin, and one sung by a soprano voice. (Though other music happens also.) One way to characterize Sleep is as a series of themes with variations.
  • The piano theme is the most restful, mimicking most closely the rhythms of the body breathing; the violin melody is the most beautiful; the vocal melody is the most haunting. (Also, when it appears while I am sleeping, or near sleep, it wakes me up.)
  • I could tell you which of the sections presents the violin melody most fully and most gorgeously, but then you might listen to that section on its own rather than in its context. I do not wish to encourage shortcuts in this matter.
  • It is said that the music of Arvo Pärt is especially consoling to the dying; I think this may prove true of Sleep as well. There is a very good chance that, should I die slowly, I will listen to Sleep regularly, perhaps even exclusively.
  • Sleep is the half-brother of death.
  • The number three plays a large role in these pieces: the time signatures vary a good deal, but a good many of them come in units of three. Also, at least one section — maybe more; it’s so hard to be sure — features a bell-like tone that rings every thirteen beats.
  • If you have a very good pair of headphones, that's how you should listen to this music. If you're listening on, for instance, Apple's earbuds, you'll miss a great deal of wonderful stuff going on in the lower registers. 
  • The musical materials of Sleep are deceptively simple: Richter is not by the standards of contemporary music tonally adventurous, yet he manages to create a remarkable variety of treatments of his simple themes. The power of the music grows with repetition, with variation, with further repetition. This is yet another reason why sampling this composition will not yield an experience adequate to its design.
  • Since I started listening to Sleep I have thought a good deal about sleep and what happens within it. As Joyce insisted in Finnegans Wake and in his comments on the book when it was still known as Work in Progress, we have no direct access to the world of sleep. All we have is our memories of dreams, and these may well be deeply misleading: “mummery,” Joyce says, “maimeries.” And even dreams are not sleep tout court. A third of our lives is effectively inaccessible to us.
  • Listening to Sleep is, I think, one of the most important aesthetic experiences of my life, but I do not have any categories with which to explain why — either to you or to myself.

Wednesday, June 29, 2016

Mendelson's undead

I want to devote several posts, in the coming days, to this essay by Edward Mendelson. I should begin by saying that Edward is a good friend of mine and someone for whom I have the deepest respect — which will not keep me from disagreeing with him sometimes. It’s also important to note that his position in relation to current communications technologies can’t be easily categorized: in addition to being the Lionel Trilling Professor of the Humanities at Columbia University and the literary executor of the poet W. H. Auden, he has been a contributing editor for PC magazine since 1988 (!), writing there most recently about the brand-new file system of the upcoming MacOS Sierra. He also does stuff like this in his spare time. (I’m going to call him “Mendelson” in what follows for professionalism’s sake.)

That, in the essay-review that I want to discuss, Mendelson’s attitude towards social-media technology is sometimes quite critical is in no way inconsistent with his technological knowledge and interests. Perhaps this doesn’t need to be said, but I have noticed over the years that people can be quite surprised when a bona fide technologist — Jaron Lanier, for example — is fiercely critical of current trends in Silicon Valley. They shouldn’t be surprised: people like Lanier (and, in his own serious amateur way, Mendelson) learned to use computers at a time when getting anything done on such a machine required at least basic programming skills and a significant investment of time. The DIY character of early computing has almost nothing in common with the culture generated by today’s digital black boxes, in which people can think of themselves as “power users” while having not the first idea how the machine they’re holding works. (You can’t even catch a glimpse of the iOS file system without special tools that Apple would prefer you not to know about.)

Anyway, here’s the passage that announces what Mendelson is primarily concerned to reflect on:

Many probing and intelligent books have recently helped to make sense of psychological life in the digital age. Some of these analyze the unprecedented levels of surveillance of ordinary citizens, others the unprecedented collective choice of those citizens, especially younger ones, to expose their lives on social media; some explore the moods and emotions performed and observed on social networks, or celebrate the Internet as a vast aesthetic and commercial spectacle, even as a focus of spiritual awe, or decry the sudden expansion and acceleration of bureaucratic control.

The explicit common theme of these books is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects—what William James called the “material self”—has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

So that’s the big picture. We shall return to it. But for now I want to focus on something in Mendelson’s analysis that I question — in part out of perverse contrarianism, and in part because I have recently been spending a lot of time with a smartphone. Mendelson writes,

Dante, always our contemporary, portrays the circle of the Neutrals, those who used their lives neither for good nor for evil, as a crowd following a banner around the upper circle of Hell, stung by wasps and hornets. Today the Neutrals each follow a screen they hold before them, stung by buzzing notifications. In popular culture, the zombie apocalypse is now the favored fantasy of disaster in horror movies set in the near future because it has already been prefigured in reality: the undead lurch through the streets, each staring blankly at a screen.

In response to this vivid metaphor, let me propose a thought experiment: suppose there were no smartphones, and you were walking down the streets of a city, and the people around you were still looking down — but rather than at a screen, at letters from loved ones, and colorful postcards sent by friends from exotic locales? How would you describe such a scene? Would you think of those people as the lurching undead?

I suspect not. But why not? What’s the difference between seeing communications from people we know on paper that came through the mail and seeing them on a backlit glass screen? If we were to walk down the street of a city and watch someone tear open an envelope and read the contents, looking down, oblivious to her surroundings, why would we perceive that scene in ways so unlike the ways we perceive people looking with equal intensity at the screens of their phones? Why do those two experiences, for so many of us as observers and as participants, have such radically different valences?

I leave these questions as exercises for the reader.

Tuesday, June 28, 2016

the sources of technological solutionism

If you’re looking for case studies in technological solutionism — well, first of all, you won't have to look long. But try these two on for size:

  1. How Soylent and Oculus Could Fix the Prison System
  2. New Cities

That second one, which is all about how techies are going to fix cities, is especially great, asking the really Key Questions: “What should a city optimize for? How should we measure the effectiveness of a city (what are its KPIs)?”

The best account of this rhetoric and its underlying assumptions I have yet seen appeared just yesterday, when Maciej Ceglowski posted the text of a talk he gave on the moral economy of tech:

As computer programmers, our formative intellectual experience is working with deterministic systems that have been designed by other human beings. These can be very complex, but the complexity is not the kind we find in the natural world. It is ultimately always tractable. Find the right abstractions, and the puzzle box opens before you.

The feeling of competence, control and delight in discovering a clever twist that solves a difficult problem is what makes being a computer programmer sometimes enjoyable.

But as anyone who's worked with tech people knows, this intellectual background can also lead to arrogance. People who excel at software design become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.

Today we are embarked on a great project to make computers a part of everyday life. As Marc Andreessen memorably frames it, "software is eating the world". And those of us writing the software expect to be greeted as liberators.

Our intentions are simple and clear. First we will instrument, then we will analyze, then we will optimize. And you will thank us.

But the real world is a stubborn place. It is complex in ways that resist abstraction and modeling. It notices and reacts to our attempts to affect it. Nor can we hope to examine it objectively from the outside, any more than we can step out of our own skin.

The connected world we're building may resemble a computer system, but really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it. And it has the same old problems.

Approaching the world as a software problem is a category error that has led us into some terrible habits of mind.

I almost quoted the whole thing. Please read it — and, perhaps, read it in conjunction with another essay I referred to recently, about the just plain wrongness of believing that the brain is a computer. Ask a software engineer for solutions to non-software problems, and you’ll get answers that might work brilliantly ... if the world were software.

Monday, June 27, 2016

myths we can't help living by

One reason the technological history of modernity is a story worth telling: the power of science and technology to provide what the philosopher Mary Midgley calls “myths we live by”. For instance, Midgley writes,

Myths are not lies. Nor are they detached stories. They are imaginative patterns, networks of powerful symbols that suggest particular ways of interpreting the world. They shape its meaning. For instance, machine imagery, which began to pervade our thought in the seventeenth century, is still potent today. We still often tend to see ourselves, and the living things around us, as pieces of clockwork: items of a kind that we ourselves could make, and might decide to remake if it suits us better. Hence the confident language of ‘genetic engineering’ and ‘the building-blocks of life’.

Again, the reductive, atomistic picture of explanation, which suggests that the right way to understand complex wholes is always to break them down into their smallest parts, leads us to think that truth is always revealed at the end of that other seventeenth-century invention, the microscope. Where microscopes dominate our imagination, we feel that the large wholes we deal with in everyday experience are mere appearances. Only the particles revealed at the bottom of the microscope are real. Thus, to an extent unknown in earlier times, our dominant technology shapes our symbolism and thereby our metaphysics, our view about what is real.

This is why I continue to protest against the view which, proclaiming that “ideas have consequences,” goes on to ignore the material and technological things that press with great force upon our ideas. Consider, for instance, the almost incredible influence that computers have upon our understanding of the human brain, even though the brain does not process information and is most definitely not in any way a computer. The metaphor is almost impossible for neuroscientists to escape; they cannot, generally speaking, even recognize it as a metaphor.

If we can even begin to grasp the power of such metaphors and myths, we can understand why a technological history of modernity is so needful.

Sunday, June 26, 2016

more on speed

A bit of a follow-up to this post, and to brutus’s comment on it (which you should read) as well: My friend Matt Frost commented that Jeff Guo is the “bizarro Alan Jacobs,” which is true in a way. Guo clearly thinks that his problem is that there’s not enough new content and he can’t consume it fast enough, whereas I have argued on many occasions for slower reading, slower thinking, re-reading and re-viewing....

And yet. I’ve watched movies the way Guo watches them, too; in fact, I’ve done it many times. And I’ve read books — even novels — in a similar way, skimming large chunks. So I’m anything but a stranger to the impulse Guo has elevated to a principle. But here’s the thing: Whenever we do that we’re thereby demonstrating a fundamental lack of respect for the work we’re skimming. We are refusing to allow it the kind and amount of attention it requests. So if — to take an example from my previous post — you watch Into Great Silence at double speed you’re refusing the principle on which that film is built. When you decide to read Infinite Jest but skip all the conversations between Marathe and Steeply because you find them boring you’re refusing the fundamental logic of the book, which, among other things, offers a profound meditation on boredom and its ever-ramifying effects on our experiences.

I think we do this kind of thing when we don’t really want to read or view, but to have read and have viewed — when more than watching Into Great Silence or reading Infinite Jest we want to be able to say “Yeah, I’ve seen Into Great Silence” and “Sure, I’ve read Infinite Jest.” It’s a matter of doing just enough that we can convince ourselves that we’re not lying when we say that. But you know, Wikipedia + lying is a lot easier. Just saying.

Aside from any actual dishonesty, I don’t think there’s anything wrong with viewing or reading on speed. But it’s important to know what you’re doing — and what you’re not doing: what impulses you’re obeying and what possibilities you’re refusing. Frank Kermode, in a brilliant reflection that I quote here, speaks of a threefold aesthetic and critical sequence: submission, recovery, comment. But if you won’t submit to the logic and imagination of the work in question, there’ll be nothing to recover from, and you’ll have no worthwhile comment to make.

All of which may prompt us to think about how much it matters in any given case, which will be determined by the purpose and quality of the work in question. Scrub through all of The Hangover you want, watch the funny parts several times, whatever. It doesn’t matter. But if you’re watching Mulholland Drive (one of Guo’s favorite movies, he says) and you’re refusing the complex and sophisticated art that went into its pacing, well, it matters a little more. And if you’re scrubbing your way through ambitious and comprehensively imagined works of art, then you really ought to rethink your life choices.

Friday, June 24, 2016

this is your TV on speed

Jeff Guo watches TV shows really fast and thinks he's pretty darn cool for doing so.

I recently described my viewing habits to Mary Sweeney, the editor on the cerebral cult classic "Mulholland Drive." She laughed in horror. “Everything you just said is just anathema to a film editor,” she said. “If you don't have respect for how something was edited, then try editing some time! It's very hard."

Sweeney, who is also a professor at the University of Southern California, believes in the privilege of the auteur. She told me a story about how they removed all the chapter breaks from the DVD version of Mulholland Drive to preserve the director’s vision. “The film, which took two years to make, was meant to be experienced from beginning to end as one piece,” she said.

I disagree. Mulholland Drive is one of my favorite films, but it's intentionally dreamlike and incomprehensible at times. The DVD version even included clues from director David Lynch to help people baffled by the plot. I advise first-time viewers to watch with a remote in hand to ward off disorientation. Liberal use of the fast-forward and rewind buttons allows people to draw connections between different sections of the film.

Question: How do you draw connections between sections of the film you fast-forwarded through?

Another question: What would Into Great Silence be like if you took only 45 minutes to watch it?

A third question: Might there be a difference — an experiential difference, and even an aesthetically qualitative difference — between remixing and re-editing and creating montages of works you've first experienced at their own pace and, conversely, doing the same with works you've never had the patience to sit through?

And a final suggestion for Jeff Guo: Never visit the Camiroi.

Thursday, June 23, 2016

travel and the lure of the smartphone

Alan Turing’s notion of a “universal machine” is the founding insight of the computer revolution, and today’s smartphones are the fullest embodiment of that idea we’ve yet realized, which is what makes them irresistible to so many of us. Many of us, I suppose, have at times made a mental list of the devices we once owned that have been replaced by smartphones: calculators, clocks, cameras, maps, newspapers, music players, tape recorders, notepads....

Earlier this year I described my return to a dumbphone and the many advantages accruing to me therefrom, but as my recent trip to London and Rome drew closer, I started to sweat. Could I maintain on my travels my righteous technological simplification? This was a particular worry of mine because I am also a packing minimalist: I have spent whole summers abroad living out of a backpack and a smallish suitcase. Maps wouldn’t add much weight, but a camera would be more significant; and then I’d need either to carry my backpack everywhere I went, to hold the camera, or else take a dedicated camera bag. Moreover, my wife was not going on this trip, and I wanted to stay in touch with her, especially by sending photos of the various places I visited — and to do so immediately, not once I returned.

As the day of departure drew nearer, that desire to maintain the fullest possible contact with my beloved loomed larger in my mind. This reminded me that I had recently spoken and written about the relationship between distraction and addiction:

If you ask a random selection of people why we’re all so distracted these days — so constantly in a state of what a researcher for Microsoft, Linda Stone, has called “continuous partial attention” — you’ll get a somewhat different answer than you would have gotten thirty years ago. Then it would have been “Because we are addicted to television.” Fifteen years ago it would have been, “Because we are addicted to the Internet.” But now it’s “Because we are addicted to our smartphones.”

All of these answers are both right and wrong. They’re right in one really important way: they link distraction with addiction. But they’re wrong in an even more important way: we are not addicted to any of our machines. Those are just contraptions made up of silicon chips, plastic, metal, glass. None of those, even when combined into complex and sometimes beautiful devices, are things that human beings can become addicted to.

Then what are we addicted to? … We are addicted to one another, to the affirmation of our value — our very being — that comes from other human beings. We are addicted to being validated by our peers.

Was my reluctance to be separated from my wife an example of this tendency? I’d like to think it’s something rather different: not an addiction to validation from peers, but a long-standing dependence on intimacy with my life partner. But my experience is certainly on the same continuum with the sometimes pathological need for validation that I worried over in that essay. So while I think that my need to stay in touch with Teri is healthier than the sometimes desperate desire to be approved by one’s peer group, they have this in common: they remind us how much our technologies of communication are not substitutes for human interaction but enormously sophisticated means of facilitating it.

A camera would have added some weight to my backpack, but not all that much. Packing minimalism played a role in my decision to pop the SIM card out of my dumbphone and dig my iPhone out of a drawer — note that I had never sold it or given it away! I was too cowardly for that — and use it on my trip as camera and communicator (iMessage and Twitter) and news-reader and universal map and restaurant-discovery vehicle and step-counter and.... But it wasn’t the decisive thing.

I do wonder how the trip might have been different if I had maintained my resolve. I certainly could’ve gotten some better photos if I had brought my camera, especially if I had also carried my long lens. (Smartphones have wide-angle lenses, which are great in many circumstances but very frustrating in others.) Maybe I would’ve sent Teri cards and letters instead of text messages, and she’d have keepsakes that our grandchildren could someday see. (Somehow I doubt that our grandchildren will be able to browse through my Instagram page.) And then I’d have uploaded all my photos when I got home and we’d have sat down to go through them all at once. But that’s not how it went.

Well, so it goes. I’ve been back for two days now, and probably should get out the dumbphone and switch my SIM card back into it. I’m sure I’ll do that soon. Very soon. Any day now.

Friday, June 17, 2016

some things about The Thing Itself

I had a really wonderful time in Cambridge the other night talking with Adam Roberts, Francis Spufford, and Rowan Williams about Adam’s novel The Thing Itself and related matters. But it turns out that there are a great many related matters, so since we parted I can’t stop thinking about all the issues I wish we had had time to explore. So I’m going to list a few thoughts here, in no particular order, and in undeveloped form. There may be fodder here for later reflections.

  • We all embarrassed Adam by praising his book, but after having re-read it in preparation for this event I am all the more convinced that it is a superb achievement and worthy of winning any book prize that it is eligible for (including the Campbell Award, for which it has just been nominated).
  • But even having just re-read it, and despite being (if I do say so myself) a relatively acute reader, I missed a lot. Adam explained the other night a few of the ways the novel’s structure corresponds to the twelve books of the Aeneid, which as it happens he and I have just been talking about, and now that I’ve been alerted to the possible parallels I see several others. And they’re genuinely fascinating.
  • Suppose scientists were to build a computer that, in their view, achieved genuine intelligence, an intelligence that by any measure we have is beyond ours, and that computer said, “There is something beyond space and time that conditions space and time. Not something within what we call being but the very Ground of Being itself. One might call it God.” What would happen then? Would our scientists say, “Hmmm. Maybe we had better rethink this whole atheism business”? Or would they say, “All programs have bugs, of course, and we’ll fix this one in the next iteration”?
  • Suppose that scientists came to believe that the AI is at the very least trustworthy, if not necessarily infallible, and that its announcement should be taken seriously. Suppose that the AI went on to say, “This Ground of Being is neither inert nor passive: it is comprehensively active throughout the known universe(es), and the mode of that activity is best described as Love.” What would we do with that news? Would there be some way to tease out from the AI what it thinks Love is? Might we ever be confident that a machine’s understanding of that concept, even if the machine were programmed by human beings, is congruent with our own?
  • Suppose the machine were then to say, “It might be possible for you to have some kind of encounter with this Ground of Being, not unmediated because no encounter, no perception, can ever be unmediated, but more direct than you are used to. However, such an encounter, by exceeding the tolerances within which your perceptual and cognitive apparatus operates, would certainly be profoundly disorienting, would probably be overwhelmingly painful, would possibly cause permanent damage to some elements of your operating system, and might even kill you.” How many people would say, “I’ll take the risk”? And what would their reasons be?
  • Suppose that people who think about these things came generally to agree that the AI is right, that Das Ding an Sich really exists (though “exists” is an imprecise and misleadingly weak word) and that the mode of its infinitely disseminated activity is indeed best described as Love — how might that affect how people think about Jesus of Nazareth, who claimed (or, if you prefer, is said by the Christian church to claim) a unique identification with the Father, that is to say, God, that is to say, the Ground of Being, The Thing Itself?

Thursday, June 9, 2016

why blog?

The chief reason I blog is to create a kind of accountability to my own reading and thinking. Blogging is a way of thinking out loud and in public, which also means that people can respond — and often those responses are helpful in shaping further thoughts.

But even if I got no responses, putting my ideas out here would still be worthwhile, because it’s a venue in which there is no expectation of polish or completeness. Sometimes a given post, or set of posts, can prove to be a dead end: that’s what happened, I think, with the Dialogue on Democracy I did over at The American Conservative. I wanted to think through some issues but I don't believe I really accomplished anything, for me or for others. But that’s all right. It was worth a try. And perhaps that dead end ended up leading me to the more fruitful explorations of the deep roots of our politics, and their relation to our technological society, that I’ve been pursuing here in the last couple of weeks.

As I have explained several times, over the long haul I want to pursue a technological history of modernity. But I have two books to write before I can even give serious consideration to that project. Nevertheless, I can try out the occasional random idea here, and as I do that over the next couple of years, who knows what might emerge? Possibly nothing of value; but possibly something essential to the project. Time will tell.

I’ve been blogging a lot lately because I had a chunk of free-ish time between the end of the Spring semester and the beginning of a long period of full-time book writing. I’m marking that transition by taking ten days for research (but also for fun) in England and Italy, so there will be no blogging for a while. And then when I return my activity will be sporadic. But bit by bit and piece by piece I’ll be building something here.

Wednesday, June 8, 2016

the Roman world and ours, continued

To pick up where I left off last time:

Imagine that you are a historian in the far future: say, a hundred thousand years from now. Isn't it perfectly possible that from that vantage point the rise of the United States as a global power might be seen primarily as a development in the history of the Roman Empire? To you, future historian, events from the great influence of Addison’s Cato upon the American Revolution to the Marshall Plan (parcere subiectis, debellare superbos) to the palpable Caesarism of Trump are not best understood as analogies to Roman history but as stages within it — as the history of the British Empire (Pax Britannica) had been before us: Romanitas merely extended a bit in time and space. We know that various nations and empires have seen themselves as successors to Rome: Constantinople as the Second Rome, Moscow as the Third, the just-mentioned Pax Britannica and even the Pax Americana that followed it. In such a case, to know little or nothing about the history of Rome is to be rendered helpless to understand — truly to understand — our own moment.

A possible chapter title from a far-future history textbook: “The Beginnings of the Roman Empire: 31 B.C.E. to 5000 C.E.”

Self-centered person that I am, I find myself thinking about all this in relation to what I’ve been calling the technological history of modernity. And Cochrane’s argument — along with that of Larry Siedentop, which I mentioned in my previous post on this subject — pushes me further in that direction than I’d ever be likely to go on my own.

In the Preface to his book, Cochrane makes the summary comment that “the history of Graeco-Roman Christianity” is largely the history of a critique: a critique of the idea, implicit in certain classical notions of the commonwealth but made explicit by Caesar Augustus, “that it was possible to attain a goal of permanent security, peace and freedom through political action, especially through submission to the ‘virtue and fortune’ of a political leader.” Another way to put this (and Cochrane explores some of these implications) is to say that classical political theory is devoted to seeing the polis, and later the patria and later still the imperium, as the means by which certain philosophical problems of human experience and action are to be solved. The political theory of the Greco-Roman world, on this account, is doing the same thing that the Stoics and Epicureans were doing in their own ways: developing a set of techniques by which human suffering might be alleviated, human anxieties quelled, and human flourishing promoted. That political theory is therefore best understood as closely related to what Foucault called “technologies of the self” and to what Martha Nussbaum has described as the essentially therapeutic character of Hellenistic ethics. The political structures of the Roman Empire — including literal structures like aqueducts and roads, and organizational ones like the cursus publicus — should therefore be seen as pointing ultimately towards a healing not only of the state but of the persons who comprise it. (Here again Siedentop’s history of the legal notions of personhood, and the relations of persons to families and communities, is vital.)

And if all this is right, then the technological history of modernity may be said to begin not with the invention of the printing press but in the ancient world — which in a very real sense, according to the logic of “great time,” we may be said to inhabit still.

Tuesday, June 7, 2016

with sincere thanks

I get quite a few unsolicited emails from people who want me to do something for them, and many of those emails end with "Thank you," "Thank you for your time," "Thanks for your attention," and so on. It has never occurred to me to think that such people were doing anything inappropriate; in fact, it just seemed to me that they were being polite.

But every now and then on Twitter I discover that some people are enraged by this little quirk of manners. I don't get it. What are you supposed to say when you write to ask someone for something? They've read your email, they didn't have to — why not thank them for doing so? 

The one complaint I understand involves the phrase "thank you in advance" — which seems to presume that the addressee will do the thing that the writer has requested. But even then, it doesn't strike me as anything to make a big deal out of.

Can anyone who is offended by being thanked in these ways explain to me why? Thank you in advance for your help.

the Roman world and ours

So why am I reading about — I’m gonna coin a phrase here — the decline and fall of the Roman Empire? It started as part of my work on Auden.

I first learned about Charles Norris Cochrane’s Christianity and Classical Culture from reading Auden’s review of it, published in The New Republic in 1944. Auden began that review by saying that in the years since the book appeared (it was first published in 1940) “I have read this book many times, and my conviction of its importance to the understanding not only of the epoch with which it is concerned, but also of our own, has increased with each rereading.” I thought: Well, now, that’s rather remarkable. I figured it was a book I had better read too.

Auden concludes his review with these words:

Our period is not so unlike the age of Augustine: the planned society, caesarism of thugs or bureaucracies, paideia, scientia, religious persecution, are all with us. Nor is there even lacking the possibility of a new Constantinism; letters have already begun to appear in the press, recommending religious instruction in schools as a cure for juvenile delinquency; Mr. Cochrane’s terrifying description of the “Christian” empire under Theodosius should discourage such hopes of using Christianity as a spiritual benzedrine for the earthly city.

That metaphor — "spiritual benzedrine for the earthly city" — is brilliantly suggestive. (And Auden knew all about benzedrine.)

More than twenty years later, in a long essay on the fall of Rome that was never published for reasons Edward Mendelson explains here, Auden wrote:

I think a great many of us are haunted by the feeling that our society, and by ours I don’t mean just the United States or Europe, but our whole world-wide technological civilisation, whether officially labelled capitalist, socialist or communist, is going to go smash, and probably deserves to.

Like the third century the twentieth is an age of stress and anxiety. In our case, it is not that our techniques are too primitive to cope with new problems, but the very fantastic success of our technology is creating a hideous, noisy, over-crowded world in which it is becoming increasingly difficult to lead a human life. In our reactions to this, one can see many parallels to the third century. Instead of Gnostics, we have existentialists and God-is-dead theologians, instead of neoplatonists, devotees of Zen, instead of desert hermits, heroin addicts and beats … instead of mortification of the flesh, sado-masochistic pornography; as for our public entertainments, the fare offered by television is still a shade less brutal and vulgar than that provided by the amphitheater, but only a shade, and may not be for long.

And then the comically dyspeptic conclusion: “I have no idea what is actually going to happen before I die except that I am not going to like it.” (For those interested, the unpublished essay may be found in this collection.)

Clearly for Auden, the story Cochrane tells was one that had lasting relevance. Elements of Cochrane’s narrative turn up, in much more complex form than in the late-career bleat just quoted, for decades in Auden’s poetry: “The Fall of Rome,” “Memorial for the City,” “Under Sirius,” “Secondary Epic,” and many other poems bear Cochrane’s mark. As I mentioned in my earlier post, I’m now reading Christianity and Classical Culture for the fourth time, and it really is impossible for me, too, not to see the Roman world as a distant mirror of our own. How can I read this passage about the rise of Julius Caesar and not think of Donald Trump?

In the light of these ancient concepts, Caesar emerges as a figure at once fascinating and dangerous. For the spirit thus depicted is one of sublime egotism; in which the libido dominandi asserts itself to the exclusion of all possible alternatives and crushes every obstacle in its path. We have spoken of Caesar as a divisive force. That, indeed, he was: as Cato had put it, “he was the only one of the revolutionaries to undertake, cold-sober, the subversion of the republic”; … A force like this, however, does more than divide, it destroys. Hostile to all claims of independence except its own, it is wholly incompatible with that effective equality which is implied in the classical idea of the commonwealth. To admit it within the community is thus to nourish the lion, whose reply to the hares in the assembly of beasts was to ask: Where are your claws?

And how can I read about this extension of the Emperor’s powers and not reflect on the recent hypertrophy of the executive branch of American government?

The powers and duties assigned to the emperor were broad and comprehensive. They were, moreover, rapidly enlarged as functions traditionally attached to republican magistracies were transferred one after another to the new executive, and executive action invaded fields which, under the former system, had been consecrated to senatorial or popular control. Finally, by virtue of specific provisions, the substance of which is indicated in the maxim princeps legibus solutus, the emperor was freed from constitutional limitations which might have paralyzed his freedom of action; while his personal protection was assured through the grant of tribunician inviolability (sacrosanctitas) as well as by the sanctions of the Lex Maiestatis. The prerogative was thus built up by a series of concessions, made by the competent authority of senate and people, no single one of which was in theory unrepublican.

But the more I read Cochrane, the more I suspect that we may not be talking about mere mirroring, mere analogies. Last year, when I read and reviewed Larry Siedentop’s book Inventing the Individual, I was struck by Siedentop’s tracing of certain of our core ideas about selfhood to legal disputes that arose in the latter centuries of the Roman Empire and its immediate aftermath. And this led me in turn to think about an idea that Mikhail Bakhtin meditated on ceaselessly near the end of his life: great time. David Shepherd provides a thorough account of this idea here, but in short Bakhtin is trying to think about cultural developments that persist over centuries and even millennia, even when they have passed altogether from conscious awareness. Thus this staggering passage from one of his late notebooks:

The mutual understanding of centuries and millennia, of peoples, nations, and cultures, provides a complex unity of all humanity, all human cultures (a complex unity of human culture), and a complex unity of human literature. All this is revealed only on the level of great time. Each image must be understood and evaluated on the level of great time. Analysis usually fusses about in the narrow space of small time, that is, in the space of the present day and the recent past and the imaginable — desired or frightening — future.

And:

There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future). Even past meanings, that is, those born in the dialogue of past centuries, can never be stable (finalized, ended once and for all) — they will always change (be renewed) in the process of subsequent, future development of the dialogue. At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue’s subsequent development along the way they are recalled and invigorated in renewed form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival. The problem of great time.

If we were to take Bakhtin’s idea seriously, how might that affect our thinking about the Roman Empire as something more than a “distant mirror” of our own age? To think of our age, our world, as a functional extension of the Roman project?

I’ll take up those questions in another post.

Monday, June 6, 2016

synopsis of Cochrane's Christianity and Classical Culture

  • Augustus, by uniting virtue and fortune in himself (viii, 174), established "the final triumph of creative politics," solving "the problem of the classical commonwealth" (32).
  • For a Christian with Tertullian's view of things, the "deification of imperial virtue" that accompanied this "triumph" was sheer idolatry: Therefore Regnum Caesaris, Regnum Diaboli (124, 234). 
  • "The crisis of the third century ... marked ... an eclipse of the strictly classical ideal of virtue or excellence" (166), and left people wondering what to do if the Augustan solution were not a solution after all. What if there is "no intelligible relationship" between virtue and fortune (171)?
  • Christians had remained largely detached during the crisis of the third century, neither wanting Rome to collapse nor prone to being surprised if it did, since its eventual fall was inevitable anyway (195).
  • Then Constantine came along and "both professed and practiced a religion of success" (235), according to which Christianity was a "talisman" that ensured the renewal of Romanitas (236).
  • After some time and several reversals (most notably in the reign of Julian the Apostate) and occasional recoveries (for instance in the reign of Theodosius) it became clear that both the Constantinian project and the larger, encompassing project of Romanitas had failed (391).
  • Obviously this was in many ways a disaster, but there was some compensation: the profound impetus these vast cultural crises gave to Christian thought, whose best representatives (above all Augustine) understood that neither the simple denunciations of the social world of Tertullian nor Constantine's easy blending of divergent projects were politically, philosophically, or theologically adequate.
  • Thus the great edifice of the City of God, Cochrane's treatment of which concludes with a detailed analysis of the philosophy of history that emerges from Augustine's new account of human personality: see 502, 502, 536, 542, 567-69.
Just in case it's useful to someone. Those page numbers are from the Liberty Fund edition, which I ended up using for reasons I'll discuss in another post. 

Virgil and adversarial subtlety

So, back to Virgil ... (Sorry about the spelling, Professor Roberts.)

What do we know about Virgil’s reputation in his own time and soon thereafter? We know that Augustus Caesar brought the poet into his circle and understood the Aeneid to articulate his own vision for his regime. We know that the same educational system that celebrated the reign of Augustus as the perfection of the ideal of Romanitas also celebrated Virgil as the king of Roman poets, even in his own lifetime. Nicholas Horsfall shows how, soon after Virgil’s death, students throughout the Roman world worked doggedly through the Aeneid line by line, which helps to explain why the Virgilian graffiti at Pompeii come almost entirely from Books I and II. We know that Quintilian established study of Virgil as the foundational practice of literary study and that that establishment remained in place as long as Rome did, thus, centuries later, shaping the education of little African boys like Augustine of Hippo.

But, as my friend Edward Mendelson has pointed out to me in an email, when people talk about what “the average Roman reader” would have thought about Virgil, they have absolutely no evidence to support their claims. It may well be, as these critics usually say, that such a reader approved of the Empire and therefore approved of anything in the Aeneid that was conducive to the establishment of Empire ... but no one knows that. It’s just guesswork.

R. J. Tarrant has shown just how hard it is to pin down the details of Virgil’s social/political reputation. But it’s worth noting that, while the gods in the Aeneid insist that Dido must die for Rome to be founded, Augustine tells us in the Confessions that his primary emotional reaction when reading the poem was grief for the death of Dido. And Quintilian doesn't place Virgil at the center of his literary curriculum because he is the great advocate of Romanitas, but because he is the only Roman poet worthy to be compared with Homer. The poem exceeds whatever political place we might give it, and the readers of no culture are unanimous in their interests and priorities.

In a work that I’ve seen in draft form, so about which I won't say too much, Mendelson offers several reasons why we might think that Virgil is more critical of the imperial project, and perhaps even of Rome’s more general self-mythology, than Augustus thought, and than critics such as Cochrane think.

First, there is the point that Adam Roberts drew attention to in the comments on my previous post: the fact that Anchises tells Aeneas in Book VI that the vocation of Rome is not just to conquer the world but to “spare the defeated” (parcere subiectis) — yet this is precisely what Aeneas does not do when the defeated Turnus pleads for his life. I tried to say, in my own response to Adam, why I don't think that necessarily undoes the idea that Virgil and his poem are fundamentally supportive not just of Rome generally but of the necessity of Turnus’s death. But the contrast between Anchises’ claim about the Roman vocation and what Aeneas actually does is certainly troubling.

More troubling still is another passage Mendelson points to, perhaps the most notorious crux in all of classical literature and therefore something I should already have mentioned: the end of Book VI. After Anchises shows to Aeneas the great pageant of Rome’s future glories, Virgil writes (in Allen Mandelbaum’s translation):

There are two gates of Sleep: the one is said
to be of horn, through it an easy exit
is given to true Shades; the other is made
of polished ivory, perfect, glittering,
but through that way the Spirits send false dreams
into the world above. And here Anchises,
when he is done with words, accompanies
the Sibyl and his son together; and
he sends them through the gate of ivory.

(Emphasis mine.) The gate of ivory? Was that whole vision for the future then untrue? But it couldn't be: Anchises reveals people who really were to exist and events that really were to occur. Was the untruth then not the people and events themselves but the lovely imperial gloss, the shiny coating that Anchises paints on events that are in fact far uglier? Very possibly. But the passage is profoundly confusing.

I continue to believe that Virgil is fundamentally supportive of the imperial enterprise, for reasons I won't spell out in further detail here. (If I had time I would write at length about Aeneas’s shield.) But he was too great a poet and too wise a man not to know, and reveal, the costliness of that enterprise, and not just in the lives of people like Dido and Turnus. Perhaps he was even more concerned with the price the Roman character paid for Roman greatness: the gross damage Romanitas did to the consciences of its advocates and enforcers.

Another way to put this is to say that Virgil was a very shrewd reader of Homer, who was likewise clear-sighted about matters that most of us would prefer not to see clearly. One must also here think of Shakespeare. Take, for instance, Twelfth Night: the viewers’ delight in the unfolding of the comedy is subtly undermined by the treatment of Malvolio by some of the “good guys.” It seems that the joy that is in laughter can all too easily turn to cruelty. Yes, Malvolio is a pompous inflated prig, but still....

The best account I have ever read of the way great literature accepts and represents these “minority moods” — moods that account for elements of human reality that any given genre tends to downplay — was written by Northrop Frye, in his small masterpiece A Natural Perspective. That's his book about comedy, and the Aeneid is, structurally anyway, a kind of comedy, a story of human fellowship emerging from great suffering. Frye's excursus on genre and mood is one of the most eloquent (and important) passages in his whole oeuvre, and I’ll end by quoting from it:

If comedy concentrates on a uniformly cheerful mood, it tends to become farcical, depending on automatic stimulus and reflex of laughter. Structure, then, commands participation but not assent: it unites its audience as an audience, but allows for variety in response. If no variety of response is permitted, as in extreme forms of melodrama and farce, something is wrong: something is inhibiting the proper function of drama.... Hence both criticism and performance may spend a good deal of time on emphasizing the importance of minority moods. The notion that there is one right response which apprehends the whole play rightly is an illusion: correct response is always stock response, and is possible only when some kind of mental or physical reflex is appealed to.

The sense of festivity, which corresponds to pity in tragedy, is always present at the end of a romantic comedy. This takes the form of a party, usually a wedding, in which we feel, to some degree, participants. We are invited to the festivity and we put the best face we can on whatever feelings we may still have about the recent behavior of some of the characters, often including the bridegroom. In Shakespeare the new society is remarkably catholic in its tolerance; but there is always a part of us that remains a spectator, detached and observant, aware of other nuances and values. This sense of alienation, which in tragedy is terror, is almost bound to be represented by somebody or something in the play, and even if, like Shylock, he disappears in the fourth act, we never quite forget him. We seldom consciously feel identified with him, for he himself wants no such identification: we may even hate or despise him, but he is there, the eternal questioning Satan who is still not quite silenced by the vindication of Job.... Participation and detachment, sympathy and ridicule, sociability and isolation, are inseparable in the complex we call comedy, a complex that is begotten by the paradox of life itself, in which merely to exist is both to be part of something else and yet never to be a part of it, and in which all freedom and joy are inseparably a belonging and an escape.