Text Patterns - by Alan Jacobs

Saturday, July 30, 2016

my boilerplate letter to social media services

Hello,

Someone has signed up for your service using my email address. (And, interestingly, using this name.) Please delete my email address from your database.

The email I got welcoming me to your service came from a no-reply address, so I had to go to your website and dig around until I found a contact form. I see that you require me to give you my name as well as my email address, so you're demanding that I tell you things about myself I’d rather you not know because you aren't smart enough, or don't care enough, to include one simple step in your sign-up process: Confirm that this is your email address.

This neglect is both discourteous and stupid. It’s discourteous because it effectively allows anyone who wants to spam someone else to use your service as a quick-and-easy tool for doing so. It’s stupid because then anyone so victimized will tag anything that comes from you as spam, which will eventually lead to your whole company being identified as a spammer. You’ll all be sitting around in the office saying, between chugs of Soylent, “We keep ending up in Gmail's spam filters, what’s up with that? Those idiots.”

So, again, please delete my email address from your database. And please stop being a rude dumbass, like all the other rude dumbasses to whom I have to send this message, more frequently than most people would believe.

Most sincerely yours,

Alan Jacobs

Wednesday, July 27, 2016

on expertise

One of the most common refrains in the aftermath of the Brexit vote was that the British electorate had acted irrationally in rejecting the advice and ignoring the predictions of economic experts. But economic experts have a truly remarkable history of getting things wrong. And it turns out, as Daniel Kahneman explains in Thinking, Fast and Slow, that there is a close causal relationship between being an expert and getting things wrong:

People who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options. Even in the region they knew best, experts were not significantly better than nonspecialists. Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” [Philip] Tetlock writes. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of The New York Times in ‘reading’ emerging situations.” The more famous the forecaster, Tetlock discovered, the more flamboyant the forecasts. “Experts in demand,” he writes, “were more overconfident than their colleagues who eked out existences far from the limelight.”

So in what sense would it be rational to trust the predictions of experts? We all need to think more about what conditions produce better predictions — and what skills and virtues produce better predictors. Tetlock and Gardner have certainly made a start on that:

The humility required for good judgment is not self-doubt – the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes. This is true for fools and geniuses alike. So it’s quite possible to think highly of yourself and be intellectually humble. In fact, this combination can be wonderfully fruitful. Intellectual humility compels the careful reflection necessary for good judgment; confidence in one’s abilities inspires determined action....

What's especially interesting here is the emphasis not on knowledge but on character — what's needed is a certain kind of person, and especially the kind of person who is humble.

Now ask yourself this: Where does our society teach, or even promote, humility?

Monday, July 25, 2016

some thoughts on the humanities

I can't say too much about this right now, but I have been working with some very smart people on a kind of State of the Humanities document — and yes, I know there are hundreds of those, but ours differs from the others by being really good.

In the process of drafting a document, I wrote a section that ... well, it got cut. I'm not bitter about that, I am not at all bitter about that. But I'm going to post it here. (It is, I should emphasize, just a draft and I may want to revise and expand it later.)



Nearly fifty years ago, George Steiner wrote of the peculiar character of intellectual life “in a post-condition” — the perceived sense of living in the vague aftermath of structures and beliefs that can never be restored. Such a condition is often proclaimed as liberating, but at least equally often it is experienced as (in Matthew Arnold's words) a suspension between two worlds, “one dead, / The other powerless to be born.” In the decades since Steiner wrote, humanistic study has been more and more completely understood as something we do from within such a post-condition.

But the humanities cannot be pursued and practiced with any integrity if these feelings of belatedness are merely accepted, without critical reflection and interrogation. In part this is because, whatever else humanistic study is, it is necessarily critical and inquiring in whatever subject it takes up; but also because humanistic study has always been and must always be willing to let the past speak to the present, as well as the present to the past. The work, the life, of the humanities may be summed up in an image from Kenneth Burke’s The Philosophy of Literary Form (1941):

Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to pause and tell you exactly what it is about. In fact, the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for a while, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally’s assistance. However, the discussion is interminable. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress.

It is from this ‘unending conversation’ that the materials of your drama arise.

It is in this spirit that scholars of the humanities need to take up the claims that our moment is characterized by what it has left behind — the conceptual schemes, or ideologies, or épistémès, to which it is thought to be “post.” In order to grasp the challenges and opportunities of the present moment, three facets of our post-condition need to be addressed: the postmodern, the posthuman, and the postsecular.

Among these terms, postmodern was the first-coined, and was so overused for decades that it now seems hoary with age. But it is the concept that lays the foundation for the others. To be postmodern, according to the most widely shared account, is to live in the aftermath of the collapse of a great narrative, one that began in the period that used to be linked with the Renaissance and Reformation but is now typically called the “early modern.” The early modern — we are told, with varying stresses and tones, by a host of books and thinkers from Foucault’s Les Mots et les choses (1966) to Stephen Greenblatt’s The Swerve (2011) — marks the first emergence of Man, the free-standing, liberated, sovereign subject, on a path of self-emancipation (from the bondage of superstition and myth) and self-enlightenment (out of the darkness that precedes the reign of Reason). Among the instruments that assisted this emancipation, none were more vital than the studia humanitatis — the humanities. The humanities simply are, in this account of modernity, the discourses and disciplines of Man. And therefore if that narrative has unraveled, if the age of Man is over — as Rimbaud wrote, “Car l’Homme a fini! l’Homme a joué tous les rôles!” (“For Man is finished! Man has played all the roles!”) — what becomes of the humanities?

This logic is still more explicit and forceful with regard to the posthuman. The idea of the posthuman assumes the collapse of the narrative of Man and adds to it an emphasis on the possibility of remaking human beings through digital and biological technologies leading ultimately to a transhuman mode of being. From within the logic of this technocratic regime the humanities will seem irrelevant, a quaint relic of an archaic world.

The postsecular is a variant on or extension of the postmodern in that it associates the narrative of Man with a “Whig interpretation of history,” an account of the past 500 years as a story of inevitable progressive emancipation from ancient, confining social structures, especially those associated with religion. But if the age of Man is over, can the story of inevitable secularization survive it? The suspicion that it cannot generates the rhetoric of the postsecular.

(In some respects the idea of the postsecular stands in manifest tension with the posthuman — but not in all. The idea that the posthuman experience can be in some sense a religious one thrives in science fiction and in discursive books such as Erik Davis’s TechGnosis [1998] and Ray Kurzweil’s The Age of Spiritual Machines [1999] — the “spiritual” for Kurzweil being “a feeling of transcending one’s everyday physical and mortal bounds to sense a deeper reality.”)

What must be noted about all of these master concepts is that they were articulated, developed, and promulgated primarily by scholars in the humanities, employing the traditional methods of humanistic learning. (Even Kurzweil, with his pronounced scientistic bent, borrows the language of his aspirations — especially the language of “transcendence” — from humanistic study.) The notion that any of these developments renders humanistic study obsolete is therefore odd if not absurd — as though the humanities exist only to erase themselves, like a purely intellectual version of Claude Shannon’s Ultimate Machine, whose only function is, once it's turned on, to turn itself off.

But there is another and better way to tell this story.

It is noteworthy that, according to the standard narrative of the emergence of modernity, the idea of Man was made possible by the employment of a sophisticated set of philological tools in a passionate quest to understand the alien and recover the lost. The early humanists read the classical writers not as people exactly like them — indeed, what made the classical writers different was precisely what made them appealing as guides and models — but nevertheless as people, people from whom we can learn because there is a common human lifeworld and a set of shared experiences. The tools and methods of the humanities, and, more important, the very spirit of the humanities, collaborate to reveal Burke’s “unending conversation”: the materials of my own drama arise only through my dialogical encounter with others, those from the past whose voices I can discover and those from the future whose voices I imagine. Discovery and imagination are, then, the twin engines of humanistic learning, humanistic aspiration. It was in just this spirit that, near the end of his long life, the Russian polymath Mikhail Bakhtin wrote in a notebook,

There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future).... At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue’s subsequent development along the way they are recalled and invigorated in new form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival.

The idea that underlies Bakhtin’s hopefulness, that makes discovery and imagination essential to the work of the humanities, is, in brief, Terence’s famous statement, clichéd though it may have become: Homo sum, humani nihil a me alienum puto. To say that nothing human is alien to me is not to say that everything human is fully accessible to me, fully comprehensible; it is not to erase or even to minimize cultural, racial, or sexual difference; but it is to say that nothing human stands wholly outside my ability to comprehend — if I am willing to work, in a disciplined and informed way, at the comprehending. Terence’s sentence is best taken not as a claim of achievement but as an essential aspiration; and it is the distinctive gift of the humanities to make that aspiration possible.

It is in this spirit that those claims which, as we have noted, emerged from humanistic learning must be evaluated: that our age is postmodern, posthuman, postsecular. All the resources and practices of the humanities — reflective and critical, inquiring and skeptical, methodologically patient and inexplicably intuitive — should be brought to bear on these claims, and not with ironic detachment, but with the earnest conviction that our answers matter: they are, like those master concepts themselves, both diagnostic and prescriptive: they matter equally for our understanding of the past and our anticipation of the future.

Tuesday, July 19, 2016

The World Beyond Kant's Head

For a project I’m working on, and will be able to say something about later, I re-read Matthew Crawford’s The World Beyond Your Head, and I have to say: It’s a really superb book. I read it when it first came out, but I was knee-deep in writing at the time and I don’t think I absorbed it as fully as I should have. I quote Crawford in support of several of the key points I make in my theses on technology, but his development of those points is deeply thoughtful and provocative, even more than I had realized. If you haven’t read it, you should.

But there’s something about the book I want to question. It concerns philosophy, and the history of philosophy.

In relation to the kinds of cultural issues Crawford deals with here -- issues related to technology, economics, social practices, and selfhood -- there are two ways to make use of the philosophy of the past. The first involves illumination: one argues that reading Kant and Hegel (Crawford’s two key philosophers) clarifies our situation, provides alternative ways of conceptualizing and responding to it, and so on. The other way involves causation: one argues that we’re where we are today because of the triumphal dissemination of, for instance, Kantian ideas throughout our culture.

Crawford does some of both, but in many respects the chief argument of his book is based on a major causal assumption: that much of what’s wrong with our culture, and with our models of selfhood, arises from the success of certain of Kant’s ideas. I say “assumption” because I don’t think that Crawford ever actually argues the point, and I think he doesn’t argue the point because he doesn’t clearly distinguish between illumination and causation. That is, if I’ve read him rightly, he shows that a study of Kant makes sense of many contemporary phenomena and implicitly concludes that Kant’s ideas therefore are likely to have played a causal role in the rise of those phenomena.

I just don’t buy it, any more than I buy the structurally identical claim that modern individualism and atomization all derive from the late-medieval nominalists. I don’t buy those claims because I have never seen any evidence for them. I am not saying that those claims are wrong, I just want to know how it happens: how you get from extremely complex and arcane philosophical texts that only a handful of people in history have ever been able to read to world-shaping power. I don’t see how it’s even possible.

One of Auden’s most famous lines is: “Poetry makes nothing happen.” He was repeatedly insistent on this point. In several articles and interviews he commented that the social and political history of Europe would be precisely the same if Dante, Shakespeare, and Mozart had never lived. I suspect that this is true, and that it’s also true of philosophy. I think that we would have the techno-capitalist society we have if Duns Scotus, William of Ockham, Immanuel Kant, and G.W.F. Hegel had never lived. If you disagree with me, please show me the path which those philosophical ideas followed to become so world-shapingly dominant. I am not too old to learn.

Sunday, July 17, 2016

some friendly advice about online writing and reading

Dennis Cooper, a writer and artist, is a pretty unsavory character, so in an ideal world I wouldn't choose him as a poster boy for the point I want to make, but ... recently Google deleted his account, and along with it, 14 years of blog posts. And they are quite within their rights to do so.

People, if you blog, no matter on what platform, do not write in the online CMS that your platform provides. Instead, write in a text editor or, if you absolutely must, a word processing app, save it to your very own hard drive, and then copy and paste into the CMS. Yes, it’s an extra step. It’s also absolutely worth it, because it means you always have a plain-text backup of your blog posts.

You should of course then back up your hard drive in at least two different ways (I have an external drive and Dropbox).

Why write in a text editor instead of a word processing app? Because when you copy from the latter, especially MS Word, you tend to pick up a lot of unnecessary formatting cruft that can make your blog post look different than you want it to. I write in BBEdit using Markdown, and converting from Markdown to HTML yields exceptionally clean copy. If you’d like to try it without installing scripts, you can write a little Markdown and convert it to HTML by using this web dingus — there are several others like it.
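
If you'd rather script the conversion locally than paste into a web tool, here is a minimal sketch in Python, assuming the third-party python-markdown package is installed (the file names are placeholders):

    # Convert a Markdown draft to clean HTML before pasting into a CMS.
    # Assumes the python-markdown package (pip install markdown);
    # "post.md" and "post.html" are placeholder file names.
    import markdown

    with open("post.md", "r", encoding="utf-8") as source:
        text = source.read()

    html = markdown.markdown(text)  # Markdown in, clean HTML out

    with open("post.html", "w", encoding="utf-8") as output:
        output.write(html)

However you do the conversion, the point is the same: the plain-text Markdown file on your own drive remains the canonical copy.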

While I’m giving advice about writing on the web, why not some about reading as well? Too many people rely on social-media sites like Facebook and Twitter to get their news, which means that what they get is unpredictably variable, depending on what other people link to and how Facebook happens to be tweaking its algorithms on any given day. Apple News is similarly uncertain. And I fundamentally dislike the idea of reading what other people, especially other people who work for mega-corporations, want me to see.

Try using an RSS reader instead. RSS remains the foundation of the open web, and the overwhelming majority of useful websites have RSS feeds. There are several web-based RSS readers out there — I think the best are Feedly and Newsblur — and when you build up a roster of sites you profit from reading, you can export that roster as an OPML file and use it with a different service. And if you don't like those web interfaces you can get a feed-reading app that works with those (and other) services: I’m a big fan of Reeder, though my introduction to RSS was NetNewsWire, which I started using when it was little more than a gleam in Brent Simmons’s eye.
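
If you're curious what any of those readers is doing under the hood, here is a minimal sketch in Python, assuming the third-party feedparser package and a placeholder feed URL:

    # Fetch a feed and list its most recent entries.
    # Assumes the feedparser package (pip install feedparser);
    # the URL below is a placeholder, not a real feed.
    import feedparser

    feed = feedparser.parse("https://example.com/feed.xml")
    for entry in feed.entries[:10]:
        print(entry.title, "->", entry.link)

Every reader, web-based or native, is built on that same simple retrieval step, which is why your subscriptions travel so easily from one service to another.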

So, the upshot: in online writing and reading alike, look for independence and sustainability. Your life will be better for it.

Monday, July 11, 2016

Green Earth

Another Kim Stanley Robinson novel, and another set of profoundly mixed feelings. Green Earth, which was published last year, is a condensation into a single volume of three novels that appeared in the middle of the last decade and are generally known as the Science in the Capital trilogy.

Robinson is an extraordinarily intelligent writer with a wide-ranging mind, and no one writes about either scientific thinking or the technological implementation of science with the clarity and energy that he evidences. Moreover, he has an especially well thought-out, coherent and consistent understanding of the world — what some people (not me) call a worldview, what used to be called "a philosophy." But that philosophy is also what gets him into trouble, because he has overmuch trust in people who share it and insufficient understanding of, or even curiosity about, people who don’t.

Robinson is a technocratic liberal universalist humanitarian (TLUH), and though Green Earth is in many ways a fascinating novel, an exceptionally well-told story, it is also to a somewhat comical degree a TLUH wish-fulfillment fantasy. I can illustrate this through a brief description of one of the novel's characters: Phil Chase, a senator from Robinson's native California whose internationalist bent is so strong that his many fans call him the World's Senator, who rises to become President, whose integrity is absolute, who owes nothing to any special-interest groups, who listens to distinguished scientists and acts on their recommendations, who even marries a distinguished scientist and — this is the cherry on the sundae — has his marriage blessed by the Dalai Lama. TLUH to the max.

In Green Earth Robinson's scientists tend to be quite literally technocrats, in that they work for, or have close ties to, government agencies, which they influence for good. Only one of them does anything wrong in the course of the book, and that — steering a National Science Foundation panel away from supporting a proposal that only some of them like anyway — is scarcely more than a peccadillo. And that character spends the rest of the book being so uniformly and exceptionally virtuous that, it seems to me, Robinson encourages us to forget that fault.

Robinson's scientists are invariably excellent at what they do, honest, absolutely and invariably committed to the integrity of scientific procedure, kind to and supportive of one another, hospitable to strangers, deeply concerned about climate change and the environment generally. They eat healthily, get plenty of exercise, and drink alcohol on festive occasions but not habitually to excess. They are also all Democrats.

Meanwhile, we see nothing of the inner lives of Republicans, but we learn that they are rigid, without compassion, owned by the big oil companies, practiced in the blackest arts of espionage against law-abiding citizens, and associated in not-minutely-specified ways with weirdo fundamentalist Christian groups who believe in the Rapture.

Green Earth is really good when Robinson describes the effects of accelerated climate change and the various means by which it might be addressed. And I liked his little gang of Virtuous Hero Scientists and wanted good things to happen to them. But the politically Manichaean character of the book gets really tiresome when extended over several hundred pages. Robinson is just so relentless in his flattery of his likely readers' presuppositions — and his own. (The Mars Trilogy is so successful in part because all its characters are scientists, and if they were as uniformly virtuous as the scientists in Green Earth there would be no story.)

It's fascinating to me that Robinson is so extreme in his caricatures, because in some cases he's quite aware of the dangers of them. Given what I've just reported, it wouldn't be surprising if Robinson were attracted to Neil DeGrasse Tyson's imaginary republic of Rationalia, but he's too smart for that. At one point the wonderfully virtuous scientist I mentioned earlier hears a lecture by a Tibetan Buddhist who says, "An excess of reason is itself a form of madness" — a quote from an ancient sage, and an echo of G.K. Chesterton to boot, though Robinson may not know that. Our scientist instantly rejects this idea — but almost immediately thereafter starts thinking about it and can’t let the notion go; it sets him reading, and eventually he comes across the work of Antonio Damasio, who has shown pretty convincingly that people who operate solely on the basis of "reason" (as usually defined) make poorer decisions than those whose emotions are in good working order and play a part in decision-making.

So Robinson is able to give a subtle and nuanced account of how people think — how scientists think, because one of the subtler themes of the book is the way that scientists think best when their emotions are engaged, especially the emotion of love. Those who love well think well. (But St. Augustine told us that long ago, didn't he?)

Given Robinson's proper emphasis on the role of the whole person in scientific thinking, you'd expect him to have a stronger awareness of the dangers of thinking according to the logic of ingroups and outgroups. But no: in this novel the ingroups are all the way in and the outgroups all the way out. Thus my frustration.

Still, don't take these complaints as reasons not to read Green Earth or anything else by Robinson. I still have more of his work to read and I'm looking forward to it. He always tells a good story and I always learn a lot from reading his books. And I can't say that about very many writers. Anyway, Adam Roberts convinced me that I had under-read one of Robinson's other books, so maybe that'll happen again....

Saturday, July 9, 2016

Happy Birthday, Pinboard!

Maciej Ceglowski tells us that Pinboard turns seven today. I started using Pinboard on July 14, 2009, so I've been there since it was about a week old. (Didn't realize that until just now.)

I think Pinboard is just the greatest thing, and I can't even really explain why. I suppose because it primarily does one thing — enable you to bookmark things you read online — and does it with simple elegance. It's the closest thing to an organizational system I have. Here's my Pinboard page, or my Pinboard, as the kids say.

You can also sort of hack Pinboard to stretch its capabilities. I had known for some time that the notes you create in Pinboard use Markdown syntax before I realized that that meant you could embed images, as I've done here, or even videos. (These are loaded from the original URL, not stored by Pinboard, so it wouldn't be a means of long-term preservation.)

I have used Pinboard to note blog-post ideas — and indeed one of the most densely-populated tags in my Pinboard is called bloggable — but I have often fantasized, as my friend Matt Thomas knows, about turning Pinboard into a blogging platform, and basically moving my whole online life there. The problem is that this runs counter to my oft-professed devotion to the Own Your Turf ideal. I suppose if I were truly devoted to that ideal I wouldn't use Pinboard at all, but when a service performs its chosen task so well, and that task is so important for my work, I'm not inclined to sacrifice quality to principle. Not yet anyway. Anyhow, Pinboard makes it easy to download your whole archive, which I do from time to time by way of backup.
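
If you want to script that backup rather than click through the export page, here is a minimal sketch in Python, assuming Pinboard's v1 API and a personal auth token (the token string below is a placeholder):

    # Download a full Pinboard archive as JSON via the v1 API.
    # The auth token is a placeholder; substitute your own
    # "username:TOKEN" value from your Pinboard account settings.
    import urllib.request

    AUTH_TOKEN = "username:XXXXXXXXXXXXXXXX"
    url = ("https://api.pinboard.in/v1/posts/all"
           "?format=json&auth_token=" + AUTH_TOKEN)

    with urllib.request.urlopen(url) as response:
        data = response.read()

    with open("pinboard-backup.json", "wb") as backup:
        backup.write(data)

Run on a schedule, something like this gives you a dated archive of every bookmark without having to remember to do it by hand.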

Two more notes, while we're in a note-making mood:

Note the first: The true master of Pinboard is Roberto Greco, whom Robin Sloan has called the internet's "idea sommelier."

Note the second: In the birthday greeting that's my first link above, Ceglowski makes a point of noting the outside funding he has received in creating and maintaining Pinboard: zero. Not a penny. He also provides the revenue Pinboard brings in, so that you can see something important: he makes a pretty good living. For almost as long as he's been running Pinboard he's been advocating for this way of doing things: instead of seeking venture capital in hopes of getting filthy rich, and therefore inevitably becoming a slave to your investors, why not choose to be a small business owner? Didn't this used to be the American dream, or one of them, to be your own person, work for no one except yourself, determine your own hours, live your own life?

Ceglowski has not only advocated for this way of life in tweets and talks, he even created The Pinboard Co-Prosperity Cloud, a competition for young entrepreneurs, the winner of which would get plenty of advice from him and $37. It's hard to imagine a more countercultural figure in Silicon Valley, which I guess is why Ceglowski gets invited to give so many talks. For the money-obsessed tech-startup world, it's like having one of the zoo animals come to you.

Thursday, July 7, 2016

this reader's update

It's been widely reported that in the past couple of years e-book sales have leveled off. Barring some currently unforeseen innovations — and those could certainly happen at any time — we have a situation in which relatively few people read books on dedicated e-readers like the Kindle, considerably more people read on their smartphones, and the great majority read paper codexes.

My own reading habits have not leveled off: I have become more and more of a Kindle reader. This surprises me somewhat, because at the same time I have learned to do more and more of my writing by hand, in notebooks, and have limited my participation in the digital realm. So why am I reading so much on my Kindle? Several reasons:

  • It would be disingenuous of me to deny that the ability to buy books instantly and to be reading them within a few seconds of purchase plays a role. I am as vulnerable to the temptations of immediate gratification as anyone else.
  • When I'm reading anything that demands intense or extended attention I don't want to do anything except read, so reading on a smartphone, with all its distractions, is not an option. (Plus, the Kindle's screen is far easier on my eyes.)
  • I own thousands of books and it's not easy to find room for new ones. My office at Baylor is quite large, and I could fit another bookcase in it, but I read at home far more often than at the office, and I already have books stacked on the floor in my study because the bookshelves are filled. So saving room is a factor — plus, anything I have on the Kindle is accessible wherever I am, since the Kindle is always in my backpack. I therefore avoid those Oh crap, I left that book at the office moments. (And as everyone knows who keeps books in two places, the book you need is always in the place where you aren't.)
  • I highlight and annotate a good bit when I read, and the Kindle stores those highlighted passages and notes in a text file, which I can easily copy to my computer. I do that copying once a week or so. So I have a file called MyClippings.txt that contains around 600,000 words of quotations and notes, and will own that file even if Amazon kills the Kindle tomorrow. My text editor, BBEdit, can easily handle documents far larger than that, so searching is instantaneous. It's a very useful research tool. (A small sketch of slicing and searching that file appears just after this list.)
  • Right now I'm re-reading my hardcover copy of Matthew Crawford's The World Beyond Your Head — more on that in another post — and it's an attractive, well-designed book (with one of the best covers ever), a pleasure to hold and read. But as a frequent Kindle user I can't help being aware how many restrictions reading this way places upon me: I have to have an adequate light source, and if I'm going to annotate it only a small range of postures is available to me. (You know that feeling where you're trying to make a note while lying on your back and holding the book in the air, or on your upraised knee, and your handwriting gets shaky and occasionally unreadable because you can't hold the book steady enough? — that's no way to live.) Especially as I get older and require more light to read by than I used to, the ability to adjust the Kindle's screen to my needs grows more appealing; and I like being able to sit anywhere, or lie down, or even walk around, while reading without compromising my ability to see or annotate the text.
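
A note on that clippings file: carving it up and searching it takes only a few lines. Here is a minimal sketch in Python, assuming the usual Kindle convention of separating entries with a line of ten equals signs, which is worth checking against your own file:

    # Split a Kindle clippings file into entries and search them.
    # Assumes entries are separated by a "==========" line, the
    # usual My Clippings format; the search term is a placeholder.
    SEPARATOR = "=========="

    with open("MyClippings.txt", "r", encoding="utf-8-sig") as f:
        raw = f.read()

    entries = [e.strip() for e in raw.split(SEPARATOR) if e.strip()]

    term = "Crawford"
    for entry in entries:
        if term.lower() in entry.lower():
            print(entry)
            print("-" * 40)

A plain text editor does this kind of search perfectly well, of course; the point of a script is only that you can slice the entries up by book or by date once they're separated.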

For me, reading on the Kindle has just one significant practical drawback: it's too easy to abandon books. And I don't mean books that I'm just not interested in — I'm generally in favor of abandoning those — but books that for any number of reasons I need to stick with and finish. I can just tap my way over to something else, and that's easier than I'd like it to be. (That I'm not the only one who does this can be seen by anyone who activates the Popular Highlights feature on a Kindle: almost all of them are in the first few pages of books.)

By contrast, when I'm reading a codex, not only am I unable to look at a different book while holding the same object, I have a different perception of my investment in the text. I might read fifty pages of a book on Kindle and annotate it thoroughly, and then set it aside without another thought. But when I've annotated fifty pages of a codex, I am somehow bothered by all those remaining unread and unmarked pages. A book whose opening pages are marked up but the rest left untouched just feels like, looks like, an unfinished job. I get an itch to complete the reading so that I can see and take satisfaction from annotations all the way through. I never feel that way when I read an e-book.

That's the status report from this reader's world.


UPDATE: Via Jennifer Howard on Twitter, this report on book sales in the first half of 2016 suggests that the "revival of print books" is driven to a possibly troubling extent by the enormous popularity of adult coloring books. Maybe in the end e-books will be the last refuge for actual readers.

Wednesday, July 6, 2016

futurists wanted

At least, wanted in government, and by Farhad Manjoo, who laments the shutdown of the Office of Technology Assessment in 1995.

Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so. 
Yet without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan. 
“It is ridiculous that the United States is one of the only nations of our size and scope in the world that no longer has an office that is dedicated to rigorous, nonpartisan research about the future,” Ms. Webb said. “The fact that we don’t do that is insane.” 
Or, as Mr. Toffler put it in “Future Shock,” “Change is avalanching upon our heads and most people are grotesquely unprepared to cope with it.”

I think Manjoo is correct in theory, but I simply cannot imagine any professional governmental futurists who are not simply stooges of the multinational tech companies. The study of the future has been bought at a price; I don't see it recovering its independence.

Tuesday, July 5, 2016

Black Panther


Three issues into the Ta-Nehisi Coates/Brian Stelfreeze Black Panther and I'm struggling. Coates has a really interesting vision here but his lack of experience in writing comics is showing, I think. Stelfreeze's art is quite beautiful, but there are a great many panels that are somewhat difficult to read, visually, and panel-to-panel continuity is seriously lacking. Sometimes I look at a whole page and can't tell why the panels are in the order they are. And in this third issue especially the story seems to advance hardly at all. I'm thinking of bailing out. Anybody else want to encourage me to stick with it?

the memes of the Brexit post-mortems

I don't have any strong opinions about the Brexit decision. In general I’m in favor of functioning with the smallest possible political units; but I’m also aware that to leave the EU would be a huge step with unforeseeable consequences, which is something my conservative disposition also resists. So: no strong opinion about whether Brexit is right or wrong. But I am fascinated by the post-mortems, especially as an observer of the internet, because what the internet makes possible is the instantaneous coalescing of opinion.

So, just a few days after the referendum, intellectual Remainers already have an established explanation, a kind of Copenhagen interpretation of the events meant to yield a Standard Account: Brexiters, motivated by hatred and resentment, acted in complete disregard of facts. I feel that I’ve read a hundred variations on this blog post by Matthew Flinders already, though not all the others have warmed so openly to the idea of an “architecture of politics” meant to “enforce truthfulness.” (What should we call the primary instrument of that architecture? The Ministry of Truth, perhaps?)

I’m especially interested in Flinders’ endorsement and perpetuation of the idea that Brexit marks the victory of “post-truth politics.” This has very rapidly become a meme — and quite a meme — and one of the signs of how it functions is that Flinders doesn't cite anyone’s use of it. He’s not pretending to have coined the term, he’s just treating it as an explanatory given — to continue my physics analogy, something like Planck’s Constant, useful to plug into your equation to make the numbers work out.

(By the way, I suspect the seed of the “post-truth politics” meme was planted by Stephen Colbert when he coined the term “truthiness”.)

The invocation of “post-truth politics” is very useful to people like Flinders because it allows him to conflate actual disregard of facts with disregard of economic predictions — you can see how those categories get mixed up in this otherwise useful fact-checking of claims by Brexiters. When that conflation happens, then you get to tar people who suspect economic and political forecasts with the same brush you use to tar people who disregard facts altogether and go with their gut — even though there are ample reasons to distrust economic and political forecasts, and indeed a kind of cottage industry in publishing devoted to explaining why so many forecasts are wrong.

There’s no question that many votes for Brexit were based on falsehoods or sheer ignorance. But when people who belong to the academic-expertise class suggest that all disagreement with their views may be chalked up to “post-truth politics” — and settle on that meme so quickly after they receive such a terrible setback to their hopes and plans — then it’s hard for me not to see the meme as defending against a threat to status. And that matters for academic life, and for the intellectual life more generally, because the instantaneous dominance of such a meme forecloses inquiry. There’s no need to look more closely at either your rhetoric or the substance of your beliefs if you already have a punchy phrase that explains it all.

Sunday, July 3, 2016

two apologies and a bleg

Apology One: I wrote a post a while back about hating time-travel stories, and almost immediately after I did so I started thinking of exceptions to that rule. I mean, I’ve been praising Adam Roberts’s The Thing Itself to the skies and it’s a time-travel story, though it’s also many other things. I thought of another example, and then another, and soon enough it became obvious to me that I don’t hate time-travel stories at all. I was just annoyed by one that I thought went wrong, largely because it reminded me of several others that I thought went wrong in very similar ways. So that was a classic case of rash blogging. I am truly sorry to writers and readers of time-travel stories, and I humbly repent and pledge amendment of life.

Apology Two: In a similarly fractious mood, I once wrote a screed against podcasts. But I have not given up on my search for podcasts — in part because I think the medium has so much promise — and since I wrote that post have listened to a whole bunch of them, and have developed affection for a few. So let me again repent of the extremity of my language and the coarseness of my reactions.

In another post, I’ll do some capsule reviews of the podcasts I’ve been listening to in the past year, but for now I have, as we academics say, a comment and a question.

The comment is that the one kind of podcast I absolutely cannot abide is the most common kind: two dudes talking. Or three dudes, or three women, or any combination of genders — it’s the chatting-in-front-of-a-microphone that drives me nuts. The other day I tried listening to Control-Walt-Delete, but when Walt Mossberg and Nilay Patel spent the first five minutes discussing what the sports teams of the schools they had attended were called, I said Finis, done, I’m outta here. No, I like podcasts that are professionally edited, scripted, festooned with appropriate music, crafted into some kind of coherent presentation. Podcasts like that seem respectful to the listener, wanting to engage my attention and reward it.

But one thing I’ve noticed is that the podcasts I know that do that best are relentlessly liberal in their political and social orientation. Which is not surprising, given that most of our media are likewise liberal. And I don't even mean that as a criticism: there is a significant liberal element to my own political makeup, and if you want to know why that is, just listen to this episode of the Criminal podcast. Criminal in general is a good example of the kind of podcast I like, from its sound design and apt use of music to its strong storytelling. Even the website is artfully designed.

Which leads me to my Bleg: Does anyone know of similarly well-crafted, artful podcasts made by conservatives or Christians? I have not yet found a single one. Podcasts by conservatives and Christians tend to be either bare-bones — two dudes talking, or one dude talking with maybe a brief musical intro and outro — or schmaltzily over-produced. (Just Christians in that second category.) Anyone know of any exceptions to this judgment? I suspect that there’s an unbridgeable gulf of style here, but I’d like to be proved wrong.

UPDATE: Despite the quite clear statements I make above to the effect that (a) I really, really dislike dudes-talking podcasts and (b) I am not asking about dudes-talking podcasts but about professionally produced podcasts, people keep writing on Twitter and email to say "Hey, here's a dudes-talking podcast that you might like." Sigh.

Saturday, July 2, 2016

the unbought grace of the human self

Returning to Edward Mendelson’s essay “In the Depths of the Digital Age”: an essay about technology, some might say, as this is a blog about technology. But we’re not talking today about “technology” tout court; we’re talking about digital technologies, and more specifically digital communications technologies, and more specifically yet internet-connected digital communications technologies, and even more specifically — let’s get to the heart of the matter — that very recent variety of internet-connected digital communications technology that offers free “services” purported to connect us with one another, services whose makers read, sift, and sell the data that we provide them when we use their software.

The question that Mendelson most forcefully presses on us is: What selves are made by submission to these technologies?

The explicit common theme of these books [under review] is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects — what William James called the “material self” — has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

Mendelson does not say that this shift is simply bad; he writes of gains and losses. And his essay does not have a thesis as such. But I think there is one not-directly-stated idea that dominates his reflections on these books: Neither users of nor commentators on these social-media technologies have an adequate intellectual or moral vocabulary to assess the massive changes in selfhood that we have opened ourselves to. The authors of the books Mendelson reviews either openly confess their confusion or try to hide that confusion with patently inadequate conceptual schemes.

But if even the (self-proclaimed) expert authorities are floundering in this brave new world, what can we do to think better about what’s happening to always-connected, always-surveilled, always-signalling, always-assessing selves? One possibility: read some fiction.

I have sometimes suggested — see here, here, and here — that Thomas Pynchon ought to be a central figure for anyone who wants to achieve some insight and clarity on these matters. And lo and behold, this from Mendelson:

In Thomas Pynchon’s Gravity’s Rainbow (1973), an engineer named Kurt Mondaugen enunciates a law of human existence: “Personal density … is directly proportional to temporal bandwidth.” The narrator explains:

“Temporal bandwidth” is the width of your present, your now…. The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.

The genius of Mondaugen’s Law is its understanding that the unmeasurable moral aspects of life are as subject to necessity as are the measurable physical ones; that unmeasurable necessity, in Wittgenstein’s phrase about ethics, is “a condition of the world, like logic.” You cannot reduce your engagement with the past and future without diminishing yourself, without becoming “more tenuous.”


And Mendelson suggests that we use this notion of “temporal bandwidth” to think about how investments in social media alter our experience of time — and especially our relationship to the future.

Another example: Virginia Woolf is cited five times in this essay, perhaps surprisingly — what does Virginia Woolf have to do with technology? But — I’m not just a friend of Mendelson’s but also a pretty careful reader of his work — I have noticed that as we have gotten deeper into our current socially-digital age Woolf’s fiction has loomed larger and larger in Mendelson’s thinking. Mrs Dalloway is the subject of the wonderful final chapter of Mendelson’s The Things That Matter — a superb book I reviewed here — and that chapter would make a fine primer for the shaping of a model of selfhood adequate to the world, and able to stand up to models that are reductive and simplistic enough to be bought and sold in the marketplace.

One might not think that Pynchon and Woolf have much in common — but Mendelson thinks they do, and thinks that the visions and portrayals of selfhood they provide are profoundly useful correctives to the ones we’re being sold every day. I’ll close this post with a quotation from a brief essay by Mendelson in which he makes the link between the two writers explicit:

Like all of Virginia Woolf’s novels and, despite their misplaced reputation for high-tech cleverness, all of Thomas Pynchon’s novels, including his latest one, both books point toward the kind of knowledge of the inner life that only poems and novels can convey, a knowledge that eludes all other techniques of understanding, and that the bureaucratic and collective world disdains or ignores. Yet for anyone who has ever known, even in a crowded room, the solitude and darkness that Clarissa [Dalloway] and Oedipa [Maas] enter for a few moments, that experience, however brief and elusive, is “another mode of meaning behind the obvious” and, however obscured behind corruption, lies, and chatter, “a thing there was that mattered.”