Text Patterns - by Alan Jacobs

Friday, August 28, 2015

on difficulty

In this exchange on literary difficulty I think Leslie Jamison gives us something far more useful than Heller.

Here’s Heller:

Recently, when I read Christine Schutt’s short story “You Drive” with a graduate writing class, several of the students complained that they found the story baffling. They couldn’t make out the chronology of the events it described; they weren’t always sure which character was speaking; the story, they concluded, “didn’t work.” The fact that they had trouble following Schutt’s elliptical prose was not in itself a surprise. What did take me aback was their indignation — their certainty that the story’s difficulty was a needless imposition on readerly good will. It was as if any writing that didn’t welcome them in and offer them the literary equivalent of a divan had failed a crucial hospitality test.

The “as if” in that last sentence is doing a lot of work, and rather snide work at that. Why should Heller conclude that her students’ dislike of one story is revelatory of a sense of readerly entitlement, a universal demand that texts “welcome them in and offer them the literary equivalent of a divan”? Maybe she assigned a poor story, and the students would have responded more positively to an equally demanding one that was better-crafted. You can’t tell what people think about “any writing” on the basis of their opinions about a single text. 

It’s easy and natural for teachers to explain every classroom clunker by blaming the inadequacies of their students. It’s also a tendency very much to be resisted.

Jamison, by contrast, approaches the question of difficulty in a much more specific way, and what I like best about her brief narrative is its acknowledgment that a reader might approach a given book with a very different spirit in one set of circumstances — or at one moment of her life — than in another. It’s something I have said and written often: that one need not think that setting a book aside is a permanent and unverifiable verdict on the book — or on oneself. People change; frames of mind and heart come and go; and if a book and a reader happen to find each other, it’s beautiful.

Wednesday, August 26, 2015

Twitter and emotional resilience

It seems to me that one of the most universal and significant differences between young people and their elders is the emotional resilience of the young. Most young people — the damaged always excepted — can plunge into the deepest and wildest waters of their inner lives because they know that they have what it takes to withstand the buffeting, even to be energized by it, and to recover easily, quickly, completely.

I’ve seen this often with students over the years. I’ve had people come to my office and disintegrate before my eyes, collapse in convulsive weeping — and then, fifteen minutes later, walk out into the world utterly composed and even cheerful. There was a time when I could have done the same. When I was their age and feeling angry, I wanted music that echoed and amplified that anger; when I was deep in melancholy, I would drive the streets at 2 A.M. and listen to Kind of Blue over and over. But looking back on these habits, I think I allowed them because, on some level, I knew I could climb out of the pit when I needed to.

Those days are past. When the world’s rough waters have buffeted you for several decades, you wear down, you lose your resilience. Now if I feel agitated or melancholy, I seek countervailing forces: the more peaceable and orderly music of Bach and Mozart and Handel, the movies of Preston Sturges, the prose of Jane Austen or P. D. James. (Classic mysteries, with their emphasis on finding and purging the sources of social disorder, have become increasingly important to me.) These are coping mechanisms, ways for me to keep my emotional balance.

This morning my Twitter feed was overwhelmed by yet another Twitter tsunami, this one prompted by the murder of two television journalists in Virginia. This one is a little different from the usual, because much of the conversation is centering on people who, with crassly absolute insensitivity, are retweeting footage of the actual murder itself: thanks to the curse of video autoplay, thousands and thousands of people are being confronted by frightening, disturbing scenes that they never wanted to see. But in general it follows the same pattern as all the other tsunamis: hundreds and hundreds of tweets and retweets of the same information, over and over, all day long.

And I think: I don’t need this. I could make some principled, or “principled,” arguments against it — that there's no reason to pay more attention to this murder than any of the several dozen others that will happen in America today, that this is a classic illustration of the "society of the spectacle", that we should follow Augustine's example in denouncing curiositas — but my real problem is that it just makes me very sad and very tired, and I have too much to do to be sad and tired.

And then it occurs to me: maybe Twitter — maybe social media more generally — really is a young person's thing after all. Intrinsically, not just accidentally.

Monday, August 24, 2015

social media triage

We all have to find ways to manage our social-media lives. I have a few rules about engaging with other people, developed over the past several years, which I hold to pretty firmly, though not with absolute consistency.

1) If on Twitter or in blog comments you're not using your real name, I won't reply to you.

2) I never read the comments on any post that appears on a high-traffic online site, even if I have written it.

3) I have Twitter set up so that I typically see replies only from people I follow. Every once in a while I may look through my replies, but honestly, I try not to. So if you're asking me a question on Twitter, I will either never see it or, probably, will see it only some days or weeks after you've asked.

4) If I happen to see that you have tweeted me-wards but I don't know you, I will probably not reply.

Why do I follow these rules? Because my experiences in conversing with strangers online have been about 95% unpleasant. Especially as one reaches what the French call un certain âge, cutting unnecessary losses — conserving intellectual and emotional energy — becomes more important than creating new experiences. At least that's how it's been for me. This is unfortunate for, and perhaps unfair to, people who want to engage constructively; but y'all are greatly outnumbered by the trollish, the snarky, those who reply to things they haven't read, and the pathologically contentious. And in the limited time I have to spend on social media, I prefer to nurture relationships I already have.

I've said some of these things before, but since in the past week I've received three why-didn't-you-answer-my-tweet emails, I thought it might be worthwhile to say them again.

podcasts redux

Perhaps the chief thing I learned from my post on podcasting is that a great many people take “podcast” to mean something like “any non-music audio you can listen to on your smartphone.” Okay, fair enough; the term often is used that way. And I sort of used it that way myself, even though I didn’t really mean to. This made my post less coherent than it ought to have been. 

In more precise usage, a podcast is something like an audio blog post: born digital and distributed to interested parties via web syndication. We commonly distinguish between a magazine article that gets posted online and a blog post, even when the magazine posts the article to its blog and you see it in your RSS reader; similarly, In Our Time and This American Life are radio programs that you can get in podcast form, not podcasts as such. The Mars Hill Audio Journal is an audio periodical and even farther from the podcast model because it isn’t syndicated: you have to purchase and download its episodes — and you should!  (By the way, I couldn’t help smiling at all the people who told me that I should give Mars Hill a try, given this. How did they manage to miss me?) (Also by the way, MHAJ has an occasional podcast: here.)

So clearly I should not have used In Our Time to illustrate a point about podcasts, even if I do typically listen to it in podcast form. My bad.

In Our Time has a great many fans, it seems, and while on one level I understand why, I'm typically frustrated by the show. It typically begins with Melvyn Bragg saying something like, "So Nigel, who was Maimonides?" — to which Nigel, a senior lecturer in Judaic Studies at University College, London, replies, "Maimonides was born...." And then off we go for half-an-hour of being bludgeoned with basic facts by three academics with poor voices for radio. Only in the last few minutes of the episode might an actual conversation or debate break out. If you don't especially like reading, then I guess this is a reasonably painless way to learn some stuff, but it doesn't do a lot for me.

I also discovered that EconTalk has a great many fans, and indeed, you can learn a good deal on EconTalk about stuff it would be hard to discover elsewhere. But EconTalk is basically people talking on the phone, and the complete lack of production values grates on me.

So, sorting through all these responses, I have come to two conclusions. The first is that for a great many people podcast-listening is primarily a means of downloading information or entertainment to their brains. It's content they want, and the form and quality of presentation don't, for these people, count for a lot.

The second conclusion is that in these matters I have been really, really spoiled by the Mars Hill Audio Journal. Even though it is not a podcast, it is, I now realize, the standard by which I tend to judge podcasts. And they rarely match up. Ken Myers has a really exceptional skill set: he is deeply knowledgeable and intelligent, he is a friendly but incisive interviewer, he is a magnificent editor, and he has the technical skills to produce a top-quality audio presentation. I’ve come to realize, over the past few days of conversing about all this, that what I really want is for all podcasts to be like the MHAJ. And while that may be an understandable desire, it’s an unreasonable expectation.

Tuesday, August 18, 2015

podcasts

Just a quick follow-up to a comment I made on Twitter. Over the past several years I have listened to dozens and dozens of podcasts, on a very wide range of subjects, with the result that there is now not a single podcast that I listen to regularly.

Podcasts, overall, are

(1) People struggling to articulate for you stuff you could find out by looking it up on Wikipedia (e.g. In Our Time);

(2) People using old-timey radio tricks to fool you into thinking that a boring and inconsequential story is fascinating (e.g. Serial);

(3) People leveraging their celebrity in a given field as permission to ramble incoherently about whatever happens to come to their minds (e.g. The Talk Show); or

(4) People using pointless audio-production tricks to make a pedestrian story seem cutting-edge (e.g. Radiolab).

The world of podcasting desperately needs people to take it seriously and invest real thought and creativity into it. There are a lot of not-so-smart people who invest all they have in podcasts; there are a lot of smart people who do podcasts as an afterthought, giving them a fraction of the attention they give to their "real work." So far it's a medium of exceptional potential almost wholly unrealized.

All that said, The Memory Palace is pretty good.

Monday, August 17, 2015

reification and modernity

Until this morning I was certain that I had posted this some weeks ago ... but I can't find it. So maybe not. Apologies if this is, after all, a rerun.



One of the chief themes of Peter Harrison's recent book The Territories of Science and Religion is the major semantic alteration both terms of his title — science (scientia) and religion (religio) — have undergone over the centuries. For instance,

In an extended treatment of the virtues in the Summa theologiae, Aquinas observes that science (scientia) is a habit of mind or an “intellectual virtue.” The parallel with religio, then, lies in the fact that we are now used to thinking of both religion and science as systems of beliefs and practices, rather than conceiving of them primarily as personal qualities. And for us today the question of their relationship is largely determined by their respective doctrinal content and the methods through which that content is arrived at. For Aquinas, however, both religio and scientia were, in the first place, personal attributes.

The transformation in each term is, then, a form of reification: a "personal attribute," a habit or virtue, gradually becomes externalized — becomes a kind of thing, though not a material thing — becomes something out there in the world.

What's especially interesting about this, to me, is that scientia and religio aren't the only important words this happens to. Harrison mentions also the case of "doctrine":

In antiquity, doctrina meant “teaching” — literally, the activity of a doctor — and “the habit produced by instruction,” in addition to referring to the knowledge imparted by teaching. Doctrina is thus an activity or a process of training and habituation. Both of these understandings are consistent with the general point that Christianity was understood more as a way of life than a body of doctrines. Moreover they will also correlate with the notion of theology as an intellectual habit, as briefly noted in the previous chapter. As for the subject matter of doctrina — its cognitive component, if you will — this was then understood to be scripture itself, rather than “doctrines” in the sense of systematically arranged and logically related theological tenets. To take the most obvious example, Augustine’s De doctrina Christiana (On Christian Teaching) was devoted to the interpretation of scripture, and not to systematic theology.

So from "the activity of a doctor" — what a learned man does — doctrine becomes a body of propositions.

Curiously, the same thing has happened to a word that I am professionally quite familiar with: "literature." We now use it to refer to a category of texts ("That's really more literature than philosophy, don't you think?") or to a body or collection of texts ("Victorian literature"). But in Dr. Johnson's Dictionary literature is defined as "Learning; skill in letters." And this remains the first meaning in the OED:

Familiarity with letters or books; knowledge acquired from reading or studying books, esp. the principal classical texts associated with humane learning (see humane adj. 2); literary culture; learning, scholarship. Also: this as a branch of study. Now hist.

"Now hist." — historical, no longer current. Yet for Johnson it was the only meaning. (It's interesting, though, that the examples of such usage he cites seem to me to fit the modern meaning better than the one he offers — as though the meaning of the term is already changing in ways Johnson fails to see.)

So here we have a series of personal attributes — traits acquired through the exercise of discipline until they become virtues — that become external, more-or-less objective stuff. (Gives a new resonance to Alasdair MacIntyre's famous title After Virtue.) Which makes me wonder: is there a link between the rise of modernity and this reifying tendency in language? And if so, might this link be related to the technological aspect of modernity that I've been asking about lately? If a social order is increasingly defined and understood in terms of what it makes and uses — of things external to the people making and using them — then might that not create a habit of mind that would lead to the reifying of acts, habits, traits, and virtues? What is important about us, in this way of thinking, would not be who we are but what we make, what we surround ourselves with, what we wield.

Tuesday, August 11, 2015

algorithms and responsibility

One of my fairly regular subthemes here is the increasing power of algorithms over our daily lives and what Ted Striphas has called “the black box of algorithmic culture”. So I am naturally interested in this interview with Cynthia Dwork on algorithms and bias — more specifically, on the widespread, erroneous, and quite poisonous notion that if decisions are being made by algorithms they can’t be biased. (See also theses 54 through 56 here.)

I found this exchange especially interesting:

Q: Whose responsibility is it to ensure that algorithms or software are not discriminatory?

A: This is better answered by an ethicist. I’m interested in how theoretical computer science and other disciplines can contribute to an understanding of what might be viable options. The goal of my work is to put fairness on a firm mathematical foundation, but even I have just begun to scratch the surface. This entails finding a mathematically rigorous definition of fairness and developing computational methods — algorithms — that guarantee fairness.

Good for Dwork that she’s concerned about these things, but note her rock-solid foundational assumption that fairness is something that can be “guaranteed” by the right algorithms. And yet when asked a question about right behavior that’s clearly not susceptible to an algorithmic answer — Who is responsible here? — Dwork simply punts: “This is better answered by an ethicist.”
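For readers curious what a “mathematically rigorous definition of fairness” tends to look like once it is operationalized, here is a minimal sketch of one widely cited criterion, demographic parity, under which favorable decisions should be handed out at roughly the same rate across groups. This is my own illustration, not Dwork’s method; the function and the data are hypothetical, and real fairness audits are far more involved.

```python
# A toy check of "demographic parity": are favorable decisions handed out
# at roughly the same rate across groups? Hypothetical illustration only.

def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between any two groups.

    decisions: list of 0/1 outcomes (1 = favorable, e.g. loan approved)
    groups:    list of group labels, one per decision
    """
    counts = {}  # group -> (total, favorable)
    for d, g in zip(decisions, groups):
        total, favorable = counts.get(g, (0, 0))
        counts[g] = (total + 1, favorable + d)
    rates = [favorable / total for total, favorable in counts.values()]
    return max(rates) - min(rates)

# Group A is approved 80% of the time, group B only 20% of the time.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # ~0.6, a large gap by this metric
```

Notice what the sketch makes vivid: the question “Is this fair?” becomes “Is this number small enough?”, which is precisely the substitution the rest of this post worries about.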

One of Cornel West’s early books is called The American Evasion of Philosophy, and — if I may riff on his title more than on the particulars of his argument — this is a classic example of that phenomenon in all of its aspects. First, there is the belief that we don't need to think philosophically because we can solve our problems by technology; second, there is the impulse, when technology as such fails, to call in expertise, in this case in the form of an “ethicist.” And then, finally, in the paper Dwork co-authored on fairness that prompted this interview, we find the argument that the parameters of fairness “would be externally imposed, for example, by a regulatory body, or externally proposed, by a civil rights organization,” accompanied by a citation of John Rawls.

In the Evasion of Philosophy sweepstakes, that’s pretty much the trifecta: moral reflection and discernment by ordinary people replaced by technological expertise, academic expertise, and political expertise — the model of expertise being technical through and through. ’Cause that’s just how we roll.

Friday, August 7, 2015

respect

Portrait of Virginia Woolf by Roger Fry (Wikimedia)

Suzanne Berne on Virginia Woolf: A Portrait, by Viviane Forrester:

But it’s Leonard who gets dragged in front of the firing squad. Not only did he encourage Bell’s patronizing portrayal of Virginia; according to Forrester, he was also responsible for his wife’s only true psychotic episode, and probably helped usher her toward suicide. These accusations are fierce and emphatic: Leonard projected his own neuroses and his own frigidity onto Woolf (he had a horror of beginner sex and found most women’s bodies “extraordinarily ugly”). He married her strictly to get out of Ceylon, where he was in the British Foreign Service and where he had fallen into a suicidal depression. (He hated both the place and the position, though he pretended later to have thrown over a fabulous career for Virginia.) Without medical corroboration, he decreed that she was too unbalanced to have children, triggering her legendary mental breakdown immediately after their honeymoon. Then he held the threat of institutionalization over her, coercing her into a secluded country lifestyle that suited him but isolated and disheartened her, while using his marriage as entrée to an aristocratic, intellectual world that, as “a penniless Jew” from the professional class — just barely out of a shopkeeper’s apron — he could not have otherwise hoped to join.

This excerpt from Forrester’s book confirms Berne’s description of Forrester's attack on Leonard Woolf.

When, in March of 1941, Woolf decided to take her own life, here is the heart-wrenching letter she left for Leonard:

Dearest,

I feel certain I am going mad again. I feel we can’t go through another of those terrible times. And I shan’t recover this time. I begin to hear voices, and I can’t concentrate. So I am doing what seems the best thing to do. You have given me the greatest possible happiness. You have been in every way all that anyone could be. I don’t think two people could have been happier till this terrible disease came. I can’t fight any longer. I know that I am spoiling your life, that without me you could work. And you will I know. You see I can’t even write this properly. I can’t read. What I want to say is I owe all the happiness of my life to you. You have been entirely patient with me and incredibly good. I want to say that – everybody knows it. If anybody could have saved me it would have been you. Everything has gone from me but the certainty of your goodness. I can’t go on spoiling your life any longer.

I don’t think two people could have been happier than we have been.

Forrester quotes that last line in her book (p. 203) and offers one line of commentary on it: “What was Virginia Woolf denied? Respect.”

What counts as denying someone respect? Offhand, I’d say that if I believed that I understood the emotional life and intimate relationships of a great artist I had never met, and who died before I came of age, better than she understood them herself … that would be denying her respect.

Thursday, August 6, 2015

the humanities and the university

A few years ago, the American Academy of Arts and Sciences commissioned a report on the place of the humanities and social sciences in America in the coming years — here’s a PDF. And here’s how the report, The Heart of the Matter, begins:

Who will lead America into a bright future?

Citizens who are educated in the broadest possible sense, so that they can participate in their own governance and engage with the world. An adaptable and creative workforce. Experts in national security, equipped with the cultural understanding, knowledge of social dynamics, and language proficiency to lead our foreign service and military through complex global conflicts. Elected officials and a broader public who exercise civil political discourse, founded on an appreciation of the ways our differences and commonalities have shaped our rich history. We must prepare the next generation to be these future leaders.

And in this vein the report continues: study the humanities so you can become a leader in your chosen profession.

Which is a great argument, as long as there is reliable evidence that investing tens of thousands of dollars to study the humanities pays off in income and status later on. But what if that isn't true, or ceases to be true? The Heart of the Matter puts all its argumentative eggs in the income-and-status basket; I'm not sure that's such a great idea.

If the general public comes to believe that the humanities don't pay — at least, not in the way The Heart of the Matter suggests — then that won't be the end of the humanities. Friends will still meet to discuss Dante; a few juvenile offenders will still read Dostoevsky.

And the digital realm will play a part also: James Poulos has recently written about SPOCs — not MOOCs, Massive Open Online Courses, but Small Private Online Courses:

In small, private forums, pioneers who want to pursue wisdom can find a radically alternate education — strikingly contemporary, yet deeply rooted in the ancient practice of conversational exegesis.

Everyone wins if that happens. Wisdom-seekers can connect cheaply, effectively, intimately, and quickly, even if they're dispersed over vast distances. Universities can withdraw fully from the wisdom business, and focus on the pedigree business. And the rest of us can get on with our lives.

In a similar vein, Johann Neem has imagined an “academy in exile”:

As the university becomes more vocational and less academic in its orientation, we academics may need to find new ways to live out our calling. The academy is not the university; the university has simply been a home for academics. University education in our country is increasingly not academic: it is vocational; it is commercial; it is becoming anti-intellectual; and, more and more, it is offering standardized products that seek to train and certify rather than to educate people. In turn, an increasing proportion of academics, especially in the humanities, have become adjuncts, marginalized by the university’s growing emphasis on producing technical workers.

The ideas offered above all build on the core commitments of the academy, and the tradition of seeing the academy as a community of independent scholars joined together by their commitment to producing and sharing knowledge. Increasingly, however, universities claim to own the knowledge we produce, as do for-profit vendors who treat knowledge as proprietary. To academics, each teacher is an independent scholar working with her or his students and on her or his research, but also a citizen committed to sharing her or his insights with the world as part of a larger community of inquiry.

I do not agree with Poulos that in this severance of the humanities (in their wisdom-seeking capacity) from the university “everyone wins”: I think it would impoverish both the humanities and the university. Those dedicated to the pursuit of wisdom need the challenge of those who pursue other ends, and vice versa, and the university has been a wonderful place for those challenges to happen.

Moreover, I believe the place of the humanities — the wisdom-seeking humanities — in the contemporary American university is not a lost cause. It can still be defended — but not, I think, in the way that The Heart of the Matter tries to defend it. Some of us are working on an alternative. Stay tuned.

Monday, August 3, 2015

Thrun, fisked

Let’s work through this post by Sebastian Thrun. All of it.

You’re at the wheel, tired. You close your eyes, drift from your lane. This time you are lucky. You awaken, scared. If you are smart, you won’t drive again when you are about to fall asleep.

Well ... people don't always have a choice about this kind of thing. I mean, sometimes people drive when they’re about to fall asleep because they’ve been working a really long time and driving is the only way they can get home. But never mind. Proceed.

Through your mistakes, you learn. But other drivers won’t learn from your mistakes. They have to make the same mistakes by themselves — risking other people’s lives.

This is true. Also, when I learned to walk, to read, to hit a forehand, to drive a manual-transmission car, no one else but me learned from my mistakes. This seems to be how learning works, in general. However, some of the people who taught me these things explained them to me in ways that helped me to avoid mistakes; and often they were drawing on their own experience. People may even have told me to load up on caffeine before driving late at night. This kind of thing happens a lot among humans — the sharing of knowledge and experience.

Not so the self-driving car. When it makes a mistake, all the other cars learn from it, courtesy of the people programming them. The first time a self-driving car encountered a couch on the highway, it didn’t know what to do and the human safety driver had to take over. But just a few days later, the software of all cars was adjusted to handle such a situation. The difference? All self-driving cars learn from this mistake, not just one. Including future, “unborn” cars.

Okay, so the cars learn ... but I guess the people in the cars don't learn anything.

When it comes to artificial intelligence (AI), computers learn faster than people.

I don't understand what “when it comes to” means in this sentence, but “Some computers learn some things faster than some people” would be closer to a true statement. Let’s stick with self-driving cars for a moment: you and I have no trouble discerning and avoiding a pothole, but Google’s cars can’t do that at all. You and I can tell when a policeman on the side of the road is signaling for you to slow down or stop, and can tell whether that’s a big rock in the road or just a piece of cardboard, but Google’s cars are clueless.

The Gutenberg Bible is a beautiful early example of a technology that helped humans distribute information from brain to brain much more efficiently. AI in machines like the self-driving car is the Gutenberg Bible, on steroids.

“On steroids”?

The learning speed of AI is immense, and not just for self-driving cars. Similar revolutions are happening in fields as diverse as medical diagnostics, investing, and online information access.

I wonder what simple, everyday tasks those systems are unable to perform.

Because machines can learn faster than people, it would seem just a matter of time before we will be outranked by them.

“Outranked”?

Today, about 75 percent of the United States workforce is employed in offices — and most of this work will be taken away by AI systems. A single lawyer or accountant or secretary will soon be 100 times as effective with a good AI system, which means we’ll need fewer lawyers, accountants, and secretaries.

What do you mean by “effective”?

It’s the digital equivalent of the farmers who replaced 100 field hands with a tractor and plow. Those who thrive will be the ones who can make artificial intelligence give them superhuman capabilities.

“Make them”? How?

But if people become so very effective on the job, you need fewer of them, which means many more people will be left behind.

“Left behind” in what way? Left behind to die on the side of the road? Or what?

That places a lot of pressure on us to keep up, to get lifelong training for the skills necessary to play a role.

“Lifelong training”? Perhaps via those MOOCs that have been working so well? And what does “play a role” mean? The role of making artificial intelligence give me superhuman capabilities?

The ironic thing is that with the effectiveness of these coming technologies we could all work one or two hours a day and still retain today’s standard of living.

How? No, seriously, how would that play out? How do I, in my job, get to “one or two hours a day”? How would my doctor do it? How about a plumber? I’m not asking for a detailed roadmap of the future, but just sketch out a path, dude. Otherwise I might think you’re just talking through your artificially intelligent hat. Also, do you know what “ironic” means?

But when there are fewer jobs — in some places the chances are smaller of landing a position at Walmart than gaining admission to Harvard —

That’s called lying with statistics, but never mind, keep going.

— one way to stay employed is to work even harder. So we see people working more, not less.

If by “people,” you mean “Americans,” then that is probably true — but these things have been highly variable throughout history. And anyway, how does “people working more” fit with your picture of the coming future?

Get ready for unprecedented times.

An evergreen remark, that one is.

We need to prepare for a world in which fewer and fewer people can make meaningful contributions.

Meaningful contributions to what?

Only a small group will command technology and command AI.

What do you mean by “command”? Does a really good plumber “command technology”? If not, why not? How important is AI in comparison to other technologies, like, for instance, farming?

What this will mean for all of us, I don’t know.

Finally, an honest and useful comment. Thrun doesn't know anything about what he was asked to comment on, but that didn't stop him from extruding a good deal of incoherent vapidity, nor did it stop an editor at Pacific Standard from presenting it to the world.

Thursday, July 30, 2015

Disagreement, Modernity, Technology

In the last couple of weeks I have published three posts over at The American Conservative on disagreement and its management.

One

Two

Three

Since this blog largely deals with technological and academic questions, I tend to move over to AmCon when I have something to say about political and social issues … but there’s a lot of overlap to these broad categories, and I seriously thought about posting the third entry in that series here at Text Patterns.

Instead, I’m just linking to the series, but I want to point out something that seems important to me: that there is a clear and strong connection between (a) the need to think acutely about how social media shape our politics and ethics and (b) the need that I’ve been emphasizing here for a technological history of modernity. The pathologies of our shared socio-political life do not just arise from immediate contexts and recent technologies, but have been generated by disputes and technologies that go back at least half a millennium. The history of modernity’s rise and the critique of new media are in a sense a single enterprise, a point which, for all he may have gotten wrong, Marshall McLuhan understood profoundly.

Wednesday, July 29, 2015

in the days of Good King Edmund

In his new collection of essays, Mario Vargas Llosa writes,

Half a century ago in the United States, it was probably Edmund Wilson, in his articles in The New Yorker or The New Republic, who decided the success or failure of a book, a poem, a novel or an essay. Now the Oprah Winfrey Show makes these decisions.  

Oh, yes, so true! In fact, all the way back in 1944 Wilson wrote the definitive takedown of detective stories — he crushed detective stories — and as we all know, people stopped reading such books and have never resumed.

Similarly, twelve years later Wilson reviewed a fantasy writer named Tolkien — or maybe I should say he totally destroyed Tolkien — with the result that none of you has ever heard that name before just now.

Yes, back in The Good Old Days highbrow critics had enormous, culture-changing power, which of course they always used for good, ensuring that masterpieces like Peyton Place and Valley of the Dolls topped the book-sales charts. Instead we get dumbasses like Oprah turning, I don't know, recent translations of Anna Karenina into bestsellers. What rot. O tempora, o mores.

Twitter's changes

So this interview with Twitter's Kevin Weil suggests that (a) there will be significant changes coming to the user experience at Twitter and (b) I will hate every single one of them.

  • I'm already annoyed by the recently-implemented "while you were away" feature, since Twitter doesn't show me everything that happened while I was away but rather what its algorithms decide is important. A very Facebook-like thing to do.
     
  • For me the single most unpleasant thing about Twitter is its tendency to create 24-hour obsessions: someone dies, or some pointless internet fight flares up, or some celebrity says something stupid, and for a day my timeline seems to have room for nothing else. Then the next day it's gone as though it had never been. Twitter's Project Lightning suggests that the company wants to make more of my Twitter experience like this.
     
  • "Twitter says it expects most people will like autoplay video" — are you freakin' kidding me?
     
  • Stopping abuse and harassment "is critically important to us," is "an ongoing area of focus for us," is "incredibly, incredibly important to us." But Weil has no announcements because, basically, nothing has changed. Call me back when it does.
     
  • I think Twitter has a big collapse coming. The leadership still doesn't know what the service is for or what they want it to be, but they're determined to exercise more and more control over the experience anyway. That's a recipe for disaster. In the end I think they're going to drive most of their users back to Facebook.

I know we've been through the Twitter-alternative search before with app.net and ello, but I am keeping a very close eye on Manton Reece's new project.

brief book reviews: The Watchmaker of Filigree Street

Natasha Pulley’s The Watchmaker of Filigree Street is a historical fantasy, set in late Victorian London, that seems determined to bring in ... well, everything you might imagine turning up in a historical fantasy set in late Victorian London:

  • steampunky mechanical things? ✓
  • dark and narrow London alleyways? ✓
  • English orientalism? ✓
  • The Woman Question? ✓
  • stern and bigoted Victorian patriarch? ✓
  • delicate and bigoted Victorian matriarch? ✓
  • Gilbert and Sullivan? ✓
  • The Love That Dare Not Speak Its Name? ✓

And yet ... it’s a really well-told story, and Pulley is a promising writer. I just hope that her next outing contains fewer predictable elements.

Tuesday, July 21, 2015

climate science and public scrutiny

Eric Holthaus writes in Slate about a new climate study led by James Hansen that argues that we are likely to see ocean levels rising higher and far more quickly than has been expected. To say that the study is frightening is to master understatement.

But right now I just want to call attention to how the study is being presented to the world:

One necessary note of caution: Hansen’s study comes via a non-traditional publishing decision by its authors. The study will be published in Atmospheric Chemistry and Physics, an open-access “discussion” journal, and will not have formal peer-review prior to its appearance online later this week. The complete discussion draft circulated to journalists was 66 pages long, and included more than 300 references. The peer-review will take place in real-time, with responses to the work by other scientists also published online. Hansen said this publishing timeline was necessary to make the work public as soon as possible before global negotiators meet in Paris later this year. Still, the lack of traditional peer review and the fact that this study’s results go far beyond what’s been previously published will likely bring increased scrutiny. On Twitter, Ruth Mottram, a climate scientist whose work focuses on Greenland and the Arctic, was skeptical of such enormous rates of near-term sea level rise, though she defended Hansen’s decision to publish in a non-traditional way.

It’s interesting that Holthaus says that this decision calls for “a note of caution”: we need to be careful before placing any trust in studies that haven’t been peer-reviewed. And that’s true — but it’s not the primary lesson to be taken from the decision Hansen and his co-authors have made.

Hansen et al. are saying that having their conclusions — and the data from which they drew those conclusions — evaluated in as ruthlessly public a way as possible is infinitely more important than keeping any possible errors secret or achieving maximal prestige through publishing in a Big Journal. They are saying: What we believe we have discovered matters enormously, and therefore we want to expose everything we have done to the most rigorous possible scrutiny. That means opening their work to the world and saying: Go at it. When Holthaus says that this decision “will likely bring increased scrutiny” — well, yes. Precisely the point. Feature, not bug.

So whatever you think about what’s happening to our climate — and therefore to “our common home” — I don't see how you can’t applaud the way Hansen and his co-authors are handling the presentation of their work. This is science done in the most ethically responsible, and most ethically urgent, way imaginable. Every scholar ought to pay close attention to how this scholarship is being put before the world — and everyone who shares “our common home” ought to pay attention to how the ongoing public peer-review plays out.

Monday, July 20, 2015

brief book reviews: Unflattening

In Unflattening, Nick Sousanis writes that we need to “discover new ways of seeing, to open spaces for possibilities. It is about finding different perspectives.”

Stereoscopic vision reveals “that a single, ‘true’ perspective is false.”

Comics “allow for the integration and incorporation of multiple modes and signs and symbols.”

We all have “the capacity to host a multiplicity of worlds inside us,” so “we emerge with the possibility to become something different.”

We’re like the drones in Lang’s Metropolis, and like puppets who discover we have strings, and like the two-dimensional figures in Abbott’s Flatland.

There’s even a quote from Kahlil Gibran.

The whole argument is, more or less, contained in this image. If all this strikes you as profound or provocative, maybe you’ll like the book.

brief book reviews: The Internet of Garbage

Sarah Jeong’s short book The Internet of Garbage is very well done, and rather sobering, and I recommend it to you. The argument of the book goes something like this:

1) Human societies produce garbage.

2) Properly-functioning human societies develop ways of disposing of garbage, lest it choke out, or make inaccessible, all the things we value.

3) In the digital realm, the primary form of garbage for many years was spam — but spam has effectively been dealt with. Spammers still spam, but their efforts rarely reach us anymore: and in this respect the difference between now and fifteen years ago is immense.

And then, the main thrust of the argument:

4) Today, the primary form of garbage on the internet is harassment, abuse. And yet little progress is being made by social media companies on that score. Can’t we learn something from the victorious war against spam?

Patterning harassment directly after anti-spam is not the answer, but there are obvious parallels. The real question to ask here is, Why haven’t these parallels been explored yet? Anti-spam is huge, and the state of the spam/anti-spam war is deeply advanced. It’s an entrenched industry with specialized engineers and massive research and development. Tech industries are certainly not spending billions of dollars on anti-harassment. Why is anti-harassment so far behind?

(One possibility Jeong explores without committing to it: “If harassment disproportionately impacts women, then spam disproportionately impacts men — what with the ads for Viagra, penis size enhancers, and mail-order brides. And a quick glance at any history of the early Internet would reveal that the architecture was driven heavily by male engineers.” Surely this is a significant part of the story.)

Finally:

5) The problem of harassment can only be seriously addressed with a twofold approach: “professional, expert moderation entwined with technical solutions.”

After following Jeong’s research and reflections on it, I can’t help thinking that the second of these recommendations is more likely to be followed than the first one. “The basic code of a product can encourage, discourage, or even prevent the proliferation of garbage,” and code is more easily changed in this respect than the hiring priorities of a large organization. Thus:

Low investment in the problem of garbage is why Facebook and Instagram keep accidentally banning pictures of breastfeeding mothers or failing to delete death threats. Placing user safety in the hands of low-paid contractors under a great deal of pressure to perform as quickly as possible is not an ethical outcome for either the user or the contractor. While industry sources have assured me that the financial support and resources for user trust and safety is increasing at social media companies, I see little to no evidence of competent integration with the technical side, nor the kind of research and development expenditure that is considered normal for anti-spam.

I too see little evidence that harassment and abuse of women (and minorities, especially black people) is a matter of serious concern to the big social-media companies. That really, really needs to change.

Thursday, July 16, 2015

readerly triage

In the first five pages of The Marvelous Clouds, John Durham Peters says that media are

  • “devices of information”
  • “agencies of order”
  • “constitutive parts of ... our ecological and economic systems”
  • “vessels and environments”
  • “containers of possibility that anchor our existence”
  • “vehicles that carry and communicate meaning”
  • “the means by which meaning is communicated”
  • “infrastructures of data and control”
  • “enabling environments that provide habitats for diverse forms of life”
  • “civilizational ordering devices”

It’s obvious that these definitions, while sometimes complementary, are also sometimes fundamentally incompatible: a device that is also a vessel that is also an anchor....

So I set the book down and thought for a while. Then I picked it up again, and thumbed through it. I saw some pages about clocks and sundials, and some others about clouds (the clouds of the book’s title, I presume), and some others about Google. The pages on timekeeping looked good, but I’ve read a number of books about timekeeping already. I couldn't tell, at a brief glance, about the others.

I looked at those opening pages again. Three possibilities presented themselves to me. The first is that Peters is a demanding, allusive writer who works not by some ploddingly systematic outline but rather by a Shandean association of ideas. The second is that he actually has a logical outline but prefers, either for aesthetic reasons or because he values esoteric writing, to obscure it and to allow his readers to figure out the structure for themselves. The third is that his thinking is simply disorganized and incoherent.

Some of the best books I have ever read — fiction and nonfiction alike — have been governed (or “governed”) by Shandean procedure: Robert Musil’s The Man Without Qualities, Burton’s Anatomy of Melancholy; but that style demands a great deal of readers, and when it fails it fails catastrophically. I have been exhilarated by a few Shandean books; I have been infuriated by a great many that attempt that style without success. The same is true for works (Joyce’s Ulysses is the paradigmatic example) that are highly ordered but hide their organizational principles.

When you’re trying to decide what to read you do a (formal or informal) risk/reward analysis. You think about how much time and attention you’re being asked to invest in this text; you estimate the rewards you’re likely to get in a best-case and in a worst-case scenario. I did all that and put Peters’s book aside.

Monday, July 13, 2015

why the technological history of modernity?

Ivan Illich, "Philosophy... Artifacts... Friendship" (1996):

The person today who feels called to a life of prayer and charity cannot eschew an intellectual grounding in the critique of perceptions, because beyond things, our perceptions are to a large extent technogenic. Both the thing perceived and the mode of perception it calls forth are the result of artifacts that are meant by their engineers to shape the users. The novice to the sacred liturgy and to mental prayer has a historically new task. He is largely removed from those things - water, sunlight, soil, and weather - that were made to speak of God's presence. In comparison with the saints whom he tries to emulate, his search for God's presence is of a new kind.  

Please do not take me for a technophobe. I argue for detachment from artifacts, because only by abstaining from their use can I perceive the seductiveness of their whispers. Unlike the saintly models of yesterday, the one who begins walking now under the eyes of God must not just divest himself of bad habits that have become second nature; he must not only correct proclivities toward gold or flesh or vanity that have been ingrained in his hexis, obscuring his sight or crippling his glance. Today's convert must recognize how his senses are continuously shaped by the artifacts he uses. They are charged by design with intentional symbolic loads, something previously unknown.    

The things today with decisively new consequences are systems, and these are so built that they co-opt and integrate their user's hands, ears, and eyes. The object has lost its distality by becoming systemic. No one can easily break the bonds forged by years of television absorption and curricular education that have turned eyes and ears into system components.

(Thanks to my friend Richard Gibson for reminding me of this crucial passage.)

information and history

Every few days, it seems, I come across a rueful, even mournful citation of T. S. Eliot: “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?” But it’s possible to make these distinctions in ways that are not so tendentious. One might think along these lines:

  • information: “A difference which makes a difference is an idea. It is a ‘bit,’ a unit of information.” — Gregory Bateson
  • data: information recognized by humans as information
  • knowledge: information mastered by humans and translated into human terms
  • wisdom: the proper discernment of the human uses of one's knowledge
  • counsel: wisdom transmitted to others

The point is not to see one of these as superior to the others, but to see them as a sequential development: for example, those who lack genuine knowledge — which, mind you, comes in different forms — will be necessarily deficient in wisdom and their counsel will be correspondingly less valuable.

This is all quite sketchy and needs further development, of course. Let’s start by complicating matters further. In his book The Creation of the Media, Paul Starr writes:

“Information” often refers specifically to data relevant to decisions, while “knowledge” signifies more abstract concepts and judgments. As knowledge provides a basis of understanding, so information affords a basis of action. “Information” carries the connotation of being more precise, yet also more fragmentary, than knowledge. From early in its history, American culture was oriented more to facts than to theory, more to practicality than to literary refinement — more, in short, to information than to knowledge.

Further: Near the beginning of his remarkable book Holding On To Reality, Albert Borgmann posits that there are three major kinds of information:

  • “Without information about reality, without reports and records, the reach of experience quickly trails off into the shadows of ignorance and forgetfulness.”
  • “In addition to the information that discloses what is distant in space and remote in time, there is information that allows us to transform reality and make it richer materially and morally. As a report is the paradigm of information about reality, so a recipe is the model of information for reality, instruction for making bread or wine or French onion soup. Similarly there are plans, scores, and constitutions, information for erecting buildings, making music, and ordering society.”
  • “Technological information lifts both the illumination and the transformation of reality to another level of lucidity and power. But it also introduces a new kind of information. To information about and for reality it adds information as reality. The paradigms of report and recipe are succeeded by the paradigm of the recording. The technological information on a compact disc is so detailed and controlled that it addresses us virtually as reality. What comes from a recording of a Bach cantata on a CD is not a report about the cantata nor a recipe — the score — for performing the cantata, it is in the common understanding music itself. Information through the power of technology steps forward as a rival of reality.”

One way to explain the deficiency in our narratives of modernity is to say that they have failed to maintain these distinctions and have therefore failed to note the mediating role that specific technologies play in promoting transfer from data to knowledge to wisdom to counsel.

Wednesday, July 8, 2015

Lord Macaulay and the News

One way to pursue the technological history of modernity is to try to understand how news and ideas got around at any given time. For instance, I wrote a short piece last year about Baron von Grimm, who created, helped to write, and distributed a newsletter that kept wealthy and literate citizens of 18th-century France informed about what was really happening in their country — as opposed to what the strictly-censored official sources of news wanted them to believe.

But Grimm did not invent this sort of thing. There’s a fascinating account of an early stage in the history of the newsletter in one of the most famous of all works of narrative history, Lord Macaulay’s History of England from the Accession of James the Second. Macaulay, in classic Victorian fashion, takes a few hundred pages to get around to the official starting point of his book, because how can you understand the rise to the throne of James II if you don't understand the career of his older brother, Charles II, whose coming to the throne makes no sense if you don't grasp the essential narrative of the English Civil War, which has its roots in ... You get the idea.

But somewhere in that opening 500 pages Macaulay takes a (very long) chapter to do a kind of social and economic history of Restoration England, covering everything from population estimates to leading industries to the social place of clergymen. It’s a fascinating narrative, and anticipates models of social history that didn’t come to their full flower for another century or so. One of his interests here is technological change, and that leads him to describe the rise of the postal service. “A rude and imperfect establishment of posts for the conveyance of letters had been set up by Charles the First, and had been swept away by the civil war. Under the Commonwealth the design was resumed. At the Restoration the proceeds of the Post Office, after all expenses had been paid, were settled on the Duke of York [later to be King James II]. On most lines of road the mails went out and came in only on the alternate days. In Cornwall, in the fens of Lincolnshire, and among the hills and lakes of Cumberland, letters were received only once a week.”

It was a start. But then things started to change in London specifically, though not as a result of a governmental program. Rather, “in the reign of Charles the Second, an enterprising citizen of London, William Dockwray, set up, at great expense, a penny post, which delivered letters and parcels six or eight times a day in the busy and crowded streets near the Exchange, and four times a day in the outskirts of the capital.” Exciting! (By the way, I wish we would retire the term "entrepreneur" and replace it with "enterprising citizen.") But, Macaulay immediately notes, “this improvement was, as usual, strenuously resisted,” and in a turn of events that’s hard to understand today, people began to insist that the penny post was somehow connected with the so-called (and ultimately shown to be fictional) Popish plot against King Charles — perhaps a way for conspirators to share plans. It is characteristic of that age of political intrigue that when people heard about a new medium of communication they immediately speculated on its usefulness to the perfidious.

But the penny post worked; a wide range of Londoners found it useful. And as the various postal services, public and private, became more strongly established, people became more eager for and expectant of news. Macaulay explains, in a passage I will quote at some length, that there was one particular form of news that became something of a rage in the latter years of the Stuarts:

No part of the load which the old mails carried out was more important than the newsletters. In 1685 nothing like the London daily paper of our time existed, or could exist. Neither the necessary capital nor the necessary skill was to be found. Freedom too was wanting, a want as fatal as that of either capital or skill.

So for Macaulay there were political, technological, and socio-economic reasons — all interacting with one another — why newspapers did not yet exist, even though

the press was not indeed at that moment under a general censorship. The licensing act, which had been passed soon after the Restoration, had expired in 1679. Any person might therefore print, at his own risk, a history, a sermon, or a poem, without the previous approbation of any officer; but the Judges were unanimously of opinion that this liberty did not extend to Gazettes, and that, by the common law of England, no man, not authorised by the crown, had a right to publish political news. While the Whig party was still formidable, the government thought it expedient occasionally to connive at the violation of this rule. During the great battle of the Exclusion Bill, many newspapers were suffered to appear, the Protestant Intelligence, the Current Intelligence, the Domestic Intelligence, the True News, the London Mercury. None of these was published oftener than twice a week. None exceeded in size a single small leaf. The quantity of matter which one of them contained in a year was not more than is often found in two numbers of the Times.

Yet despite the limits of the medium, the Royal party felt that they needed to keep such news publications under direct control:

After the defeat of the Whigs it was no longer necessary for the King to be sparing in the use of that which all his Judges had pronounced to be his undoubted prerogative. At the close of his reign no newspaper was suffered to appear without his allowance: and his allowance was given exclusively to the London Gazette. The London Gazette came out only on Mondays and Thursdays. The contents generally were a royal proclamation, two or three Tory addresses, notices of two or three promotions, an account of a skirmish between the imperial troops and the Janissaries on the Danube, a description of a highwayman, an announcement of a grand cockfight between two persons of honour, and an advertisement offering a reward for a strayed dog. The whole made up two pages of moderate size.

Such was journalism. As in France a hundred years later, the people of Restoration England knew that they were being deprived of a thorough and accurate account of events — a troublesome thing in a time of such frequent political upheavals. These chaotic political conditions, in company with the rudiments of a news-sharing infrastructure, spurred entrepreneurs to develop the technological skills and distribution networks necessary to create something like the modern newspaper — which started to happen early in the next century.

Macaulay, writing in a time when Britain was awash in newspapers and journals of all varieties that covered a wide range of political and cultural subjects — he himself, in addition to his political career, wrote regularly for the Edinburgh Review — understood that the last years of the Stuarts had laid the foundation for his own informational environment. Moreover, he was one of the first historians to recognize the value of those early experiments in news-gathering and news-distributing: he learned most of what he knew about public opinion in the Restoration era by reading those old newsletters.

Tuesday, July 7, 2015

writing (and thinking) by hand

Here’s a response from John Durham Peters to my 79 Theses on Technology. However, I’m not quite sure what sort of response it is. It could be a dissent, or it could just be a riff on some of the themes I introduced.

For instance, Peters writes, “We humans never do anything without technique, so we shouldn’t pretend there is any ontological difference between writing by hand, keyboarding, and speaking, or that one of them is more original or pure than the other. We are technical all the way down in body and mind.” Does he believe that I have suggested that writing by hand is non-technological? If so, I would like to know where I did so.

But then he also writes, “Writing with two hands on a keyboard, dictating to a person or a machine, writing with chalk, quill, pencil, or pen — each embody mind in different ways,” and this seems to be a re-statement of my theses 64-66.

So I dunno. You be the judge.

But while we’re on the subject of handwriting, here’s a wonderful essay by Navneet Alang that explores these questions far more subtly than I did, and raises an additional question: To what extent does writing on a screen, using a stylus, enable the same qualities of mind and body that writing with a pen on paper does?

It would be altogether too optimistic to say that digital handwriting offers some kind of countervailing balance to this shift. For one, it is being pushed by enormous multinationals. When you mark up a PDF in Microsoft’s OneNote, it automatically gets uploaded to the cloud, becoming one more reason to hook you into the vertical silo of a tech giant’s services. Technology does not carry an inherent politics, but it does have tendencies to encourage behaviour one way or another. The anti-Facebook revolution will not come in the form of a digital pen, and Microsoft’s emphasis on the pen as a form of personal computing simply mirrors Apple’s similar ethos: pleasure breeds consumption, which in turn breeds profits.

For all that, though, what pens do offer is both practical and symbolic resistance to the pre-programmed nature of the modern web — its tendency to ask you to express yourself, however creatively and generatively, within the literal and figurative constraints of a small, pre-defined box. There is a charming potential in the pen for activity that works against the grain of those things: to mark out in one’s own hand the absurdities of some top ten list, or underlining some particularly poignant paragraph in a way that a highlight or newly popular screenshotting tool doesn’t quite capture. Perhaps it’s the visual nature of the transgression — the mark of a hand slashed across a page — that produces emblematically the desire for self-expression: not the witty tweet or status update, nor just the handwritten annotation, but the doubled, layered version of both, the very overlap put to one’s own, subjective ends. And then there is more simple pleasure: that you are, in both an actual and metaphorical sense, drawing outside the lines. If one can draw over and annotate a web page and then send it to a friend, for example, the web at least feels less hegemonic, recalling the kind of interactivity and freedom of expression once found in the now-broken dream of blog comment sections.

Fascinating stuff. Please read it all.

Friday, July 3, 2015

the blind man's stick

How Things Shape the Mind: A Theory of Material Engagement, by Lambros Malafouris, is a maddening but also fascinating book that is seriously helping me to think through some of the issues that concern me. Malafouris wants to argue that the human mind is “embodied, extended, enacted, and distributed” — extensive rather than intensive in its fundamental character.

He starts his exploration wonderfully: by considering a thought-experiment that Maurice Merleau-Ponty first posited in his Phenomenology of Perception. Merleau-Ponty asks us to imagine a blind man navigating a city street with a cane. What is the relationship between that cane and the man’s perceptual apparatus? Or, as Gregory Bateson put it in Steps to an Ecology of Mind,

Consider a blind man with a stick. Where does the blind man's self begin? At the tip of the stick? At the handle of the stick? Or at some point halfway up the stick? These questions are nonsense, because the stick is a pathway along which differences are transmitted under transformation, so that to draw a delimiting line across this pathway is to cut off a part of the systemic circuit which determines the blind man's locomotion.

(Bateson does not mention and probably was not aware of Merleau-Ponty.) For Malafouris the example of the blind man’s cane suggests that “what is outside the head may not necessarily be outside the mind.... I see no compelling reason why the study of the mind should stop at the skin or at the skull. It would, I suggest, be more productive to explore the hypothesis that human intelligence ‘spreads out’ beyond the skin into culture and the material world.” Moreover, things in the material world embody intentions and purposes — Malafouris thinks they actually have intentions and purposes, a view I think is misleading and sloppy — and these come to be part of the mind: they don't just influence it, they help constitute it.

I believe this example provides one of the best diachronic exemplars of what I call the gray zone of material engagement, i.e., the zone in which brains, bodies, and things conflate, mutually catalyzing and constituting one another. Mind, as the anthropologist Gregory Bateson pointed out, “is not limited by the skin,” and that is why Bateson was able to recognize the stick as a “pathway” instead of a boundary. Differentiating between “inside” and “outside” makes no real sense for the blind man. As Bateson notes, “the mental characteristics of the system are immanent, not in some part, but in the system as a whole.”

If we were to take this model seriously, then we would need to narrate the rise of modernity differently than we’ve been narrating it — proceeding in a wholly different manner than the three major stories I mentioned in my previous post. Among other things, we’d need to be ready to see the Oppenheimer Principle as having a far stronger motive role in history than is typical.

When I talk this way, some people tell me that they think I'm falling into technological determinism. Not so. Rather, it's a matter of taking with proper seriousness the power that some technologies have to shape culture. And that's not because they think or want, nor because we are their slaves. Rather, people make them for certain purposes, and either those makers themselves have socio-political power or the technologies fall into the hands of people who have socio-political power, so that the technologies are put to work in society. We then have the option to accept the defaults or undertake the difficult challenge of hacking the inherited tools — bending them in a direction unanticipated and unwanted by those who deployed them.

To write the technological history of modernity is to investigate how our predecessors have received the technologies handed to them, or used upon them, by the powerful; and also, perhaps, to investigate how countercultural tech has risen up from below to break up the one-way flow of power. These are things worth knowing for anyone who is uncomfortable with the dominant paradigm we live under now.

Wednesday, July 1, 2015

my big fat intellectual project

If there is any one general topic that has preoccupied me in the last decade, it’s ... well, it’s hard to put in a phrase. Let’s try this: The ways that technocratic modernity has changed the possibilities for religious belief, and the understanding of those changes that we get from studying the literature that has been attentive to them. But literature has not been merely an observer of these vast seismic tremors; it has been a participant, insofar as literature has been, for many, the chief means by which a disenchanted world can be re-enchanted — but not fully — and by which buffered selves can become porous again — but not wholly. There are powerful literary responses to technocratic modernity that serve simultaneously as case studies (what it’s like to be modern) and diagnoses (what’s to be done about being modern).

I have not chosen to write a book about all this, but rather to explore it in a series of essays. The two key ones, the ones that form a kind of presentational diptych for my thoughts, are “Fantasy and the Buffered Self”, which appeared here in The New Atlantis last year, and “The Witness of Literature: A Genealogical Sketch”, which has just appeared in The Hedgehog Review.

These essays offer the fullest laying-out of the history as I understand it to date, but there are a few others in which I have elaborated some of the key ideas in more detail: see this essay on Thomas Pynchon, this one on Walker Percy, this one on Iain M. Banks, and this one on Iain Sinclair. Some of these writers are religious, some are not, some are ambivalent or ambiguous; all of them are deeply concerned with modernity and its real or imagined alternatives, especially those which seem to connect us with what used to be called the transcendent.

These recent posts of mine on what I’m calling the technological history of modernity are part of the same overarching project — a way to understand more deeply and more broadly where we are and how we got here. My reflections on these matters will continue, probably in one form or another, for the rest of my life.

the three big stories of modernity

So far there have been three widely influential stories about the rise of modernity: the Emancipatory, the Protestant, and the Neo-Thomist. The Emancipatory account argues that modernity is fundamentally about the use of rediscovered classical learning, especially the Skeptics and Epicureans in their literary and philosophical modes, to liberate European Man from bondage to a power-hungry church and religious superstition. The Protestant account argues that modernity marks the moment when rediscovered biblical languages reconnected people with the authentic Gospel of Jesus Christ, obscured for many centuries by those same power-hungry priests and by the obscurantist pedantries of Scholastic philosophy. The Neo-Thomist account argues that what the others portray as liberation or deliverance was instead a tragedy, an unwarranted rebellion against a church that, while flawed, had managed to achieve by the high Middle Ages a unity of thought, feeling, and action — manifest in the poetry of Dante, the philosophy of Thomas Aquinas, and the great cathedrals of the era — that gave great aid, comfort, and understanding to generations of people, the high and the low alike.

The Neo-Thomists agree with the Protestants in rejecting the Emancipators' irreligion and false, truncated "humanism." The Protestants join the Emancipators in condemning the priestcraft, superstition, and hostility to progress of the Neo-Thomists. The Neo-Thomists and the Emancipators share the belief that the Protestants are neither fish nor fowl, neither religious nor secular.

All of these accounts began five hundred years ago, and all survive today, in popular and in scholarly forms. The Protestant account undergirds the massive studies of Jesus and Paul recently produced by N. T. Wright; the Neo-Thomist account (which was articulated most fully in the early twentieth century by Jacques Maritain and Etienne Gilson) continues in the work of scholars as varied as the English Radical Orthodoxy crowd and Catholic scholars such as Brad Gregory; a classic version of the Emancipatory account, Stephen Greenblatt's The Swerve, recently received both the Pulitzer Prize and the National Book Award.

There may seem to be little that all three have in common, but in fact all are committed to a single governing idea, one stated seventy years ago by an influential Neo-Thomist, Richard Weaver of the University of Chicago: Ideas Have Consequences. But we can present their shared convictions with greater specificity through a twofold expansion: (a) philosophical and theological ideas (b) that emerged half a millennium ago are the most vital ones for who we are in the West today. That is, all these narrators of modernity see our own age as one in which the consequences of 500-year-old debates conducted by philosophers and theologians are still being played out.

I think all of these narratives are wrong. They are wrong because they are the product of scholars in universities who overrate the historical importance and influence of other scholars in universities, and because they neglect ideas that connect more directly with the material world. All of these grands récits should be set aside, and they should not immediately be replaced with others, but with more particular, less sweeping, and more technologically-oriented stories. The technologies that Marshall McLuhan called "the extensions of Man" are infinitely more important for Man's story, for good and for ill, than the debates of the schoolmen and interpreters of the Bible. Instead of grand narratives of the emergence of The Modern we need something far more plural: technological histories of modernity.

It is not my purpose here to supply such histories: that would be a vast undertaking indeed. The closest analogue to what I have in mind is perhaps the 27-book series Science and Civilisation in China (1954-2008), initiated and for several decades edited by Joseph Needham; or perhaps, also on a massive scale, Lynn Thorndike's A History of Magic and Experimental Science (8 volumes, 1923–58) — Thorndike’s project being actually a part of the story I think needs to be told, though it’s outdated now. Other pieces of the technological history of modernity already exist, of course: in the thriving discipline of book history, in various economic and social histories, in books like A Pattern Language and Paul Starr’s The Creation of the Media and Roy Porter’s The Greatest Benefit to Mankind.

Had Porter not died prematurely he would have been the person best suited to telling the whole story, though it’s too big for any one person to tell extremely well. But it needs to be told: we need a complex, multifaceted, materially-oriented account of how modernity arose and developed, starting with the later Middle Ages. The three big stories, with their overemphasis on theological and philosophical ideas and inattentiveness to economics and technology, have reigned long enough — more than long enough.