Text Patterns - by Alan Jacobs

Thursday, October 23, 2014

another note on the Southern Reach

Another note on the Southern Reach trilogy — or something that started as a note but then turned into a critique.

A couple of years ago I wrote that the first installment of Peter Jackson’s Hobbit was marred by “videogame aesthetics.” Remember the dwarves running across the bridges in the goblin caves? Classic side-scroller! It’s Super Thorin Brothers! [UPDATE: As Adam Roberts points out, this should surely be Super Moria Brothers.] But “all I could do was watch the dwarves bounce around from horror to horror. My hands felt empty and useless without the controller they so obviously needed. Video-game aesthetics are built around the assumption of manual activity: they work far better when you have something to do. I didn’t really want to sit passively and watch Peter Jackson play with his Xbox but that’s what I felt was happening to me for much of the second half of the movie.”

The narrative technique of the Southern Reach trilogy — especially in the first book — is likewise derived from videogames, though not side-scrollers: instead, first-person explorers are the model, especially (it seems to me) Myst and Riven. The explorations of Area X in Annihilation very strongly reminded me of my explorations of Myst Island, with some of the interactive elements that emerged in Riven added on. (Myst and Riven could be said to have elements of “strange pastoral” also.)

The biologist walks through the forest to the lighthouse — hey, there’s a ruined village off to the side. Can you interact with anything there? Not really; keep going. The lighthouse is scary but unrewarding until you click on the rug and find the trap door! And then the return trip: can you find the path that lets you escape the moaning creature? If not, maybe you die, and the game restarts back at base camp?

And then in the second book: Explore the official Southern Reach facility! Walk through the spooky room with the gloves; see what's behind the locked door in your office — and search until you find the key that opens the one locked drawer in your desk. Then, if you're really clever, find the trap door — yes, another trap door, but this one in the ceiling of the storage closet! Some of this reminds me of the old text-based adventure games also: type “open the drawer” and you get “Opening the drawer reveals a strange plant, a dead mouse, and an old cellphone.”

Now, I don’t want to get carried away with this: text-based adventure games are a kind of interactive fiction, and interactive fiction draws on the narrative types and tropes of the novel. But there are certain kinds of actions you perform in those games, certain kinds of objects you interact with, that then make their way into videogames — and actions and objects of those general kinds are what populate the Southern Reach saga.

So the story can feel at times like interactive fiction without the interaction; a point which leads me to another, one that might explain my own lack of excitement about the novel, despite its various excellences. Because so much of it seems to be translated from another medium, one perhaps better suited to the shape of the story, the Southern Reach story as a whole feels to me less like a novel than a novelization — in light of which it’s interesting to note the work VanderMeer has done in other media, including games and film.

I’m not happy about what seems to me the increasing influence of videogames on film and fiction — not because I dislike videogames, but because not every artistic medium does everything equally well. Maybe we should let videogames do what they do best; and when we make movies or write novels, we could do worse than think about what those media can do that videogames can’t.

UPDATE: corrections from Jeff VanderMeer:





I understand what VanderMeer is saying: that the books' descriptions of the natural world are based on his own love of and care for that world, based on "actual exploring." And I have no doubt that that's true! But I don't think that his explanation is in any way inconsistent with my thesis that the narrative technique of the novel is indebted to the first-person explorer videogame.

on the Southern Reach Trilogy

Having read Jeff VanderMeer’s Southern Reach Trilogy and thought about it for a while, I’ve decided that I don’t like it as much as I thought I would.

To be sure, some things about it are fantastic: the first volume in particular is sublimely creepy. But I am not sure whether the novel as a whole — and it is one novel broken into three parts, like Lord of the Rings — pays back the investment of time required to read 900 pages.

Mac Rogers, who likes the story very much, says this about it:

This frustration that manifests over and over throughout the trilogy — as the characters repeatedly fail not only to solve the mystery of Area X, but indeed to even meaningfully perceive it — doesn’t feel like a cheat, but rather a natural outgrowth of how these books see humanity. We’re isolated. We’re easily manipulated. We don’t cooperate. We’re poorly suited to our natural habitat, and insignificant in the face of its untamed grandeur. We care more about our image of ourselves, our identity, than about our interaction with the world around us. The colossal physical and spiritual transformation Area X represents is beyond human reference.

I think this is a very plausible read. The question is whether the book makes this point well, vividly, compellingly. The admirable Adam Roberts thinks it does. Writing particularly about the first volume, he argues that “what makes this book so remarkable is less what happens in it, and more its tense, eerie and unsettling vibe. Creating such an atmosphere is a balancing act: on the one hand, the writer must not destroy the mood with too much brute explanation; and on the other, he must not alienate the reader by being too annoyingly oblique. VanderMeer hits exactly the right balance.”

And Roberts thinks this continues to the end: “Finding a way satisfactorily to pay off so much mysteriously tense apprehension is no small challenge for a writer – and VanderMeer manages to avoid banality and opacity both, and generates some real emotional charge while he's about it.” Mac Rogers also talks about the book’s “payoffs,” and also thinks they are significant. I don’t. I’m okay with not getting standard-issue resolutions to mysteries if I get something of equal or greater value instead, but I don’t think that happens here. It seems to me that we know very little more at the conclusion of the novel than we knew at its outset, and what we do learn suffers from unnecessary “opacity” — is, indeed, “annoyingly oblique” in light of the investment of time and energy the book asks of its readers.

And I suspect that the book’s lack of resolution — so many of its major characters (all but one, I’d say) left in mid-journey, and some complications introduced in the third volume for no apparent reason — may be a set-up for a sequel. I don’t mind spending 300 pages being set up for a sequel, but 900? That’s too much to ask.

If my suspicion is correct — if we get a sequel that provides more conventional forms of resolution for the characters, and more conventional answers to the mysteries raised in these volumes — I wonder if Rogers and Roberts will need to revisit their praise for the balance between mystery and revelation they think VanderMeer achieves here. Wouldn’t more clarity be too much?

With all these complaints registered, I want to close by commending the books for their unusual and often eloquent attentiveness to the details of the natural world — and for raising the possibility that there could be massively powerful intelligences that encounter the natural world in ways totally alien to our own — in ways more like the way that animals and plants interact with one another, whether cooperatively, symbiotically, or violently. In another essay Roberts refers to the book as “strange pastoral”, and I think that’s a brilliant designation, and helps to show what the story of the Southern Reach does accomplish.

Monday, October 20, 2014

this brunch will not stand, man

This attack on brunch is worth noting because it exemplifies a couple of recent trends in opinion pieces.

First, we have strategic exaggeration. You don’t just say that you disapprove of brunch, you say that eating brunch manifests a “desire to reject adulthood.” You say it’s a rejection of “the social conventions of our parents’ generation.” You call it “the mealtime equivalent of a Jeff Koons sculpture.” You quote someone else who says brunch is “a symptom of the soulless suburban conformity that is relentlessly colonizing our urban environments.”

In short, you make the most absurdly over-the-top claims imaginable so that when someone calls out the extremity of your language you can reply, “Dude, you need to get a sense of humor.” But of course you don’t actually take anything back, because you meant it. You really, really despise brunch, in a way that really, really is weirdly extreme. But you need not own that because you can invoke “humor” and “irony.”

Which leads to my second point of interest, which is: the panoptic reach of the pink police state. As I’ve noted before, I think James Poulos’s in-development thoughts on this topic are incisive and important, but let me just add some theses for disputation:

  • There is a Law of the Conservation of Moral Energy. That is, the amount of moral energy in a given society is constant. It just gets deployed in different ways.
  • As I have previously noted, in the pink police state there are no adiaphora: everything that is not forbidden is compulsory, and vice versa.
  • As more and more people in our society become convinced that consent is the only relevant ethical category in the domain of sex, the moral energy that once would have gone into policing sexual activity is transferred to questions about when you should eat your meals and what books you should (or should not) read in your spare time — with no diminishment of moral intensity.

Monday, October 13, 2014

update

Folks, things have been quiet around here for the past few days and will likely be quiet for the foreseeable future. I have a great deal on my plate, both professionally and personally. But a few bits of news and/or enlightenment:

1) I had planned to blog about Nick Carr's new book The Glass Cage here, but instead I'm going to review it for Books & Culture. So you'll see my thoughts in a more coherent form than I usually offer on this blog, but they'll be delayed. Spoiler: I like the book a lot and think you should certainly read it, though I have some reservations too.

2) I'm plugging away on my Big Book Project — which gets bigger the more I work on it, so that the more I write the farther I get from the finish line — and, inspired by some of the figures in that story, I'm working my way through some larger and more general thoughts about imagination, creativity, and theology. I lay out some of the problems in this oddball essay; and I point towards a few constructive responses to those problems in a handful of quotes I've posted to my tumblr: here, here, and here. Consider that a teaser for a book to be released in about 2024 (should I be spared).

3) Two other books I'll be reviewing in the coming months: Adam Roberts's magnificent new edition of Coleridge's Biographia Literaria, and a wonderful new volume of Italo Calvino's Complete Cosmicomics.

Wednesday, October 8, 2014

more on social structures and imaginative work

A couple of follow-ups on yesterday’s oddball rantish thing on the social and economic structures that enable or disable genuine imagination:

First, a really thoughtful response from my friend Bryan McGraw, who can provide a political philosopher’s take on these issues. Please read it all, but here’s an excerpt:

No doubt lots of folks on the political and cultural Left will read this (or see pithily tweeted link) and cheer. See, they’ll say, the universities are being “corporatized” and here’s another casualty! Ah, but I think Alan’s point is meant to cut more deeply than that, because what our libertarian economists and socialist sociologists share is a deep, deep commitment to a modern (and post-modern) conception of human moral psychology that reduces human beings to calculating preference machines (whether those preferences emerge out of appetites, culture, whatever makes for many of our differences, but that they rule us is widely held). And since we can see “through” human beings that way, we can organize them (or allow them to organize themselves) in some unitary and unified way. That’s why we can see what looks superficially like a paradox – a society that is both more libertine (sexual ethics limited only by consent) and puritanical (don’t smoke!) – is, in fact, not and why there is a tremendous amount of pressure to remake every institution and range of human activity in the image of, well, something or someone.

In a well-known passage, C. S. Lewis writes, “Nothing strikes me more when I read the controversies of past ages than the fact that both sides were usually assuming without question a good deal which we should now absolutely deny. They thought that they were as completely opposed as two sides could be, but in fact they were all the time secretly united — united with each other and against earlier and later ages — by a great mass of common assumptions. We may be sure that the characteristic blindness of the twentieth century — the blindness about which posterity will ask, ‘But how could they have thought that?’ — lies where we have never suspected it, and concerns something about which there is untroubled agreement between Hitler and President Roosevelt or between Mr. H. G. Wells and Karl Barth.” I think (I hope) that later ages will see almost all of today’s political thought as wrapped up in the unquestioned and even unconfronted assumption that people are simply “calculating preference machines.”

More directly to the point of my article, while Eisenhower may have wanted us to distrust the “military-industrial complex” because of its power to involve private industry in policy-making, and while that is a very important warning indeed, when government, mega-industry, and the university system all become entangled beyond the possibility of disentanglement, the flow of influence runs in all directions, but especially from the richer to the less-rich — from the patrons to the patronized. And that puts universities in the position of being shaped far more than they shape; and that, in turn, puts the artists and writers who work for the university in an even more dependent position. This worries me.

I think I’ll have more to say about Bryan’s smart response, but for now just one note: I do think the anti-capitalist left is likely to find something to cheer in my post; they and I have a good deal in common. My politics are probably too incoherent to describe, but one might say that they are sorta kinda paleo-conservative green-communitarian, emphasizing the need to renew and strengthen the institutions (especially family and local community, and schools insofar as they grow out of family and local community) that mediate between the individual and the nation-state, for the better care of people and the created order. And since the nation-state that is growing and growing and growing in power is an international-capitalist one, I end up agreeing with the left that that nation-state’s dominance is probably our largest single political problem. When I think about politics, I have infinitely more sympathy for a left-anarchist like David Graeber than I do for any National Greatness conservatism. (Bryan, set me straight if I’m leaving the true path here.)

Second: One of the reasons I want to make an argument for regenerating genuine imagination, genuine creativity, is that “imagination” and “creativity” are today almost totally co-opted by scenes like this — the happy-clappy “super excited” artificially-generated enthusiasm of the TED world that Benjamin Bratton has called, in one of the most apt phrases of the twenty-first century, “middlebrow megachurch infotainment”. If that’s what imagination and creativity are all about, may God save us all from them.

Kathy Sierra and online abuse

Kathy Sierra has written a post about her experiences with what we (mildly) call online harassment — a post that may not stay up for long, so if you’re at all inclined, please read it while you can. I just want to say a few words.

1) Understand where I’m coming from when I talk about things like this: I wrote a book on the history of the theological doctrine of original sin that more-or-less openly endorses the claim that we are all fallen, all broken, all tempted by wickedness and all sometimes successfully tempted. As Solzhenitsyn famously wrote, “The line between good and evil runs through the middle of every human heart.” So no wickedness surprises me.

2) Wickedness has to be called by its true name.

3) The people who have abused and harassed and threatened Kathy Sierra (and Lord knows how many other women with online lives) have acted wickedly. Their behavior is not trivial: it is malicious to the highest degree.

4) Psychological and emotional abuse is no less wicked than physical abuse; in some circumstances it can be worse.

5) It does no good to say that these are the acts of “a few bad apples” in the tech world. We have no way of knowing what percentage of men in the tech world act in this way — but in total numbers, there are certainly far more than “a few.” It would be impossible for a relative handful of men using multiple user names to do as much harassing of women as gets done in forums, in comment threads, on Twitter, and elsewhere online. Regardless of the percentages, there are a great many of these cruel and malicious men, and they are very active, and virtually nothing is being done to stop them.

6) What corruption is in my heart, or yours, is not something that can be determined solely by our actions. We may restrain our darkest impulses out of fear — fear of being shamed or punished. It is when we have no fear of exposure or retribution that we act according to our desires. The men who harass women online do so because they think they are protected. For the same reason, children will torment animals when they think adults can't see — they know they have power over the animals, and rejoice in exploiting that power. For the same reason, in Stanley Milgram’s famous experiment, people administered electrical shocks to strangers because they were protected by the authority of the scientists who assigned them that task.

7) It is impossible to overstress how outraged the mobs at Reddit were when one of their nastiest and most prominent trolls was doxxed — this threatened everything they had come to take for granted about their ability to manage their online presence. We have no way of knowing how many men started controlling their cruel impulses after this exposure; probably not very many, since it could be seen as a one-off. But if exposure were more common, we might see some changes in behavior.

8) As the Milgram experiment shows, exposure isn’t the only thing people fear: the people who administered those electrical shocks had their own willingness to inflict torture exposed, but by and large they didn’t mind: they were “happy to be of service”. Similarly, weev has been exposed as one of Kathy Sierra’s abusers, but he has paid no evident social price for being so exposed: as Sierra points out, leading figures in the tech world chat with him in a friendly manner and treat him with respect. To some he is a hero, a martyr. He doesn’t need to be protected from exposure; he is protected by the good opinion and warm bonhomie of his fellow geeks.

9) The best analogy I can think of to this cadre of misogynist trolls is the Ku Klux Klan. The Klan arose not in the era of slavery but as a response to the abolition of slavery, when white men felt that their previously undisputed social dominance was in danger of being undermined. Only a relatively small number of men participated in the Klan’s lynchings and burnings; but almost no one spoke out against them. Though they protected their identities with masks, those identities were nonetheless widely known; yet upstanding citizens greeted them on the street every day, looked them in the eye, smiled, shook their hands. Perfunctory legal inquiries sometimes led to slaps on the wrist, but the Klansmen were willing to risk that, because they paid no social price for their actions. Indeed, they were feared, respected, and sometimes secretly admired — and they knew it.

10) This is why Martin Luther King Jr. wrote, “I have almost reached the regrettable conclusion that the Negro's great stumbling block in his stride toward freedom is not the White Citizen's Counciler or the Ku Klux Klanner, but the white moderate.” Similarly, in this situation the heart of the problem is not people like weev, but the moderate, reasonable, friendly people in the tech world who enable weev. The dedicated trolls are probably beyond correction — and are certainly beyond reasoning with: they are drunk on the power they wield. But those in positions of power in the tech world who would never abuse women online or offline and yet tolerate, even sort of admire, the trolls — they may be reachable. They must be reachable. But reason may not be the only or even the best tool. They are going to have to be exposed, and shamed into action to change the structure of the technological tools and services they control. Otherwise there is no foreseeable end to the kind of abuse that Kathy Sierra and countless other women have experienced.

Tuesday, October 7, 2014

The Devil's Bargain, expanded

It's here.

Monday, October 6, 2014

rebel tech

Electronic technologies are seeking to escape my control — and they are largely succeeding!

This must stop.

Take Ello, about which I have written. I fooled around for a bit, but it has no privacy controls of any kind: everything is public to everyone, nobody can be blocked, etc. I understand that the service is new and still under development, but I won’t be back until I can control my environment (if then).

And then there’s this: I subscribe to some magazines on iOS, because with my aging eyes — I’ve mentioned this before — I really like being able to adjust the type size. (Most print magazines are close to unreadable for me now, unless I take off my glasses and hold them inches from my face, which is not the most comfortable way to read.) But the iOS 8 update broke a number of magazines in Apple’s Newsstand, including Scientific American, and while some of them have been fixed, SciAm has been both inactive and silent. I have paid for their magazines, but I can’t read them; and so far they have not responded to my emails.

These are just reminders that, for all the convenience that online and digital life provides, we use a great deal but own very little indeed. I admire Comixology’s recent move to enable PDF or CBZ downloads of comics I’ve purchased from them — “from participating publishers.” But Marvel and DC (among others) aren’t participating.

So I guess I’d better get used to reading magazines and comics a few inches from my de-spectacled face. And I should rededicate myself to owning my turf.

never mind (for now)

It seems that my thoughts on the issues raised in the previous two posts are expanding, like No-Face in Spirited Away, into something significantly larger than I can manage here. So you will hear more from me on these matters, but probably not in this format.

the devil's bargain: part 2

I promised a follow-up to my previous post, so here I am. In this post and the next I want to discuss two essays by David Graeber — one and two — because I think that, while they seem to have very different purposes, they contribute in interesting and useful ways to a single important point.

Let me say at the outset that I have significant reservations about some details of the arguments that Graeber develops. But I want to see what ideas emerge if we at least take those arguments seriously.

In the first of these essays Graeber takes up the old “Where are our flying cars?” question — or, in my favorite version of the complaint, Jaron Lanier’s sharp comment: “Let’s suppose that, back in the 1980s, I had said, ‘In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX.’”

Here’s Graeber:

Might the cultural sensibility that came to be referred to as postmodernism best be seen as a prolonged meditation on all the technological changes that never happened? The question struck me as I watched one of the recent Star Wars movies. The movie was terrible, but I couldn’t help but feel impressed by the quality of the special effects. Recalling the clumsy special effects typical of fifties sci-fi films, I kept thinking how impressed a fifties audience would have been if they’d known what we could do by now — only to realize, “Actually, no. They wouldn’t be impressed at all, would they? They thought we’d be doing this kind of thing by now. Not just figuring out more sophisticated ways to simulate it.”

So why have things turned out this way? That’s the subject of Graeber’s essay, and if you’re interested in this question at all you should read the whole thing, because he makes his case in some detail. But he sums up that case here:

By the sixties, conservative political forces were growing skittish about the socially disruptive effects of technological progress, and employers were beginning to worry about the economic impact of mechanization. The fading Soviet threat allowed for a reallocation of resources in directions seen as less challenging to social and economic arrangements, or indeed directions that could support a campaign of reversing the gains of progressive social movements and achieving a decisive victory in what U.S. elites saw as a global class war. The change of priorities was introduced as a withdrawal of big-government projects and a return to the market, but in fact the change shifted government-directed research away from programs like NASA or alternative energy sources and toward military, information, and medical technologies.

The question Graeber wants to put to us is this: To what extent are our imaginations shaped — constrained, limited — by our having had to live with the technological choices made by the military-industrial complex — by industries and universities working in close collaboration with the government, in a spirit of subservience to its needs?

Or, to put it another way: How were we taught not even to dream of flying cars and jetpacks? To see “sophisticated simulations” of the things we used to hope we’d really see as good enough?

Next time, I’ll look at the second Graeber essay and start to draw together some of my themes.

Sunday, October 5, 2014

the devil's bargain: part 1

[image: the quoted passage from Blackmur's essay]

So wrote R. P. Blackmur, an eminent poet and critic from Princeton University, writing in the Sewanee Review in 1945. His essay is called “The Economy of the American Writer: Preliminary Notes,” and his chief question is whether it is possible for literary writers to make a living. Plus ça change, oui? An essay very much worth reading for anyone, but especially for people who think that the problem of the aspiring-artist-piecing-together-a-rough-living is a phenomenon of the millennial generation.

Anyhow, Blackmur is concerned because he has run some numbers.

[image: a second passage from Blackmur, with his figures]

In these circumstances, where can the necessary money — money sufficient to allow artists to pursue their art full-time (or nearly so) — come from?

From our vantage point, perhaps the most interesting point here is Blackmur’s uncertainty about the most likely source of support for artists: will they find their place in the world of the university, or in the world of the non-profit foundation? We know how it turned out: while foundations do still support artists of various kinds, universities have turned out to be the chief patrons of American artists — especially writers.

Blackmur sees that even at his moment support for writers and artists is drifting towards the university; he’s just not altogether happy about that. He’s not happy because he has seen that “the universities are themselves increasingly becoming social and technical service stations — are increasingly attracted into the orbit of the market system.” Social and technical service stations: a prophetic word if there ever was one. The universities have in the intervening seventy years become generous patrons of the arts; but what is virtually impossible for us to see, because we can’t re-run history, is the extent to which the arts have been limited and confined by being absorbed into an institution that has utterly lost its independence from “the market system” — that has simply and fully become what the Marxist critic Louis Althusser called an “ideological state apparatus,” an institution that does not overtly belong to the massive nation-state but exists largely to support and when possible fulfill the nation-state’s purposes.

One of my favorite things about W. H. Auden is his tendency, when he has something very serious to say, to cast it in comic terms. In 1946 Auden wrote a poem for the Harvard chapter of Phi Beta Kappa. It is called “Under Which Lyre: A Reactionary Tract for the Times,” and you may listen to the poet read it here. As Adam Kirsch has noted, Harvard had played an important role in the war:

Twenty-six thousand Harvard alumni had served in uniform during the war, and 649 of them had perished. The University itself had been integrated into the war effort at the highest level: President James Bryant Conant had been one of those consulted when President Truman decided to drop the atomic bomb on Japan. William Langer, a professor of history, had recruited many faculty members into the newly formed Office of Strategic Services, the precursor to the CIA. Now that the Cold War was under way, the partnership between the University and the federal government was destined to grow even closer. 

But as Kirsch only hints, Auden was deeply suspicious of the capture of intellectual life by what, fifteen years later, President Eisenhower would call the “military-industrial complex”; and he presented his poem as a direct, if superficially light-hearted, attack on that capture. For Auden, Conant was a perfect embodiment of the “new barbarian” who was breaking down the best of Western culture from within. (See more about this here.)

Soon after his return from Harvard, Auden told his friend Alan Ansen, “When I was delivering my Phi Beta Kappa poem in Cambridge, I met Conant for about five minutes. ‘This is the real enemy,’ I thought to myself. And I’m sure he had the same impression about me.”

first of a series of posts

Saturday, October 4, 2014

defending the liberal arts, once more

Thanks to those who answered my question about defenses of the liberal arts and the humanities.

What makes for a good defense of the liberal arts? (I’ll refer only to the liberal arts in the rest of this post, since defenses of the humanities can usually be fit within that larger category.) That’s a question that can only be answered in relation to a particular audience.

The first possible audience is those who are already involved in the liberal arts but are not sure precisely why — people who sense that what they are doing has some value, but can’t confidently articulate it. For those people, essays like this one, by my colleague Elizabeth Corey, do a wonderful job of teasing out the implicit values and commitments in what we do.

A second possible audience includes people — scientists, or people who associate themselves with SCIENCE (their mental capitals, not mine) — who think that science alone is truth-conducive and that the artes liberales are just a higher form of fooling around.

A third possible audience — and for those of us who teach in liberal-arts settings a likely one — is an especially tough one: parents of college students who want their investment in their children’s education to be repaid in the coin of … well, coin: a good job upon graduation, or as soon after graduation as possible, followed by a lifetime of financial security and steady income growth.

To that first audience I can enthusiastically recommend essays like Elizabeth Corey’s; to the second I am prepared to make some strong arguments about the multiple forms of knowledge and the limits of the scientific method; but to the third audience I don’t have any arguments that I really care to make.

To be sure, I truly believe that study of the liberal arts can yield much economic value, and I can point parents to many, many financially successful people who are quite vocal about how much of their success they owe to liberal education; and when pressed I dutifully pass along the relevant information — because I believe it’s true. But my heart is never in such defenses.

For one thing, I don’t expect the parents to buy it. Parents who think about their children’s education according to an ROI model tend to have very specific beliefs about what professions are sufficiently remunerative, and about how people get into those professions; I know from long experience that those beliefs are not easily shaken.

But even more to the point, I may believe that the liberal arts have economic value but that’s not why I’m in the line of work I’m in; and that’s not why young people want to major in liberal-arts disciplines, either. They, like me, will trot out the ROI arguments, but their hearts aren’t in it either, a condition quite transparent to their parents.

This situation bears close and significant analogies to another one I find myself in fairly regularly: being asked to explain why I am a Christian, or why I think Christianity makes sense. Over several decades I have tried many responses to those folks, but I now think the best one is simply this: Come and see. Christianity is not simply a set of beliefs; what Christians believe is intimately intertwined with what they do. Christian life is a set of practices — intellectual, doxastic, social, economic — and cannot be fully defended, or even accounted for, to people unwilling to participate, at least to some degree, in those practices. To put it another way, you can’t get any return on an investment (of time and observation) that you haven’t made.

I think much the same can be said of the liberal arts. When properly pursued, they constitute something close to a way of life: a set of practices of inquiry conducted by people who share space and time with one another, whose conversations are extended and embodied. If you want to understand the value of a liberal education, in a very real sense you have to be there.

So to the parents who can’t understand why they should pay for their son or daughter to study literature or philosophy or art history, maybe the best thing I can say is something like this: “I fully understand your concern. And you have every right to know what you are paying for, and to believe that it has value. But if you want to know what value this education has, you’ll need to spend some time with us. It may not make sense from the outside; so come and see.”

Friday, September 26, 2014

Ello, it's me

I’m sure this has been noted many times, but it strikes me that one of the central conventions of sitcoms is that people have a single location where they tend to meet: Cheers, Monk's Café, Central Perk, Paddy’s Pub, etc. 

All social-media platforms aspire to be this: the one-stop shop for your connecting-to-friends-and-family needs, your hourly drip-feed of emotional sustenance. And for some that’s how it is: many millions (tens of millions? hundreds of millions?) of people almost never leave Facebook. 

But splitting social time is more the norm, I think — certainly IRL it is. You may have one place where you’re more likely to meet your friends, but it’s probably not the only place. Thus Foursquare: Where are my people hanging out tonight? 

And in a larger sense, what matters is not where we connect, but that we connect, yes? Thus Google integrates chat into mail, and Apple integrates phone-network text messages with their own iMessage network. Thus also iOS’s Notification Center: maybe your significant other sent you an email, maybe he sent you a text, maybe he Skyped you, maybe he DM’d you on Twitter — who cares? The point is: you have been addressed by someone you care about; you want to answer. 

And your smartphone works pretty well as an aggregator of communications — as long as someone initiates contact with you. But — and behold the power of FOMO — what if your friends are having a fantastic conversation on Twitter but you’re over at Ello? Or vice versa? Wouldn’t that be terrible? What if they’re even exchanging thoughts in the comments on someone’s blog? You’ll never find them there.

So: where’s Foursquare for social media? Foursquare for online conversations? A heat map of my friends’ social activity? I doubt that any of the existing platforms want to write APIs that would allow that to happen, but wouldn’t it be cool? 
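
Nothing like that exists, so far as I know; but as a minimal sketch of the idea (assuming friends who expose ordinary public RSS feeds, with every name and URL below made up for illustration), it might look something like this:

```python
# A rough sketch of a "heat map" of friends' activity, built from nothing
# fancier than public feeds. The URLs are hypothetical; real platforms would
# need to offer far richer APIs than this.
import time
import feedparser  # pip install feedparser

FRIEND_FEEDS = {  # hypothetical endpoints, for illustration only
    "alice (blog)": "https://alice.example.com/feed.xml",
    "bob (microblog)": "https://micro.example.com/bob/rss",
}

def activity_heatmap(feeds, hours=24):
    """Count how many items each friend has posted in the last `hours` hours."""
    cutoff = time.time() - hours * 3600
    counts = {}
    for name, url in feeds.items():
        parsed = feedparser.parse(url)
        recent = [
            entry for entry in parsed.entries
            if getattr(entry, "published_parsed", None)
            and time.mktime(entry.published_parsed) >= cutoff
        ]
        counts[name] = len(recent)
    return counts

if __name__ == "__main__":
    for name, n in sorted(activity_heatmap(FRIEND_FEEDS).items(),
                          key=lambda item: -item[1]):
        print(f"{name}: {n} posts in the last day")
```

A real version would need the platforms' cooperation, which, as I say, I doubt is forthcoming.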

to know as I am known (by Apple)

[image: iOS screenshot]

Take a good look at the iOS screenshot above and you’ll see something interesting. Look just above the keyboard, at the row of suggested words — a new feature in iOS 8, though one that has been around a while in Android.

Apple doesn’t just get its suggestions from a universal dictionary. Its description of this “predictive typing” says,

As you type, you’ll see choices of words or phrases you’d probably type next, based on your past conversations and writing style. iOS 8 takes into account the casual style you might use in Messages and the more formal language you probably use in Mail. It also adjusts based on the person you’re communicating with, because your choice of words is likely more laid back with your spouse than with your boss. Your conversation data is kept only on your device, so it’s always private.

The key phrase here is “based on your past conversations” — but it’s not only conversations that Apple is drawing on.

See the word “DeepArcher”? Not a dictionary word. In fact, it’s a recent coinage, from Thomas Pynchon’s 2013 novel Bleeding Edge, where it’s the name of an MMORPG. How did it make its way into a list of “predictions” for my typing?

This is how: I wrote a review of Bleeding Edge. Apple didn’t scan the internet for it, though; rather, a few months ago, I decided to test the “Open in” feature on my iPad, and opened the MS Word version of my review — I always send stuff to editors as .doc files, even though I never actually write anything in Word, and in fact don’t own it — in Pages, Apple's own word processing application. By opening it in Pages, I saved it to iCloud, and Apple evidently uses all my documents in their cloud, as well as my “conversations” in Messages or in my .me mail (which I don’t use), to create a corpus from which its predictions may be drawn.

Either innocent or creepy, depending on how you think about it. But considering Pynchon’s status as the unquestioned poet laureate of paranoia, there’s something perversely appropriate about that word showing up on my keyboard — and about my surprise when it did.
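
For the curious: here is a toy sketch of the kind of corpus-based prediction at work. It is a crude bigram counter, emphatically not Apple's actual system, and the little corpus below is made up to stand in for my messages and cloud documents:

```python
# A toy bigram model (not Apple's system): count which words follow which
# in a personal "corpus," then suggest the likeliest successors.
from collections import Counter, defaultdict

def build_model(corpus_texts):
    """Tally word pairs across the user's own writing."""
    model = defaultdict(Counter)
    for text in corpus_texts:
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev.lower()][nxt] += 1
    return model

def suggest(model, prev_word, n=3):
    """Return up to n words most often typed after prev_word."""
    return [word for word, _ in model[prev_word.lower()].most_common(n)]

# Hypothetical corpus: a few messages plus one document synced to the cloud.
corpus = [
    "finished the review of Bleeding Edge last night",
    "DeepArcher is the game at the center of Bleeding Edge",
    "the Bleeding Edge review is due to the editor on Friday",
]
model = build_model(corpus)
print(suggest(model, "Bleeding"))  # ['Edge'] (a coinage, not a dictionary word)
```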

hello Ello

Like everyone else I know, it seems, I’m fooling around with Ello. My first comment there was: “So if I understand this correctly, we're all going to follow the same people here that we follow on Twitter, and then we're done, yes?” My next comments were about the peculiarities of the timeline, or more specifically conversation threads, which are not in chronological order. I could try to explain, but it makes me tired. Maybe I’m just old — which leads me to Dan Hon’s really interesting reflections:

It's really hard to use, and apparently I'm not the only one who finds it that way. It's opaque and cool and I'm not entirely sure that this is a conscious design choice: in that I'm not convinced that it's been intentionally designed this way to keep the olds out.

There's a lot to be skeptical about with ello. After having spent three years in manifesto-land, ello's manifesto sets off alarm bells for me because there are a bunch of things that they're saying that either aren't true, or feel like overreaching. Certainly there are things in there that resonate with people ("You are not a product"), but the way that they're acting in communications ("In the meantime, please help us spread the word") doesn't address the fact that there's labour to be profited from. And again, the ... pattern of not including content in notification emails to increase click-through for site retention means that those emails saying someone has replied to your post don't actually include the reply to your post: requiring you to go back to the site.

Completely separate to whether ello is going to work or not is the idea above that it's intentionally designed in a difficult to use way purely to define it as a separate space, much like the way that teens like to invent new language so that they can erect some sort of language boundary. The idea that there's an evolution from language to products/services with which to create safer/more private spaces is super interesting and feels like something that we're potentially seeing more of....

A great deal to think about here! Some initial responses:

  • Will hard-to-use keep the olds (ahem) out? Possibly. Will hard-to-use keep the trolls out? Definitely not.
  • If, as Ello’s creators tell me, I am not the product, what is the product? Ello’s creators place a lot of emphasis on its being advertising-free, which surely means that at some point they’re going to have to charge for the service — which is fine by me, as a dues-paying member of the anti-free-software club — but that will run against the grain of how people are used to thinking of social networks. That will limit the size of the community, which also would be fine by me, and maybe fine by Ello’s creators as well.
  • I definitely agree that there’s a strong movement towards “products/services with which to create safer/more private spaces,” but I have serious doubts about whether creating new proprietary social platforms is the way to do that. It seems to me, as I have already suggested, that we might do better to think about how to leverage the powers of the open web and its existing and very powerful technologies. (Presumably Ello itself, like Twitter, is built on RSS.)
  • In light of all this, I continue to think the smart move will be to own your turf, keep your ideas and pictures and videos there, and use whatever social networks are currently regnant to announce its presence, along the lines of the sketch below.
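
To illustrate that last point with a minimal sketch only (the file path and the announcement formats below are hypothetical, not any particular tool's): the canonical copy goes into an archive you control, and the networks get nothing but a pointer.

```python
# A sketch of the "own your turf" pattern (hypothetical paths and formats):
# the canonical copy of a post lives in an archive you control; social
# networks only ever get a short pointer back to it.
import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE = Path("my-site/posts.jsonl")  # the copy you actually own

def publish(title: str, body: str, permalink: str) -> dict:
    """Append the canonical post locally, then build announcement blurbs."""
    post = {
        "title": title,
        "body": body,
        "url": permalink,
        "published": datetime.now(timezone.utc).isoformat(),
    }
    ARCHIVE.parent.mkdir(parents=True, exist_ok=True)
    with ARCHIVE.open("a") as f:
        f.write(json.dumps(post) + "\n")
    # Short pointers to paste into (or push through) whatever network is
    # currently regnant; the post itself stays home.
    return {
        "twitter": f"New post: {title} {permalink}"[:140],
        "ello": f"Wrote something new, {title}: {permalink}",
    }

announcements = publish(
    "hello Ello",
    "Like everyone else I know, I'm fooling around with Ello...",
    "https://example.com/2014/09/hello-ello",
)
print(announcements["twitter"])
```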

And some further non-Dan-Hon-based thoughts:

  • Ello has clearly gotten more immediate traction than App.net did, which got more immediate traction than Diaspora did. This suggests increasing levels of dissatisfaction with existing social networks.
  • Just browsing around on Ello, I see more people describing it as a Facebook replacement than a Twitter replacement. For what that’s worth.
  • Adam Rothstein says, “Ello will one day suck.” Quinn Norton endorses this view and adds, “I hope Ello can be cool, at least for a while. And when Ello fails, I will look for the next thing.” Thoughtful people, then, are going into this expecting it to be a temporary phenomenon. So in such a situation, what do you do? Do you (following the own-your-turf model) make a point of saving to your own computers everything you do on Ello — and Twitter, and Facebook, and Tumblr? Or do you learn to accept that the conversations we have, and the things we make, on modern social media are just as ephemeral as front-porch conversations and castles in the sand?

Thursday, September 25, 2014

bleg: on defending liberal education

Every defense of liberal education in general, and the humanities more specifically, that I can think of makes one or more of the following arguments:

  • studying the liberal arts makes you a better citizen
  • studying the liberal arts makes you more empathetic and compassionate
  • studying the liberal arts teaches you critical-thinking skills
  • studying the liberal arts makes you a capable communicator, in speech and writing
  • knowledge is good for its own sake

Have I missed anything? Are there any other common defenses that I'm overlooking?

Wednesday, September 24, 2014

think locally, act globally

I was drafting this post before Freddie deBoer’s recent post on the subject, so this isn’t really a response to Freddie. But what the heck, call it a response to Freddie.

I want to respond by changing the terms of the conversation: Instead of asking “What is the university for?” I’d like for us to ask, “What is this university for?” — “this” university being whatever university I happen to be associated with or to care about.

For instance, I teach in the Honors Program at Baylor University, an intentionally Christian research university — one of the few in the world — that happens to sit in the middle of an exceptionally poor city. So I and my colleagues need to ask:

  • What is the role of the Honors Program within the framework of the university as a whole, whose students are not, by and large, as academically accomplished?
  • What should Baylor be doing to become, more and more fully and truly, a *Christian* university — to be deeply serious about its faith commitments and its academic ambitions?
  • What can Baylor do to be a good institutional citizen within its local community — to feed the hungry and shelter the homeless and train the jobless — since, after all, these would seem to be mandatory concerns for Christians of all descriptions?

I really believe that this is how we should be thinking about our universities: not deductively, by reasoning from what “the university” should be to how we might instantiate that ideal locally, but rather inductively: from what this particular institution is called to be, and is capable of being, to larger generalizations. I truly believe that if we could suspend the general conversation about “the university” for a decade, a decade during which every American institution of higher learning focused on understanding and realizing its own particular mission, and then reconvened with one another to compare notes — then we just might get somewhere.

And I further believe that by attending to its own home turf — its own students, its own faculty, its own surrounding community — any given university will be better able to serve the larger world of academia and society. The old slogan “Think globally, act locally” gets it precisely backwards, I believe: it is only by thinking and acting locally that we can make the right kind of difference globally.

UPDATE: Roberto Greco reminded me of what Wendell Berry says about this:

I don’t think “global thinking” is futile, I think it is impossible. You can’t think about what you don’t know and nobody knows this planet. Some people know a little about a few small parts of it … The people who think globally do so by abstractly and statistically reducing the globe to quantities. Political tyrants and industrial exploiters have done this most successfully. Their concepts and their greed are abstract and their abstractions lead with terrifying directness and simplicity to acts that are invariably destructive. If you want to do good and preserving acts you must think and act locally. The effort to do good acts gives the global game away. You can’t do a good act that is global … a good act, to be good must be acceptable to what Alexander Pope called “the genius of the place”. This calls for local knowledge, local skills, and local love that virtually none of us has, and that none of us can get by thinking globally. We can get it only by a local fidelity that we would have to maintain through several lifetimes … I don’t wish to be loved by people who don’t know me; if I were a planet I would feel exactly the same way.

Sunday, September 21, 2014

happy stories


Tell me a happy story, please! Pretty please?? A happy, happy story.

Turns out a good many people are willing to comply.

Saturday, September 20, 2014

how not to write a book review, techno-utopian edition

Maria Bustillos’s review of Nick Carr’s new book The Glass Cage is really, really badly done. Let me illustrate with just one example (it’s a corker):

In the case of aviation, the answer is crystal clear, yet Carr somehow manages to draw the opposite conclusion from the one supported by facts. In a panicky chapter describing fatal plane crashes, Carr suggests that pilots have come to rely so much on computers that they are forgetting how to fly. However, he also notes the "sharp and steady decline in accidents and deaths over the decades. In the U.S. and other Western countries, fatal airline crashes have become exceedingly rare." So yay, right? Somehow, no: Carr claims that "this sunny story carries a dark footnote," because pilots with rusty flying skills who take over from autopilot "often make mistakes." But if airline passengers are far safer now than they were 30 years ago — and it's certain they are — what on Earth can be "dark" about that?

Note that Bustillos is trying so frantically to refute Carr that she can’t even see what he’s actually saying. (Which might not surprise anyone who notes that in the review’s first sentence she refers to Carr as a “scaredy-cat” — yeah, she actually says that — and in its third refers to his “paranoia.”) She wants us to believe that Carr’s point is that automating the piloting of aircraft is just bad: “the opposite conclusion from the one supported by facts.” But if Carr himself is the one who notes that “fatal airline crashes have become exceedingly rare,” and if Carr himself calls the decline in air fatalities a “sunny story,” then he just might not be saying that the automating of flight is simply a wrong decision. Bustillos quotes the relevant passages, but can’t see the plain meaning that’s right in front of her face.

Carr cites several examples of planes that in recent years have crashed when pilots unaccustomed to taking direct control of planes were faced with the failure of their automated systems. Does Bustillos think these events just didn't happen? If they did happen, then we have an answer to her incredulous question, “If airline passengers are far safer now than they were 30 years ago ... what on Earth can be "dark" about that?” That answer is: If you’re one of the thousands of people whose loved ones have died because pilots couldn't deal with having to fly planes themselves, then what you’ve had to go through is pretty damned dark.

Again, Bustillos quotes Carr accurately: The automation of piloting is a sunny story with a dark footnote. If Carr says anywhere in his book that we would be better off if we ditched our automated systems and went back to manual flying, I haven’t seen it. I’d like for Bustillos to show it to me. But I don't think she can.

The point Carr is making in that chapter of The Glass Cage is that flight automation shows us that even wonderful technologies that make us safer and healthier come with a cost of some kind — a “dark footnote” at least. Even photographers who rejoice in the fabulous powers of digital photography know that there were things Cartier-Bresson could do with his Leica and film and darkroom that they struggle to replicate. Very, very few of those photographers will go back to the earlier tools; but thinking about the differences, counting those costs, is a vital intellectual exercise that helps to keep us users of our tools instead of their thoughtless servants. If we don't take care to think in this way, we’ll have no way of knowing whether the adoption of a new technology gives us a sunny story with no more than a footnote’s worth of darkness — or something far worse.

All Carr is saying, really, is: count the costs. This is counsel Bustillos actively repudiates: “Computers are tools, no different from hammers, blowtorches or bulldozers; history clearly suggests that we will get better at making and using them. With the gifts of intelligence, foresight and sensible leadership, we've managed to develop safer factories, more productive agricultural systems and more fuel-efficient cars.” Now I just need her to explain to me how those “gifts of intelligence, foresight and sensible leadership” have also yielded massively armored local police departments and the vast apparatus of a national surveillance state, among other developments.

I suppose “history clearly suggests” that those are either not problems at all or problems that will magically vanish — because if not, then Carr might be correct when he writes, near the end of his book, that “The belief in technology as a benevolent, self-healing, autonomous force is seductive.”

But that’s just what a paranoid scaredy-cat would say, isn’t it?

UPDATE: Evan Selinger has some very useful thoughts — I didn't see them until after I wrote this post.

Wednesday, September 17, 2014

creative futures, the Minecraft edition

Please read Robin Sloan’s wonderful reflection on Minecraft — or rather, the implications of the game for people who enjoy thinking about generative engines and collaborative creation.

Here’s what gets Robin’s wheels turning. As a Minecraft beginner you might find yourself presented with something like this:

[image: a basic Minecraft scene]

But people who really know what they’re doing can end up with something like this:

[image: an elaborate Minecraft build ("ancient")]

Or this:

[image: another elaborate Minecraft build ("GoT")]

(More examples here.) Robin comments, “People often compare Minecraft to LEGO; both support open-ended creation (once you’ve mastered the crafting table, you can build nearly anything) and, of course, they share an essential blockiness.” But in his view there’s a major, major difference: “I think this comparison is misleading, because a LEGO set always includes instructions, and Minecraft comes with none.”

Now, that is actually only true of what we might call Modern LEGO, which tells you on the box what you’re going to make and teaches you how to make it. But the good old Basic LEGO sets — buckets and boxes of many different kinds of bricks — are a lot more like Minecraft, except that it’s pretty obvious what to do with those bricks, whereas, as Robin points out, finding your way around Minecraft without help is a trial-and-error process with “a lot of errors.”

But see, Minecraft as such doesn’t help you. You’re on your own — unless you consult resources (websites, YouTube videos, books) created by users, by the community of players. This is what fascinates Robin about Minecraft: that it is not complete, exhaustive, and closed, but rather open-ended and generative. It doesn’t finish itself, but is comprised, essentially, of the instructions that allow users to develop it further and further.

And one of the ways this happens is in books. Robin — a tech guy who also loves books, as a recent novel of his demonstrates — is utterly taken with codexes about Minecraft: “I’m not a huge Minecraft player myself—my shelter never grew beyond the rough-hewn Robinson Crusoe stage—but I look at those books and, I tell you: I am eight years old again. I feel afresh all the impulses that led me towards books and writing, toward the fantastic and science-fictional… except now, there is this other door.”

This other door — a door that Robin thinks he might be able to walk through in some future book of his own. A door that links the world of screens to the world of print, a linkage that holds out at least the possibility of stories being collaborative and self-generating in ways that go beyond Choose Your Own Adventure. There have been attempts at this already: the Mongoliad created by Neal Stephenson et al. seemed at one point to be headed in this direction, though in the end users were able only to “supplement” rather than direct the course of the narrative and the development of the fictional world.

I don’t think Robin knows where these thoughts are headed — and that’s the exciting thing. Doors are opening in unexpected places, but we can’t yet know where they’re leading. Not long ago I was having a backchannel conversation on Twitter with a designer whose work I really, really admire, and we said to each other, “We should do something together.” I don’t think either of us had any idea what that might be, and in any case both of us have big tasks to complete. But there is something intrinsically exciting about the idea of collaborating, not only with people in your own field, others who do more or less what you do, but with people who do totally different things. In such a circumstance one of the prime drivers of collaboration is the desire to find out what collaboration looks like and feels like — to connect with the experience of gifted people who think differently than you think, use tools that are alien to you, approach problems from what to you are strange angles. It’s interesting that the mystery at the heart of Robin Sloan’s Mr. Penumbra’s 24-Hour Bookstore requires people who come from rather different worlds and possess rather different skills to pool their resources and work together.

I’m thinking about all this because of Robin’s post, in which he says that he’s got a book to finish — a book of the kind that he’s written before — but wonders what the generative world of Minecraft might teach him about future projects. Well, I too have a book to finish — but after that, why not something different? And why not look for models of intellectual and creative energy in unfamiliar locations? There might be some pretty cool surprises around the future’s next corner.

why the jerks didn't win

The conclusion of this Jason Kottke post got me thinking:

People ascribe all sorts of crazy stuff to you without knowing anything about the context of your actual life. I even lost real-life friends because my online actions as a person were viewed through a conceptual lens; basically: “you shouldn’t have acted in that way because of what it means for the community” or some crap like that. Eventually (and mostly unconsciously), I distanced myself from my conceptual counterpart and became much less of a presence online. I mean, I still post stuff here, on Twitter, on Instagram, and so on, but very little of it is actually personal and almost none of it is opinionated in any noteworthy way. Unlike Persson or Fish, I didn’t quit. I just got boring. Which I guess isn’t so good for business, but neither is quitting.

When I think about this in relation to my recent posts on Twitter, and Erin Kissane’s recent post, and the increasing number of periodicals that have eliminated comments on their articles, it all tempts me to think: the jerks won.

The jerks: the people who use social media not to converse but to crow like demented roosters, to nurse every petty grievance, to do the typewritten version of this — they’re powerful. I don’t know how many of them there are, or what percentage of readers they are, but their persistence is amazing, and eventually they drive most people of good will out of the territory. Eventually a guy like Jason Kottke, anything but a belligerent or controversial person, just starts keeping his opinions to himself because it isn’t worth the trouble of dealing with all the nastiness — not disagreement but the nastiness, of the kinds listed above and others — that comes when you express a point of view about anything.

And yet no. The jerks haven’t won after all, unless we let them.

For one thing, they can’t change the fact that before they grew in numbers and influence, Twitter was pretty cool and many of us made friends there that we wouldn’t have made elsewhere. Take, for instance, Erin Kissane and me. According to most socio-political metrics we might not seem to have a lot in common, and if socio-political metrics were the only ones available, Erin and I probably would never have connected. But we both like books and reading and we laugh about some of the same things; and I deeply admire Erin’s determination to be kind even to people who are unkind, even as she stands up for the causes she really believes in. (She’s far more charitable than I am.) She’s one of the best people I have met on Twitter — and I don’t think I ever could have met her had it not been for Twitter.

There’s no question that the increasing power of the jerks on Twitter makes it much harder to cultivate friendships there now; but it doesn’t take away the friendships that have already been formed. Nor does it take away the possibility of cultivating those friendships. While there are seasons for making new friends, there are also seasons for strengthening the ones that already exist. It might be time to start thinking about creating, or going back to, tools that help us achieve those goods. And it’s not just a matter of tools, as Erin explains:

Beyond the tools, though, I’m trying to make an emotional shift from exuberant joyful angry frenetic Twitter to something subtler and gentler. When moved to discuss something about which I feel strongly, I’m beginning to default to a longer form first, to reduce the heat of my Twitter conversations and boost the light I work by elsewhere.

To “boost the light I work by elsewhere” — that sounds like a really good idea.

Tuesday, September 16, 2014

uncomfortable

When I returned from the physical shock of Nagasaki, which I have described in the first page of this book, I tried to persuade my colleagues in governments and in the United Nations that Nagasaki should be preserved exactly as it was then. I wanted all future conferences on disarmament, and on other issues which weigh the fates of nations, to be held in that ashy, clinical sea of rubble. I still think as I did then, that only in this forbidding context could statesmen make realistic judgements of the problems which they handle on our behalf. Alas, my official colleagues thought nothing of my scheme; on the contrary, they pointed out to me that delegates would be uncomfortable in Nagasaki.
— Jacob Bronowski, Science and Human Values (1953)

Would Davos Man still want to rule the world if he had to be uncomfortable doing it?

Friday, September 12, 2014

a civil tongue

A sudden consensus seems to be emerging among a subset of current-event commentators: there are big problems with the term “civility.” Here’s the claim, summed up:


(NB: see important clarifications/corrections from Pat Blanchfield in the comments below.)

Likewise, Elizabeth Stoker Bruenig writes of the “cult” of civility, of its “peculiar tyranny.” Freddie deBoer agrees, and goes a step further: “Civility is the discourse of power…. That’s what civility is, in real life: the powerful telling us that we must speak to them with deference and respect, while they are under no similar responsibility to us.”

I think these complaints are immensely counterproductive. Does the term “civility” get misused? Of course it does — just like every other term celebrating a virtue or an achievement. But it’s sloppy and thoughtless to allow criticism of a term’s abuse to slide into dismissal of the term itself. What words have been more abused than “justice” and “peace” and “charity”? Yet it would be madness to stop using those words because of the ways that bad people have sought to deploy them. They must be rescued and redeployed. The same is true, I think, of “civility”, of which, surely, there is not enough in this fetid swamp of abusive language everyone on social media at least dips a toe into every day.

So, to Freddie I would say that if the powerful demand a civility from the powerless that they are not willing to offer in turn — a claim that I agree with whole-heartedly — then the problem is not that the powerful invoke civility as a virtue but that they are rankly hypocritical, acting in a way totally at odds with their rhetoric. The critique should focus on that hypocrisy; such a critique is not aided by the abandonment of the ideal of civil discourse.

Making a rather different argument, Bruenig writes, “We should all want to be the kind of person who is charitable, merciful, quick to forgive and quick to ask forgiveness; these are all better virtues than ‘civility’ anyway, which is by its own admission little more than a veneer of these genuine virtues.” Well, sure! But the lesson Bruenig draws from this point is the opposite of the one that should be drawn. It is precisely because civility is a lesser virtue that we should be at pains to cultivate it. It is precisely because charity and mercy and forgiveness are so hard that we build a bridge to them by the lesser virtue of civility. I may not be able yet to love my enemies as I should, but if I can practice civility towards them that’s a step in the right direction. If that’s a “cult,” it’s one I want to belong to. A world in which the language we use towards others does not aspire to something nobler than we feel at the moment — well, again, that’s the world of most social media. And it’s not a healthy one.

Nor is civility of discourse incompatible with speaking truth to power. Indeed it may be necessary if one would speak that truth in a way that it can be heard. Consider, as a paradigmatic example of what I mean, Martin Luther King’s “Letter from the Birmingham Jail.” It’s hard to imagine anything more civil. It’s also hard to imagine anything more devastating. King held himself to a strict standard of civility because setting that standard aside would have reduced the likelihood of his people entering their promised land.

It may well be true that some nasty and ill-intentioned people have tried to co-opt the language of civility. For heaven’s sake let’s not help them do so. Instead, let’s take it back.

Thursday, September 11, 2014

articulation

Sviatoslav Richter

When I learned to play the guitar, many years ago, I developed a near-obsession with the musical virtue of articulation. I’m not sure why; maybe because I found it so hard to play without slurring notes or missing them altogether, and without introducing unintentional variations in volume. I came to love guitarists, like Martin Simpson and Stephen Bennett, who managed to articulate every note with wonderful precision — but who did so without losing musical flow and flair.

(Simpson is above all others my guitar hero, and if you want a brief master class in mixed finger- and thumb-picking, slide-playing, and alternate tunings, just take a look at this video — and listen to the stuff at the beginning about why he plays what he plays. Also, don’t stop before the six-minute mark. If you want a closer and higher-definition look at what he does, check out this video — especially useful for guitarists interested in technique.)

Oddly — or maybe not so oddly, I don’t know — my fascination with articulate guitar players has affected my listening to other music. For instance, I have long loved Glenn Gould’s way with Bach: his pedal-free, hyper-articulated approach plays right into my obsessions — especially given his famous recording style, with the microphone stuck right into the piano. Gould’s Goldbergs, and his Well-Tempered Clavier, were simply my versions of those masterpieces for many years.

But … that humming. When I’m listening on speakers I can ignore it; but in the past few years I have been listening to music more and more often on headphones, and the extraneous racket increasingly got on my nerves. I decided I needed a new experience of listening to Bach’s piano music.

So I bought this: the performances that Sviatoslav Richter recorded in the 1970s. At first they were almost impossible for me to listen to: all that pedal! And the echo! — as though it were recorded … I don’t know, in a concert hall or something. What’s up with that? I huffed and sighed; Richter made me deeply uncomfortable. In comparison to Gould his playing seemed so florid and Romantic, thoroughly un-Bach-like.

But I kept listening.

And after I settled down, I couldn’t deny that Richter played with great intelligence and, yes, articulation; his playing wasn’t so stereotypically “Romantic” as I had first assumed; he was, I came increasingly to feel, simply adapting Bach’s music to the character of the instrument, which was, after all, not a harpsichord but a pianoforte. The magnificent architecture of Bach was still there, and in a way brought forth with a new clarity and beauty by Richter’s style.

And after Richter captured my imagination, going back to Gould was … well, not disappointing, exactly — but his way of playing Bach no longer seemed so inevitably right to me. Perhaps he was, at times, allowing a fetish for articulation to displace other musical virtues. On the other hand, I noticed that he did indeed sneak a little pedal in there, allow a few resonances — he was not as rigid a purist as I had thought. And Gould, who is famous for his fast tempos, can take things very, very slow as well: try listening to his version of the Prelude and Fugue in F minor, followed by Richter’s. Richter takes the Prelude about twice as fast as Gould does — I have to listen with some care to be sure that they’re playing the same thing.

I still love Gould, but at this point I think I love Richter more. In fact, I don’t know that I own a record that I treasure more than Richter’s WTC. I do wish that Richter’s recording technique, as opposed to his playing, had been a little more like Gould’s; but he has somehow become my measuring rod, the performance against which I measure others. If only he had recorded the Goldbergs! But if I want a contrast to Gould’s approach to that masterpiece, I have Murray Perahia, and, more recently, Jeremy Denk.

Who knows what versions of Bach I will listen to the most over the coming years? But in any case I am immensely grateful to live in an age which offers me so many wonderful recordings, so many performances of such variety. In exploring this music in the company of multiple performers I draw closer and closer to its heart.

Wednesday, September 10, 2014

biases

I’m reading the first volume of David Bromwich’s projected two-volume intellectual biography of Edmund Burke, and it’s fantastic. A magnificent piece of scholarship, about which I hope to have more to say. But in my typically perverse way, I’m going to devote this post to a disagreement — though not to be ornery, I promise. I have a constructive point to make.

One of the chief purposes of Bromwich’s book, it seems to me, is to rescue Burke from those who praise him, especially conservatives. (“No serious historian today would repeat the commonplace that Burke was the founder of modern conservatism” — no serious historian, though I fear that serious historians are defined as those who don’t say that Burke was the founder of modern conservatism.) So any of Burke’s statements that might confirm a conservative or generally traditionalist reading get some careful scrutiny from Bromwich, which in general is, I think, a good thing: conventional wisdom should always get doubled scrutiny.

But what about Burke’s religious beliefs? Here’s a noteworthy passage from Bromwich’s “Introduction”: 

Replying once to a question about his religious beliefs, Burke said he was a Christian “much from conviction; more from affection.” The remark is open to various readings. I take it to imply that for him, ordinary feelings such as trust, though they have a Christian correlative, themselves supply a sufficient groundwork of moral conduct. 

Try though I might, I can see absolutely nothing in Burke’s statement to warrant Bromwich’s inference. Burke doesn’t say anything there about trust, about ordinary feelings, about moral conduct or the grounds thereof. In fact, if you look at the original letter (in Volume VI of The Correspondence of Edmund Burke), he’s not even describing his own beliefs per se: the complete sentence is “I am attached to Christianity at large, much from conviction; more from affection.” The context is reflection on the denominational divisions among Christians, something an Irishman could scarcely not have thought about. And Burke’s point is, quite obviously I think, that his “attachment” to Christianity — something distinct from belief in its teachings — is supported in two ways: first by rational conviction, and second, and to a greater degree, through the testimony of his affections. It is a statement about the grounds of attachment, not about the foundations of moral conduct. 

So why does Bromwich read it the way he does? It seems to be a pre-emptive strike against any claim that Burke’s Christian convictions are essential to his thought. If there is a “sufficient groundwork” for morals outside the framework of Christian teaching, then Christian teaching can be largely set aside in an intellectual biography of Burke, even if it provides a “correlative” to beliefs held on other grounds. 

Yet Bromwich concludes his Introduction by quoting another passage from Burke’s letters that seems to cast doubt on this dismissal of Christianity. To a young woman who had protested against his prosecution of the East India Company for its maltreatment of Indians, Burke wrote, “I have no party in this business, my dear Miss Palmer, but among a set of people, who have none of your lilies and roses in their faces, but who are images of the great Pattern as well as you or I. I know what I am doing; whether the white people like it or not.” Though Bromwich seems not to notice, Burke is grounding his political action in the Christian and Jewish teaching that all human beings are made in the image of God: even the darkest-skinned people “are images of the great Pattern.” It is his conviction of the universal imago Dei that drives Burke’s attempts to bring gross injustice before the judgment of the Law. 

I think that in this case Bromwich fails to see the importance of Christianity to Burke because he is not especially interested in it himself; whereas I see the importance of it because I am both interested in and knowledgeable about the subject. Among other things, this incident should be a reminder of how our own inclinations, our own biases, can lead us to shape writers and thinkers we love in our own image. Though I think I have shown that on this one matter Bromwich has misread Burke and I have read him correctly, if I were to write an intellectual biography of Burke I would run a significant danger of over-emphasizing his Christianity. If I were going to write a really first-rate book about Burke I would always need to be on guard against that tendency.

Similarly, Edward Mendelson and I have had many conversations over the years about W. H. Auden’s religious beliefs and thoughts, and while we agree on much, we sometimes don’t — and when we differ, almost invariably my reading of Auden bends towards my theological inclinations, while Edward’s reading bends towards his. (Oddly enough, the fact that Edward knows five times more about Auden than I ever will does not incline me to defer to his judgment in these matters.)   

E. B. White once wrote, “All writing slants the way a writer leans, and no man is born perpendicular, although many men are born upright” — a lovely line, but it’s not how you’re born, it’s how you discipline yourself that counts. Pulling yourself towards the perpendicular you know you’ll never quite reach requires a constant struggle. I would like to say that I achieve such constancy; but I don’t.