Text Patterns - by Alan Jacobs

Tuesday, May 31, 2016

the defilement thesis, expanded

In a recent post I spoke of what we might call the Defiling of the Memes, and suggested that Paul Ricoeur’s work on The Symbolism of Evil might be relevant. Let’s see how that might go.

In that book Ricoeur essentially works backwards from the familiar and conceptually sophisticated theological language of sin to what underlies it, or, as he puts the matter, what “gives rise” to it. If “the symbol gives rise to the thought,” what “primary symbols” underlie the notion of sin? Sin is a kind of fault, but beneath or behind the notion of fault is a more fundamental experience, defilement, whose primary symbol is stain. Before I could ever know that I have sinned (or that anyone else has) there must be a deeper and pre-rational awareness of defilement happening or being. I think of a passage from Dickens’s Hard Times:

‘Are you in pain, dear mother?’
‘I think there’s a pain somewhere in the room,’ said Mrs. Gradgrind, ‘but I couldn’t positively say that I have got it.’

First we know that defilement is, “somewhere in the room”; then we become aware that we have been somehow stained. From those elemental experiences and their primary symbols arise, ultimately, complex rational accounts that might lead to something like this: “I have defiled myself by sinning, and therefore must find a way to atone for what I have done so that I may live free from guilt.” But that kind of formulation lies far down the road, and there are many other roads that lead to many other conclusions about what went wrong and how to fix it.

Ricoeur writes as a philosopher and a Christian, which is to say he writes as someone who has inherited an immensely sophisticated centuries-old vocabulary that can mediate to him the elemental experiences and their primary symbols. Therefore one of his chief tasks in The Symbolism of Evil is to try to find a way back:

It is in the age when our language has become more precise, more univocal, more technical, in a word more suited to those integral formalizations which are called precisely symbolic logic, it is in this very age of discourse that we want to recharge our language, that we want to start again from the fullness of language. Beyond the desert of criticism, we wish to be called again.

But what if you have not inherited such a sophisticated moral language? Might you not then be closer to the elemental experiences and their primary symbols? That might help to account for the kind of thing described here:

The safe space, Ms. Byron explained, was intended to give people who might find comments “troubling” or “triggering,” a place to recuperate. The room was equipped with cookies, coloring books, bubbles, Play-Doh, calming music, pillows, blankets and a video of frolicking puppies, as well as students and staff members trained to deal with trauma. Emma Hall, a junior, rape survivor and “sexual assault peer educator” who helped set up the room and worked in it during the debate, estimates that a couple of dozen people used it. At one point she went to the lecture hall — it was packed — but after a while, she had to return to the safe space. “I was feeling bombarded by a lot of viewpoints that really go against my dearly and closely held beliefs,” Ms. Hall said.

So here's my (highly tentative) thesis: when you have a whole generation of young people whose moral language is severely attenuated — made up of almost nothing except Mill's harm principle — and who have been encouraged to extend that one principle to almost any kind of discomfort — then disagreement, or alternative points of view, appear to them not as matters for rational adjudication but as defilement from which they must be cleansed.

And this in turn leads to a phenomenon I have discussed before, and about which Freddie deBoer has written eloquently: The immediate turn to administrators as the agents of cleansing. This is especially true for students who have identified themselves as marginal, as social outsiders, as Mary Douglas explains in Purity and Danger: “It seems that if a person has no place in the social system and is therefore a marginal being, all precaution against danger must come from others. He cannot help his abnormal situation.”

And yet another consequence of the experience of defilement: the archaic ritualistic character of the protests and demands, for example, the scapegoating and expulsion of Dean Mary Spellman of Claremont McKenna College, and the insistence of many protestors upon elaborate initiation rituals for new members of the community in order to prevent defiling words and deeds. (Douglas again: “Ritual recognises the potency of disorder.”)

I have described the thinking of these student protestors as Baconian — a notion I develop somewhat more fully in a forthcoming essay for National Affairs — and while I still think that analysis is substantially correct, I now believe that it is incomplete. The anthropological account I have been sketching out here seems necessary as well.

Again: these are behavioral pathologies generated by simplistic moral frameworks and a general disdain for rational debate. The sleep of reason produces, if not always monsters, then a return to a primal experience of defilement, and a grasping for the elemental symbols and rituals used from ancient times to manage such defilement. And in light of these recent developments, the world of criticism seems less like a desert than an elegant and well-furnished room.

Monday, May 30, 2016

the technological history of modernity by a partial, prejudiced, and ignorant historian

When I think, as I often do, and will continue to do in a slow way* for the next few years, about a possible technological history of modernity, I am always aware that this account will be for me a theological account. That is, the history will be done from within, and on behalf of, a Christian understanding of the world. This poses problems.

In a brilliant essay called “Looking for the Barbarians: The Illusions of Cultural Universalism” (1980), Leszek Kolakowski writes that the self-understanding of the Western world, or as he says Europe, that arose during the early modern period “set in motion the process of endless self-criticism which was to become the source not only of her strength but of her various weaknesses and her vulnerability.” Kolakowski is serious about those strengths: “This capacity to doubt herself, to abandon ... her self-assurance and self-satisfaction, lies at the heart of Europe’s development as a spiritual force.” But the West tends to tell this story of its own self-doubt tendentiously and inaccurately, as a move towards neutrality — towards a kind of detached anthropological curiosity that suspends or brackets questions of value.

For Kolakowski this is nonsense:

The anthropologist’s stance is not really one of suspended judgment; his attitude arises from the belief that description and analysis, freed from normative prejudices, are worth more than the spirit of superiority or fanaticism. But this, no less than its contrary, is a value judgment. There is no abandoning of judgment; what we call the spirit of research is a cultural attitude, one peculiar to Western civilization and its hierarchy of values. [N.B.: This is not wholly true.] We may proclaim and defend the ideals of tolerance and criticism, but we may not claim that these are neutral ideals, free from normative assumptions.

And it is not self-evident that this belief in the superiority of “the ideals of tolerance and criticism” is either inevitable or correct. Kolakowski tells a disturbing anecdote that everyone, I believe, should seriously consider:

A few years ago I visited the pre-Columbian monuments in Mexico and was lucky enough, while there, to find myself in the company of a well-known Mexican writer, thoroughly versed in the history of the Indian peoples of the region. Often, in the course of explaining to me the significance of many things I would not have understood without him, he stressed the barbarity of the Spanish soldiers who had ground the Aztec statues into dust and melted down the exquisite gold figurines to strike coins with the image of the Emperor. I said to him, “you think these people were barbarians; but were they not, perhaps, true Europeans, indeed the last true Europeans? They took their Christian and Latin civilization seriously; and it is because they took it seriously that they saw no reason to safeguard pagan idols; or to bring the curiosity and aesthetic detachment of archaeologists into their consideration of things imbued with a different, and therefore hostile, religious significance. If we are outraged at their behavior, it is because we are indifferent both to their civilization and to our own.”

It was banter, of course, but banter of a not entirely innocent kind. It may prod us into thinking about a question which could well be decisive for the survival of our world: is it possible for us to display tolerance and a benevolent interest toward other civilizations without renouncing a serious interest in our own?

Kolakowski puts this point most bluntly in this question: “At what point does the very desire not to appear barbaric, admirable as it is, itself become indifference to, or indeed approval of, barbarity?” A putatively neutral approach incurs costs; how might one decide when those costs have grown too high?

In any case, my inclination is to tell a more interested narrative, because I want to understand the relationship between the rise of the modern world, about which I am ambivalent, and the Christian Gospel, about which I am not ambivalent. I therefore keep Kolakowski’s essay in one hand while holding in the other Robert Wilken’s “Who Will Speak for the Religious Traditions?” I’ll close this post with words of Wilken’s which I have pondered in my heart for many years:

For too long we [scholars of religion] have assumed that engagement with the religious traditions is not the business of scholarship, as though the traditions will “care for” themselves. In the eighteenth century, when the weight of western Christian tradition lay heavily on intellectuals, there was reason to put distance between the scholar and the religious communities. Today that supposition is much less true and we must make place in our company for other scholarly virtues. [...]

If love is no virtue and there is no love of wisdom, if religion can only be studied from afar and as though it had no future, if the passkey to religious studies is amnesia, if we can speak about our deepest convictions only in private, our entire enterprise is not only enfeebled, it loses credibility. For if those who are engaged in the study of religion do not care for religion, should others? Without “living sympathy” and a “certain partisan enthusiasm,” Goethe once wrote to Schiller, “there is so little value in our utterance that it is not worth making at all. Pleasure, delight, sympathy with things is what alone is real and what in turn creates reality; all else is worthless and only detracts from the worth of things.”



* “In philosophy the winner of the race is the one who can run most slowly.” — Wittgenstein 

Saturday, May 28, 2016

Oppenheimer

from the Life magazine archives
Ray Monk’s biography of Robert Oppenheimer is a long but fascinating book. (Monk is also the author of a brilliant biography of Wittgenstein — I’m looking forward to reading him on Bertrand Russell at some point, though two volumes of Lord Russell may be more than I can handle....)

I admire what Monk does with Oppenheimer’s story so much because he has to balance an account of the events of the man’s life with some explanation of the incredibly complex contexts in which he lived. That means that we need to learn about what was happening in physics in the middle of the twentieth century, as well as the political deliberations that went into the building of the first atomic bomb and the later anxieties over the rise of the Soviet Union to the status of a second world power. Monk handles all this masterfully.

He does, however, take his subject’s view of things a little more often than he ought. Oppenheimer was clearly an enormously charming man, but also a manipulative man and one who made enemies he need not have made. The really horrible things Oppenheimer did as a young man – placing a poisoned apple on the desk of his advisor at Cambridge, attempting to strangle his best friend – and yes, he really did those things – Monk passes off as the result of temporary insanity, a profound but passing psychological disturbance. (There’s no real attempt by Monk to explain Oppenheimer’s attempt to get Linus Pauling’s wife Ava to run off to Mexico with him, which ended the possibility of collaboration with one of the greatest scientists of the twentieth, or any, century.) Certainly the youthful Oppenheimer did go through a period of serious mental illness; but the desire to get his own way, and feelings of enormous frustration with people who prevented him from getting his own way, seem to have been part of his character throughout his life.

Again, he had great charm, and that charm enabled him to be a very effective leader of the atomic bomb project at Los Alamos, and to be equally effective in leading the Institute for Advanced Study at Princeton — for a while. But over the long term the charm wore off, and the manipulativeness and, on some occasions, cruelty began to loom larger in people’s minds, so that when Oppenheimer turned sixty and there was a proposal to devote a special issue of Reviews of Modern Physics to him, Freeman Dyson, who was in charge of editing the issue, found it difficult to round up prominent physicists willing to speak on Oppenheimer’s behalf. He was very popular as a public figure, a kind of paragon of what a scientist should be in the common man’s mind, especially after people began to feel that he had been treated badly when his security clearance was withdrawn in 1954; but many of his colleagues grew frustrated with him over time and came to suspect his good will and integrity.

Jeremy Bernstein, in his memoir of Oppenheimer, tells a story that Monk also refers to. Oppenheimer had offered Bernstein, then a young physicist, a fellowship at the Institute for Advanced Study, and a few months before coming to Princeton Bernstein got a chance to hear Oppenheimer give one of his enormously popular public lectures.

After the lecture I decided to go onto the stage and introduce myself to Oppenheimer. I was, after all, going to be one of his charges in a few months. I went up to him, and he looked at me with what I distinctly remember as icy hostility — his students referred to it as the “blue glare.” It was clear that I had better explain — quickly — why I was bothering him. When I told him I was coming to the Institute that fall, his demeanor completely changed. It was like a sunrise. He told me who would be there — an incredible list. He ended by saying that Lee and Yang were going to be there and that they would teach us about parity.... Then Oppenheimer said, with a broad smile, “We’re going to have a ball!” I will never forget that. It made it clear to me why he had been such a fantastic director at Los Alamos.

But maybe we should think a little more than either Bernstein or Monk does about that “blue glare.” It might explain a few things.

Perhaps the most interesting aspect of Monk’s biography is his documenting of Oppenheimer’s increasing awareness, as he grew older, of his own flaws. Whenever he spoke of any darkness or sin within, people always assumed that he felt guilty because of his role in building the atomic bomb that killed so many Japanese people. But when he was asked about that role, he always said that if he had it to do over again he would do the same thing, even though of course he felt uneasy about the consequences of his actions.

My belief — based wholly on Monk’s story, of course — is that Oppenheimer’s sense of sin was actually prompted by his having had to confront, during the weeks in which he was grilled by inquisitors over his security clearance, his own habitual dishonesty and manipulativeness.

In any case, Monk demonstrates that late in his life Oppenheimer often, in his many public addresses, returned to this theme. For instance,

We most of all should try to be experts in the worst about ourselves: we should not be astonished to find some evil there, that we find so very readily abroad and in all others. We should not, as Rousseau tried to, comfort ourselves that it is the responsibility and the fault of others, that we are just naturally good; nor should we let Calvin persuade us that despite our obvious duty we are without any power, however small and limited, to deal with what we find of evil in ourselves. In this knowledge, of ourselves, of our profession, of our country — our often beloved country — of our civilization itself, there is scope for what we most need: self knowledge, courage, humor, and some charity. These are the great gifts that our tradition makes to us, to prepare us for how to live tomorrow.

He spoke of “a truth whose recognition seems to me essential to the very possibility of a permanently peaceful world, and to be indispensable also in our dealings with people with radically different history and culture and tradition”:

It is the knowledge of the inwardness of evil, and an awareness that in our dealings with this we are very close to the center of life. It is true of us as a people that we tend to see all devils as foreigners; it is true of us ourselves, most of us, who are not artists, that in our public life, and to a distressing extent our private life as well, we reflect and project and externalize what we cannot bear to see within us. When we are blind to the evil in ourselves, we dehumanize ourselves, and we deprive ourselves not only of our own destiny, but of any possibility of dealing with the evil in others.

And Oppenheimer — who not only read but wrote poetry, and in his college days wanted to be a writer — used this occasion to argue for the centrality of the arts: “it is almost wholly through the arts that we have a living reminder of the terror, of the nobility of what we can be, and what we are.”

I imagine, with considerable longing, the benefit to our current moment if one of our most famous scientists spoke openly about how profoundly fallible human beings can be and how necessary the arts are to an understanding of that fallibility. But that’s not where we are. That is so not where we are.

All this softens my heart towards Oppenheimer, and helps me to realize that what I have called the Oppenheimer Principle was his statement of how scientists think, not necessarily how they should think. And I find myself meditating on something extremely shrewd and perceptive that George Kennan said at Oppenheimer’s memorial service — a good note on which to close this post. Oppenheimer was

a man who had a deep yearning for friendship, for companionship, for the warmth and richness of human communication. The arrogance which to many appeared to be a part of his personality masked in reality an overpowering desire to bestow and receive affection. Neither circumstances nor at times the asperities of his own temperament permitted the gratification of this need in a measure remotely approaching its intensity.


UPDATE: Please see, from TNA ten years ago, this superb essay-review on Oppenheimer by Algis Valiunas.

Thursday, May 26, 2016

cultural appropriation, defilement, rituals of purification

I think it's now generally understood that the disaffected cultural left and the disaffected cultural right have become mirror images of each other: the rhetorical and political strategies employed by one side will, soon enough, be picked up by the other. At this particular moment, the right seems to be borrowing from the left — in ways that make many on the left distinctly uncomfortable.

See, for instance, this recent post by Michelle Goldberg, who notices that conservatives who want to protect women from sexual predators disguised as transgendered women are using the same language of “safety” more typically deployed by the left. And Goldberg admits that it’s not easy to say why they shouldn’t:

There’s no coherent ideology in which traumatized students have the right to be shielded from material that upsets them — be it Ovid, 9½ Weeks, or the sentiments of Laura Kipnis — but not from undressing in the presence of people with different genitalia. If we’ve decided that people have the right not to feel unsafe — as opposed to the right not to be unsafe — then what’s the standard for refusing that right to conservative sexual abuse victims? Is it simply that we don’t believe them when they describe the way their trauma manifests? Aren’t we supposed to believe victims no matter what?

And if conservatives can’t logically be denied use of “safe space” language, then they can't be denied appeals to “cultural appropriation.” As I’ve noted before, I don't have a great deal of sympathy for that concept — appropriation is what cultures do — and I found myself cheering when I read these comments by C. E. Morgan:

The idea that writing about characters of another race requires a passage through a critical gauntlet, which involves apology and self-examination of an almost punitive nature, as though the act of writing race was somehow morally suspect, is a dangerous one. This approach appears culturally sensitive, but often it reveals a failure of nerve. I cannot imagine a mature artist approaching her work in such a hesitant fashion, and I believe the demand that we ought to reveals a species of fascism within the left—an embrace of political correctness with its required silences, which has left people afraid to offend or take a stand. The injunction to justify race-writing, while ostensibly considerate of marginalized groups, actually stifles transracial imagination and is inextricable from those codes of silence and repression, now normalized, which have contributed to the rise of the racist right in our country. When you leave good people afraid to speak on behalf of justice, however awkwardly or insensitively, those unafraid to speak will rise to power.

(Morgan also says “I was taught as a young person that the far political right and the far political left aren’t located on a spectrum but on a circle, where they inevitably meet in their extremity” — which is the point I made at the outset of this post.)

But if you’re going to say that cultural appropriation is a thing, and an opprobrious thing, then you can be absolutely certain that people whose views you despise will make the concept their own. Enter Pepe the frog. Pepe has been appropriated by lefties and normies, and the alt-righties who think he belongs to them are determined to take him back:

“Most memes are ephemeral by nature, but Pepe is not,” @JaredTSwift told me. “He’s a reflection of our souls, to most of us. It’s disgusting to see people (‘normies,’ if you will) use him so trivially. He belongs to us. And we’ll make him toxic if we have to.”

Anything to avoid the pollution of Our Memes being used by Them.

The more I think about these matters, the more I think my understanding of them would benefit by a re-reading of Paul Ricoeur’s great The Symbolism of Evil (1960), especially the opening chapter on defilement. Ricoeur brilliantly explains how people develop rituals of purification in order to dispel the terror of defilement. Our understanding of how we live now would be greatly enriched by a Ricoeurian anthropology of social media.

And with that, I’ll leave you to contemplate the rising political influence of people who think that Pepe the frog is a reflection of their souls.

Wednesday, May 25, 2016

Neal Pollack and the terrible, horrible, no good, very bad city

Austin, Texas, after the departure of Uber (artist's representation; image via http://rikuwrites.blogspot.com/2014/01/a-morning-commute-starting-in-post.html)
It turns out that the voters of Austin, Texas, have amazing powers to distort time: according to Neal Pollack, "Austin has gone back in time 20 years" by ditching Uber, even though Uber was in Austin for just two years and the company was only founded in 2009. (It didn't even have an app until 2012.)

Pollack's chief complaint is that the absence of Uber will lead to price-gouging, something that of course Uber itself would never do. Without Uber, Austin is left with a "bizarre ecosystem of random auto-barter" and — you're going to think I'm making this up, but I promise, it's in the post — an "insane transit apocalypse." Only Uber can save us from certain destruction! It's like in superhero movies when the general public hate and resent superheroes but then, when they're faced with an alien invasion or something, they come begging. Only in Austin it'll be Travis Kalanick before whom they abase themselves, not Captain America.

Oh, and: "Also, the city allows cab drivers to smoke in their cars."

Speaking of people abasing themselves, I've gotten very, very tired of bare-faced shilling for enormous tech companies passing itself off as journalistic reflection. You'd never learn from Pollack why Austin rejected Uber — or rather, demanded that Uber and Lyft follow some basic legal guidelines, a demand that led both companies to pull out of the city rather than comply. If you want to understand the facts of the case, start with the always-excellent Erica Grieder. Maybe the voters of Austin are wrong, but let's try to find out what they were thinking, shall we, instead of screeching about "insane transit apocalypse." And let's try to bear in mind that companies like Uber aren't charitable organizations, sacrificing themselves for the common transportation good.

In short, we need people writing about big business — including big tech business — who have a strong moral compass that's not easily discombobulated by the magnetic fields of media-savvy companies with slick self-promotion machines. Recently I was reading an interview with the journalist Rana Foroohar in which she said this:

One of the things I wanted to do in this book was get away from a culture of blaming the bankers, blaming the CEOs, blaming the one percent. I cover these people on a daily basis. Nobody’s venal here. They really are doing what they’re incentivized to do. It’s just that over the long haul, it doesn’t happen to work.

Really? Nobody is venal? There are no venal people on Wall Street or in executive boardrooms? I guess Michael Lewis has just been making up stories all these years.... 

But also look a little closer: "Nobody’s venal here. They really are doing what they’re incentivized to do." For Foroohar, if you're just "doing what you're incentivized to do" that's a moral pass, a get-out-of-jail-free card. But for me that's the very definition of venality. 

If you're not willing to apply a moral standard to writing about big business that comes from outside the system of "incentivization," outside the pious rhetoric that thinly veneers sleaze, then I'm not interested in your opinions about the effects of business decisions on society. 


Saturday, May 21, 2016

Roberts; the Bruce

In a post this morning on Seamus Heaney’s fragmentary translation of the Aeneid, my friend Adam Roberts (inadvertently I’m sure) sent me down a trail of memory. He did it by writing this:

It's a little odd, actually: the Iliad and the Odyssey are, patently, greater works of art; yet however much I love them and return to them, the Aeneid still occupies a uniquely special place in my heart. I first read it as an undergraduate in the (alas, long defunct) Classics department at Aberdeen University.

When I was 19 years old and just beginning to be interested in Christianity, I paid a visit to the bookstore of Briarwood Presbyterian Church in my home town of Birmingham, Alabama. I wasn’t sure what I was looking for, so I wandered around aimlessly for a while, but eventually emerged with two books. One of them was Lewis’s Mere Christianity, which I soon read and enjoyed, but which had no major impact on me. (People are always surprised when I tell them that.) But the other book really changed me. It was a brief and accessible commentary on Paul’s letter to the Romans by F. F. Bruce.

What did I find so winning about that little commentary, which in the next couple of years I read several times? It was the ease and naturalness with which Bruce linked the thought of Paul with the Hellenistic cultural world from which Paul emerged. I believe I had, before reading the book, some idea that the proper Christian view of the Bible was that it emerged fully-formed from the mind of God — sort of like the Book of Mormon, engraved on golden plates and then buried. For Bruce, Paul was certainly an apostle of God, but that did not erase his humanity or remove him from his cultural frame. Bruce quoted freely from Hellenistic poets and philosophers, discerning echoes of their thoughts in Paul’s prose; he showed clearly that Paul came from an intellectually plural and culturally diverse world, and that this upbringing left its marks on him, even when he became, in relation to that world, an ideological dissident.

Bruce’s attitude surprised me, but more than that, it gratified me. It was the moment at which I began to realize that becoming a Christian would not require me to suspend or repudiate my interests in culture, in poetry, in story.

Much later I learned that Frederick Fyvie Bruce had been raised in a poor Open Brethren family near the Moray Firth in Scotland, and had been able to attend university only because he won a scholarship. At Aberdeen University he, like Adam Roberts decades later, studied Latin and Greek, and, also like Adam Roberts, did graduate work at Cambridge. In one of those curious convergences of the kind I wrote about yesterday, at one point he attended lectures by the great classicist and poet A. E. Housman that only one other student attended: Enoch Powell. Tom Stoppard should write a sequel to The Invention of Love about those three in one room. (I guess it couldn't be called The History Boys, but oh well.)

Bruce's classical education became the foundation for all his future scholarship. Thus his first book — The New Testament Documents: Are They Reliable? — is based on an extended comparison of the textual history of the books of the New Testament with that of classical writers from Herodotus to Suetonius. And even this came about only after he had spent several years as a lecturer in Greek (at Edinburgh, then Leeds) who also taught Latin. The classics were Bruce’s first scholarly language, and the biblical literature a later acquisition.

If my first encounter with biblical scholarship had been with a writer less culturally assured and wide-ranging than Bruce, who knows what might have become of me? And if he had grown up in a Christian environment less sympathetic to humanistic learning, who knows what might have become of him?

Late in his career, Bruce wrote one of his best books, The Canon of Scripture, and that book bears this dedication:


TO THE DEPARTMENTS 
OF HUMANITY AND GREEK 
IN THE UNIVERSITY OF ABERDEEN 
FOUNDED 1497 
AXED 1987 
WITH GRATITUDE FOR THE PAST 
AND WITH HOPE 
OF THEIR EARLY AND VIGOROUS RESURRECTION



(P.S. Couldn't resist the title, sorry)

Friday, May 20, 2016

only connecting

Everything connects; but teasing out the connections in intelligible and useful ways is hard. The book I’m currently writing requires me to describe a complex set of ideas, mainly theological and aesthetic, as they were developed by five major figures: W. H. Auden, T. S. Eliot, C. S. Lewis, Jacques Maritain, and Simone Weil. Other figures come into the story as well, most notably Reinhold Niebuhr; but keeping the connections within limits is essential, lest the story lose its coherence.

So I have to be disciplined. But there is so much I want to include in the book that I can’t — fascinating extensions of the web of ideas and human relations. For instance:

One of my major figures, Maritain, spent most of the war in New York City, where he recorded radio talks, to be broadcast in France, for the French resistance. One of the refugees who joined him in that work was the then-largely-unknown but later-to-be-enormously-famous anthropologist Claude Levi-Strauss. (Levi-Strauss’s parents, who managed to survive the war in France despite being Jewish, did not know that their son was alive until one of their neighbors heard him on the radio.) When Maritain formed the École Libre des Hautes Études, so that French intellectual life could continue in New York, Levi-Strauss joined the school and lectured on anthropology.

Through working at the Ecole Libre, Levi-Strauss met another refugee scholar, the great Russian structural linguist Roman Jakobson, who like Levi-Strauss had come to America on a cargo ship in 1941. Their exchange of ideas (they attended each other’s lectures) ultimately resulted in Levi-Strauss’s invention of the discipline of structural anthropology — one of the great developments of humanistic learning in the twentieth century.

During this period, Levi-Strauss lived in an apartment in Greenwich Village, and by a remarkable coincidence he lived on the same block as Claude Shannon, who was working for Bell Labs. (A neighbor mentioned Shannon to Levi-Strauss as a person who was “inventing an artificial brain.”) He had gotten that job largely because of his master’s thesis, which had been titled “A Symbolic Analysis of Relay and Switching Circuits” — which is to say, he was doing for electrical circuits what Jakobson was doing for linguistics and Levi-Strauss for “the elementary structures of kinship.”

(Shannon liked living in the Village because he was a serious fan of jazz, and liked hanging out in the clubs, where, during this period, Earl Hines's band, featuring Dizzy Gillespie and Charlie Parker among others, was more-or-less accidentally transforming jazz by creating bebop. We don't know as much about that musical era as we'd like, because from 1942 to 1944 the American Federation of Musicians was on a recording strike. They played but didn't record.)

Shannon's office was a few blocks away in the famous Bell Labs Building, which housed, among other things, work on the Manhattan Project. In January 1943 — at the very moment that the key figures in my book were giving the lectures that shaped their vision for a renewed Christian humanism — Bell Labs received a visitor: Alan Turing.

Over the next couple of months Turing acquainted himself with what was going on at Bell Labs, especially devices for encipherment, though he appears to have said little about his own top secret work in cryptography and cryptanalysis. And on the side he spent some time with Shannon, who, it appears, really was thinking about “inventing an artificial brain.” (Turing wrote to a friend, “Shannon wants to feed not just data to a Brain, but cultural things! He wants to play music to it!”) Turing shared with Shannon his great paper “On Computable Numbers,” which surely helped Shannon towards the ideas he would articulate in his classified paper of 1945, “A Mathematical Theory of Cryptography” and then his titanic “A Mathematical Theory of Communication” of 1948.

That latter paper, combined with Turing’s work on computable numbers, laid the complete theoretical foundation for digital computers, computers which in turn provided the calculations needed to produce the first hydrogen bombs, which then consolidated the dominance of a technocratic military-industrial complex — the same technocratic power that the key figures of my book were warning against throughout the war years. (See especially Lewis’s Abolition of Man and That Hideous Strength.)

This supplanting of a social order grounded in an understanding of humanity — a theory of Man — deriving from biblical and classical sources, by a social order manifested in specifically technological power, marks one of the greatest intellectual and social transformations of the twentieth century. You can find it everywhere if you look: consider, to cite just one more example, that the first major book by Jacques Ellul was called The Theological Foundation of Law (1946) and that it was succeeded just a few years later by The Technological Society (1954).

So yes, you can see it everywhere. But the epicenter for both the transformation and the resistance to it may well have been New York City, and more particularly Greenwich Village.

Thursday, May 19, 2016

again with the algorithms

The tragically naïve idea that algorithms are neutral and unbiased and other-than-human is a long-term concern of mine, so of course I am very pleased to see this essay by Zeynep Tufekci:

Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible. Of course, traditional media organizations face similar pressures to grow audiences and host ads. At least, though, consumers know that the news media is not produced in some “neutral” way or above criticism, and a whole network — from media watchdogs to public editors — tries to hold those institutions accountable.

The first step forward is for Facebook, and anyone who uses algorithms in subjective decision making, to drop the pretense that they are neutral. Even Google, whose powerful ranking algorithm can decide the fate of companies, or politicians, by changing search results, defines its search algorithms as “computer programs that look for clues to give you back exactly what you want.”

But this is not just about what we want. What we are shown is shaped by these algorithms, which are shaped by what the companies want from us, and there is nothing neutral about that.

One other great point Tufekci makes: the key bias at Facebook is not towards political liberalism, but rather towards whatever will keep you on Facebook rather than turning your attention elsewhere.

Tuesday, May 17, 2016

commuters and tourists, pedestrians and pilgrims

My good friend and former colleague Richard Gibson has recently started a blog on books and textuality and reading and all that sort of thing, and this new post is fascinating. Read it with care, but in brief Richard is exploring the contrast (made much of by Ivan Illich) between the monastic book and the scholastic book, and how that difference manifests itself in the appearance of the page:

Monastic readers and their (likely, shared) books belonged to a different theory and practice of the book than their scholastic counterparts, one in which the book was not “scrutable” (Illich’s word for the scholastic “bookish text”), not easily mastered or controlled. It resisted, we might say, the would-be autonomous reader. To embrace the codex’s capacity to be sampled in a back-and-forth manner is, Illich would have us recognize, to trade the “vineyard,” the “garden, the landscape for an adventuresome pilgrimage” for “the treasury, the mine, the storage room,” in other words, a store for raiding rather than a place of leisure and retreat. Such is our fate, Illich concludes, as the children of the Scholastics (our ancestor university-types):

Modern reading, especially of the academic and professional type, is an activity performed by commuters and tourists; it is no longer that of pedestrians and pilgrims.

This post has nothing whatsoever to do with the digital age.

Commuters and tourists vs. pedestrians and pilgrims — now that is a fruitful set of metaphors (and not only metaphors).

Please keep track of what Richard and his collaborators are doing there — more good stuff is to come.

Monday, May 16, 2016

print, disinhibition, neighbors

The intimate relationship between the printing press and the Reformation has long been understood, and if anything has been overstressed. What has been comparatively neglected, in part because it has left so faint a historical record, is what for lack of a better phrase we might call the European postal system. That phrase is not ideal because the delivery of messages seems to have been anything but a system, but in the early modern era it became increasingly possible to get letters and packages to people. Indeed, what we call the Republic of Letters could arise only because it was possible to get actual letters from, say, Thomas More in London to Erasmus in ... well, wherever he happened to be, perhaps at the workshop of Aldus Manutius in Venice. Again, there was nothing systematic about these networks, and couriers varied in reliability, as did the location information with which they were provided.

This is perhaps why so many letters of the period were printed and published — open letters, as it were, prefaced to books or published separately as broadsides. But this leaves us, as it left authors and readers at the time, in a somewhat ambiguous position, sliding along an ill-defined continuum between what we today would call the public and the private.

A valuable tool for understanding this situation is a concept introduced by Christopher Alexander et al. in their seminal book A Pattern Language: intimacy gradients. As I have written before, many of the tensions that afflict social media arise from incompatible assumptions about what degree of intimacy is in effect in any particular conversational exchange — the sea-lion problem, we might call it. Everyone agrees that confusions about whether a conversation is private, or public, or semi-private (e.g. a conversation at a restaurant table), coupled with the online disinhibition effect, contribute to the dysfunctional character of much online discourse; but no one, to my knowledge, has interpreted the agitated hostility of so much early-modern disputation in these terms. The violence with which Thomas More and Martin Luther address each other — e.g. “your paternity's shitty mouth, truly the shit-pool of all shit, all the muck and shit which your damnable rottenness has vomited up” — is, I believe, at least in part explained by the disinhibition generated by a new set of technologies, chief among them the printing press and postal delivery, which enable people to converse with one another who have never met and are unlikely ever to meet.

To put this in theological terms, one might say that neither More nor Luther can see his dialectical opponent as his neighbor — and therefore neither understands that even in long-distance epistolary debate one is obligated to love his neighbor as himself. Indeed, one might even argue that the philosophical concept of “the Other” arises only when certain communicative technologies allow us to converse with people who are not in any traditional or ordinary sense our neighbors. Kierkegaard’s sardonic comment, in Works of Love, is profoundly relevant here: after asking “Who is my neighbor?” he replies, “Neighbor is what philosophers would call the other.” And it is perhaps significant that Kierkegaard, who spent his whole life engaged in the political and social conflicts of what was then a small town, Copenhagen, could see the degeneration involved in the shift from “neighbor” to “other.” He is calling us back from the disinhibition, and accompanying lack of charity, generated by a set of technologies that allow us to converse and debate with people who are not, in the historic sense of the term, our neighbors. Technologies of communication that allow us to overcome the distances of space also allow us to neglect the common humanity we share with the people we now find inhabiting our world.

Friday, May 13, 2016

against time travel

[SPOILER ALERT, I GUESS, EVEN THOUGH I DON’T REVEAL ANY DETAILS]

I just finished reading Paul McAuley’s Confluence trilogy and now I’m extremely annoyed. The reason? Very, very close to the end of this 900-page novel it turns into a time-travel story, and I really, really hate time-travel stories. If I know in advance that a story contains time travel I won’t read it, and if in the process of reading a story I see time travel emerging, I typically stop reading. Thus my extreme annoyance with Confluence, in which I had invested a lot of time and attention before McAuley sprang the nastiness on me.


Maybe it’s time to figure out why I have this reaction.

Let’s start with something comparatively minor: once a writer introduces the possibility of time travel into a story, there are no natural limits to its deployment. A point my son Wesley has made in our decade-long conversation about the Harry Potter books: Once we learn (in Prisoner of Azkaban) that the Ministry of Magic owns at least one Time-Turner, and is even willing to lend it to ambitious 13-year-olds, then where is it when it’s needed for really important stuff like, you know, fighting Voldemort? And yet it never appears again. This is a significant offense against the Elementary Rules of Fictional World-Building.

But my chief complaint … well, I’m not sure I want to call it a “complaint.” Perhaps it’s a congenital imaginative defect on my part. Whatever we want to call it, it’s what prevents me from taking any pleasure in stories that feature time travel. Anyway, here goes:

Almost all stories, written or performed, play with time to some degree. We notice when they don’t: for instance, when Shakespeare goes far, far out of his way to make The Tempest unfold in “real time.” (Characters comment on the passage of time to ensure we don’t miss it.) One chapter of a novel concludes with the beginning of a journey, and the next with arrival at the destination. In 50 pages we go from a character’s birth to his adulthood, and then spend the next 500 covering a few weeks. No one is surprised or troubled by any of this, since, if we couldn’t expand and contract time at will, we couldn’t tell stories — we’d all be Funes the Memorious.

When stories introduce fantastic forms of travel through space, like teleportation or the Floo Network, they’re just off-loading to technology (magic being a kind of technology) the responsibility for skipping narratively insignificant tracts of time that storytellers otherwise handle with the kinds of structural adjustments I mentioned in the previous paragraph.

Time travel is something wholly different, because time travel undoes both cause and consequence. What is more fundamental to storytelling (and of course to life itself) than narrating and reflecting on the repercussions of events? In a very serious and very important sense tracing the effects of causes is what storytelling is. And yet with time travel every repercussion can be removed by removing or adjusting the cause that produced it. Time travel is therefore the abrogation of story itself. And since I read novels because I like stories, as soon as time travel arrives in a novel, I depart.
 

P.S. Confluence was great until McAuley sprang the Bad Thing on me.

Tuesday, May 10, 2016

Tony Stark and the view from above


Many people writing about the new Captain America: Civil War have commented on what seems to them a fundamental irony driving the story: that Tony Stark, the self-described “genius billionaire playboy philanthropist” who always does his own thing, agrees to the Sokovia Accords that place the Avengers under international political control, while Steve Rogers, Captain America, the devoted servant of his country, refuses to sign them and basically goes rogue. But I don't think there’s any real irony here, for two reasons.

The first and simplest reason is that the destruction of Sokovia, which we saw in Avengers: Age of Ultron, was Tony Stark’s fault. Ultron was his creation and no one else’s, and in this new story he is forced to remember that the blood of the people who died there is on his hands. There’s a funny moment in the movie when Ant-Man is rummaging around in Tony’s Iron Man suit to do some mischief to it, and, when Tony asks who’s there, he replies, “It's your conscience. We don't talk a lot these days.” But Tony’s conscience is the chief driver of this plot. Cap was not responsible for Sokovia, and so doesn't feel responsible (even though he regrets the loss of life).

But I think another point of difference between the two is more broadly significant, and relates to one of the more important themes of this here blog. Tony Stark is basically a plutocrat: a big rich boss, who controls massive amounts of material and massive numbers of people. He sits at the pinnacle of a very, very high pyramid. When the U. S. Secretary of State deals with Tony, he’s dealing with an equal, or maybe a superior: while at one point he threatens Tony with prison, he never follows through, and Tony openly jokes that he’s going to put the Secretary on hold the next time he calls — and does just that. Tony Stark’s view is always the view from above.

But Steve Rogers was, and essentially still is, a poor kid from Brooklyn whose highest ambition was to become an enlisted soldier in the U. S. Army. That he became something else, a super-soldier, was initially presented to him as a choice, but quite obviously (to all those in control) a choice he wasn’t going to refuse — he wouldn't have made it into the Army if he had not been a potential subject of experimentation. After that, he did what he was told, even (in the first Captain America movie) when that meant doing pep rallies for soldiers with a troupe of dancing girls. And gradually he has come to question the generally accepted definition of a “good soldier” — because he has seen more and more of the people who make and use good soldiers, and define their existence.

I think the passion with which he defends, and tries to help, Bucky Barnes, while it obviously has much to do with their great and lasting friendship, may have even more to do with the fact that Bucky, like him, is the object of experimentation — someone who was transformed into something other than his first self because it suited the people in power so to transform him.

Tony Stark is, by inheritance and habit and preference, the experimenter; Steve Rogers is the one experimented upon. And that difference, more than any other, explains why they take the divergent paths they take.

I spoke earlier of a recurring theme of this blog, and it’s this: the importance of deciding whether to think of technology from a position of power, from above, or from a position of (at least relative) powerlessness, from below. My most recent post considered the venture capitalist’s view of “platform shifts” and “continuous productivity,” which offers absolutely zero consideration of the well-being of the people who are supposed to be continuously productive. Even more seriously, there’s this old post about a philosopher who speculates on what “we” are going to do with prisoners — because “we” will always be the jailers, never the jailed.

As with politics, so with technology: it’s always about power, and the social position from which power is considered. Tony Stark’s view from above, or Steve Rogers’s view from below. Take your pick. As for me, I’m like any other morally sane person: #teamcap all the way.

Monday, May 9, 2016

Who, whom?

A good many people — some of them very smart — are praising this post by Steven Sinofsky on “platform shifts” — in particular, the shift from PCs to tablets. I, however, think it’s a terrible piece, because it’s based on three assumptions that Sinofsky doesn't know are assumptions:

Assumption the first: “The reality is that these days most email is first seen (and often acted) on a smartphone. So without even venturing to a tablet we can (must!) agree that one can be productive without laptop, even on a tiny screen.” For whom is that “the reality”? For many people, no doubt, but how many? Not for me: I almost never emailed on my phone, even when I used a smartphone — I like dealing with email in batches, not in dribbles and drabbles throughout the day.

“But most people can’t do that! Most people have to be available all the time!” Again: this is true of many people, but most? Show me the evidence, please. And let’s make a clear distinction between people who have some kind of felt need to be constantly available — either via peer pressure or innate anxiousness — and those who genuinely can’t, without losing their jobs or at least compromising their positions, be away from email and other social media. (I know not everyone has the freedom I have; but more people have it than think they have it.)

Assumption the second: that the shift to mobile platforms means a shift from PCs to tablets. That internet traffic is moving inexorably towards mobile devices is indubitable; that tablets are going to play a major role in that shift is not so obvious. It may be that since, as even Sinofsky admits, some common tasks are harder to do on a tablet than on a PC, the majority of people will do what they can on a smartphone and do what they have to on a PC.

Assumption the third (the key one): that this “platform shift” is inevitable and the only question is how well you’ll adjust to it. It’s a classic Borg Complex move. As is often the case when people deploy this rhetoric, Sinofsky’s prose overflows with faux-compassion: “Change is difficult, disruptive, and scary so we’ll talk about that.” “The hard part is that change, especially if you personally need to change, requires you to rewire your brain and change the way you do things. That’s very real and very hard and why some get uncomfortable or defensive.” The message is clear: People who do things my way are brave and exploratory, but people who want to do things differently are fearful and defensive. That’s okay, I’m here to help you be more like me.

Let’s try looking at this in another way: кто кого? Assuming that this “platform shift” happens, who benefits from whom? Answer: the companies who make the devices people will use, and the companies who want their employees to exhibit “continuous productivity.” That’s another Sinofsky post, which he ends triumphantly: Continuous productivity “makes for an incredible opportunity for developers and those creating new products and services. We will all benefit from the innovations in technology that we will experience much sooner than we think.” We will all benefit! (Except for poor schmucks like you and me who might want occasionally to have some time to call our own.)

Never believe a venture capitalist who tells you that resistance is futile.

Saturday, May 7, 2016

on the Quants and the Creatives

Over the past few months I’ve thought from time to time about this Planet Money episode on A/B testing. The episode illustrates the power of such testing by describing how people at NPR created two openings for an episode of the podcast, and sent one version out to some podcast subscribers and the second to others. Then they looked at the data from their listeners — presumably you know that such data exists and gets reported to “content providers” — and discovered that one of those openings resulted in significantly more listening time. The hosts are duly impressed with this and express some discomfort that their own preferences may have little value and could, in the future, end up being ignored altogether.

I keep thinking about this episode because at no point during it does anyone pause to reflect that no “science” went into the creation of A and B, only the decision between them. A/B testing only works with the inputs it’s given, and where do those come from? A similar blindness appears in this reflection in the NYT by Shelley Podolny: “these days, a shocking amount of what we’re reading is created not by humans, but by computer algorithms.” At no point in the essay does Podolny acknowledge the rather significant fact that algorithms are written by humans.
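To put the point in concrete terms, here is a minimal sketch, in Python and with made-up listening-time numbers (nothing here comes from the NPR episode), of everything an A/B test of this kind actually does: it takes two human-authored variants as given and adjudicates between them. Nothing in the procedure has anything to say about how A or B came to exist.

```python
import statistics

# Hypothetical listening times (in minutes) for two human-written podcast
# openings, one sent to each half of a randomly split subscriber list.
listening_times_a = [12.1, 8.4, 30.0, 5.2, 22.7, 18.3]
listening_times_b = [15.6, 27.2, 31.0, 9.8, 24.4, 26.1]

def pick_winner(a, b):
    """Return whichever variant had the higher mean listening time.

    The test only decides *between* the inputs it is given; the creative
    work of writing A and B happens somewhere else entirely.
    """
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    return ("A", mean_a) if mean_a >= mean_b else ("B", mean_b)

winner, mean_minutes = pick_winner(listening_times_a, listening_times_b)
print(f"Variant {winner} wins with a mean of {mean_minutes:.1f} minutes")
```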

These wonder-struck, or horror-struck, accounts of the new Powers That Be habitually obscure the human decisions and acts that create the technologies that shape our experiences. I have written about this before — here’s a teaser — and will write about it again, because this tendentious obfuscating of human responsibility for technological Powers has enormous social and political consequences.

All this provides, I think, a useful context for reading this superb post by Tim Burke, which concerns the divide between the Quants and the Creatives — a divide that turns up with increasing frequency and across increasingly broad swaths of American life. “This is only one manifestation of a division that stretches through academia and society. I think it’s a much more momentous case of ‘two cultures’ than an opposition between the natural sciences and everything else.”

Read the whole thing for an important reflection on the rise of Trump — which, yes, is closely related to the division Tim points out. But for my purposes today I want to focus on this:

The creatives are able to do two things that the social science-driven researchers can’t. They can see the presence of change, novelty and possibility, even from very fragmentary or implied signs. And they can produce change, novelty and possibility. The creatives understand how meaning works, and how to make meaning. They’re much more fallible than the researchers: they can miss a clue or become intoxicated with a beautiful interpretation that’s wrong-headed. They’re either restricted by their personal cultural literacy in a way that the methodical researchers aren’t, and absolutely crippled when they become too addicted to telling the story about the audience that they wish was true. Creatives usually try to cover mistakes with clever rhetoric, so they can be credited for their successes while their failures are forgotten. However, when there’s a change in the air, only a creative will see it in time to profit from it. And when the wind is blowing in a stupendously unfavorable direction, only a creative has a chance to ride out the storm. Moreover, creatives know that the data that the researchers hold is often a bluff, a cover story, a performance: poke it hard enough and its authoritative veneer collapses, revealing a huge hollow space of uncertainty and speculation hiding inside of the confident empiricism. Parse it hard enough and you’ll see the ways in which small effect sizes and selective models are being used to tell a story, just as the creatives do. But the creative knows it’s about storytelling and interpretation. The researchers are often even fooling themselves, acting as if their leaps of faith are simply walking down a flight of stairs.

Now, there are multiple possible consequences of this state of affairs. It may be that the Quants are going to be able to reduce the power of the Creatives by simply attracting more and more money, and thereby in a sense sucking all the air out of the Creatives’ room. But something more interesting may happen as well: the Creatives may end up perfectly happy with the status quo, in which they can work without interference or even acknowledgement to shape the world, like Ben Rhodes in his little windowless office in the West Wing. Maybe poets are the unacknowledged legislators of the world after all.

And then? Well, maybe this:

Their complete negligence is reserved, however,
For the hoped-for invasion, at which time the happy people
(Sniggering, ruddily naked, and shamelessly drunk)
Will stun the foe by their overwhelming submission,
Corrupt the generals, infiltrate the staff,
Usurp the throne, proclaim themselves to be sun-gods,
And bring about the collapse of the whole empire.

Monday, May 2, 2016

Critiquing the Critique of Digital Humanities

Disclosure: IANADH (I am not a digital humanist), but I did get my PhD from the University of Virginia.

There’s a good deal of buzzing in the DH world about this critique of the field by Daniel Allington, Sarah Brouillette, and David Golumbia — hereafter ABG — whose argument is that DH’s “most significant contribution to academic politics may lie in its (perhaps unintentional) facilitation of the neoliberal takeover of the university.” Let me do a little buzzing of my own, in three bursts.

Burst the First: In the early stages of the essay, ABG claim that the essential problem with DH, the problem that makes it either vulnerable to co-optation by the neoliberal regime or eagerly complicit in it, is its refusal to see interpretation as the essential activity of literary study. This refusal of interpretation, in ABG’s view, is the key enabler of creeping university-based neoliberalism, in literary studies anyway.

And here’s where the argument takes an odd turn. “It is telling that Digital Humanities ... has found an institutional home at the University of Virginia.” In ABG’s account, UVA is the academic version of the headquarters of Hydra, an analogy I wish I had not thought of, because now I’m casting the roles: Jerome McGann as Red Skull — that one’s obvious — Bethany Nowviskie as Viper, ... but I digress.

Anyway, ABG say that the strong digital-humanities presence at UVA makes sense because of a long institutional history of refusing the centrality of interpretation, starting with Fredson Bowers, the textual scholar who fifty years ago began building UVA’s English department into a world-class one — textual criticism being one of those modes of humanistic scholarship that de-emphasizes interpretation. (Or outright rejects it, if you’re, say, A. E. Housman.) ABG then add to Bowers another problematic figure, E. D. Hirsch — but wait, didn't Hirsch make his name by writing about hermeneutics, in Validity in Interpretation and The Aims of Interpretation? Yes, say ABG, but Hirsch had a “tightly constrained” model of interpretation, so he doesn't count. Similarly, though it would seem that the work of Rita Felski is essentially concerned with interpretation, she has suggested that there are limits to a posture of critique, so she goes into the anti-interpretation camp as well.

It would appear, then, that for ABG, narrow indeed is the path that leads to hermeneutical salvation, and wide is the way that leads to destruction. But an approach that credits scholars for being interested in interpretation only if they follow an extremely strict — and yet unspecified — model of that practice is just silly. ABG really need to go back to the drawing board here and make their conceptual framework more clear.

While they’re at it, they might also ask how UVA ended up hiring people like Rita Dove and Richard Rorty and Jahan Ramazani, and allowing New Literary History — perhaps the most prominent journal of literary interpretation and critique of the past fifty years — to be founded and housed there. Quite an oversight by the supervillains at Hydra.

Burst the Second: ABG write,

While the reading lists and position statements with which the events were launched make formal nods toward the importance of historical, sociological, and philosophical approaches to science and technology, the outcome was the establishment, essentially by fiat, of Digital Humanities as an academic and not a support field, with the accompanying assertion that technical and managerial expertise simply was humanist knowledge.

This notion, they say, “runs counter to the culture not only of English departments but also of Computer Science departments,” is tantamount to “the idea that technical support is the cutting edge of the humanities,” and “carried to its logical conclusion, such a declaration would entail that the workers in IT departments,” including IT departments of Big Corporations, “are engaged in humanities scholarship.”

Quelle horreur! That’s about as overt an act of boundary-policing as I have seen in quite some time. Get back in “technical support” where you people belong! And stop telling me to reboot my computer! I’ll just offer one comment followed by a question. There is a long history, and will be a long future, of major scientific research being done at large corporations: Claude Shannon worked for Bell Labs, to take but one crucial example, and every major American university has been deeply entangled with the military-industrial complex at least since World War II. Think for instance of John von Neumann, who spent several years traveling back and forth between Princeton’s Institute for Advanced Study and the Atomic Energy Commission. Do ABG really mean to suggest that in all this long entanglement of universities with big business and government the humanities managed to maintain their virginity until DH came along?

And lest you suspect that these entanglements are the product of 20th-century America, please read Chad Wellmon on Big Humanities in 19th-century Germany. The kindest thing one could say about the notion that the "neoliberal takeover of the university" is just happening now — and that it's being spearheaded by people in the humanities! — is to call it historically uninformed.

Burst the Third: The core of ABG’s argument: “Digital Humanities as social and institutional movement is a reactionary force in literary studies, pushing the discipline toward post-interpretative, non-suspicious, technocratic, conservative, managerial, lab-based practice.” This argument is based on a series of, to put it charitably, logically loose associations: many vast multinational corporations rely on digital technologies and “lab-based practice,” and DH does too, ergo.... But one could employ a very similar argument to say that many vast multinational corporations rely on scholars trained in the interpretation of texts — legal texts, primarily — and therefore it is “reactionary” to continue to produce expertise in these very practices. (Think of how many English majors trained in the intricacies of postcolonial critique have ended up in corporate law.) ABG have basically produced a guilt-by-association argument, but one which works against the scholarly models they prefer at least as well as it works against DH.

In fact: much better than it works against DH. What do we have more of in the humanities today: digital humanists, or people who fancy themselves critics of the neoliberal social order but who rely all day every day on computer hardware and software made by Big Business (Apple, Microsoft, Google, Blackboard, etc.)? Clearly the latter dramatically outnumber the former. There are many ways one might defend DH, but one of my favorite elements of the movement is its DIY character: people trained in the basic disciplines of DH learn how to get beyond the defaults imposed by the big technology companies and to make their computing machines serve their own purposes rather than those companies’.

I am not sure to what extent I want to see neoliberalism vanquished, because I am not sure what neoliberalism is. But if there is any idea that has been conclusively refuted by experience, it is that the reactionary forces of late capitalism can be defeated by humanistic critique and “radical” interpretative strategies. If I shared ABG’s politics, I think I’d want to seek collaboration with DH rather than sneer at it.

Friday, April 22, 2016

Prince, tech, and the Californian Ideology


I recently gave some talks to a gathering of clergy that focused on the effects of digital technology on the cultivation of traditional Christian practices, especially the more contemplative ones. But when I talked about the dangers of having certain massive tech companies — especially the social-media giants: Facebook, Twitter, Instagram, Snapchat — dictate to us the modes of our interaction with one another, I heard mutters that I was “blaming technology.”

I found myself thinking about that experience as I read this reflection on Prince’s use of technology — and his resistance to having technological practices imposed on him by record companies.

Prince, who died Thursday at 57, understood how technology spread ideas better than almost anyone else in popular music. And so he became something of a hacker, upending the systems that predated him and fighting mightily to pioneer new ones. Sometimes he hated technology, sometimes he loved it. But more than that, at his best Prince was technology, a musician who realized that making music was not his only responsibility, that his innovation had to extend to representation, distribution, transmission and pure system invention.

Many advances in music and technology over the last three decades — particularly in the realm of distribution — were tried early, and often first, by Prince. He released a CD-ROM in 1994, Prince Interactive, which featured unreleased music and a gamelike adventure at his Paisley Park Studios. In 1997, he made the multi-disc set “Crystal Ball” available for sale online and through an 800 number (though there were fulfillment issues later). In 2001, he began a monthly online subscription service, the NPG Music Club, that lasted five years.

These experiments were made possible largely because of Prince’s career-long emphasis on ownership: At the time of his death, Prince reportedly owned the master recordings of all his output. With no major label to serve for most of the second half of his career and no constraints on distribution, he was free to try new modes of connection.

No musician of our time understood technology better than Prince — but he wasn’t interested in being stuffed into the Procrustean bed of technologies owned by massive corporations. He wanted to own his turf and to be free to cultivate it in ways driven by his own imagination.

The megatech companies’ ability to convince us that they are not Big Business but rather just open-minded, open-hearted, exploratory technological creators is perhaps the most powerful and influential — and radically misleading — sales job of the past 25 years. The Californian ideology has become our ideology. Which means that many people cannot help seeing skepticism about the intentions of some of the biggest companies in the world as “blaming technology.” But that way Buy n Large lies.

hello, it's me

Remember me? I used to blog here, back in the day. I stepped away because I was working on a book that overlapped a bit too much with the typical subjects of this blog; but things have changed a bit. A good bit.

The plan then was to write a short book that expanded on these Theses for Disputation — but it proved to be impossible to find the right length and right approach. So an expanded version of those theses will appear in a future issue of The New Atlantis. Please stay tuned for that.

That noted, I have a few further updates:

  • My employer, Baylor University, has graciously extended unto me a research leave for next year. 
  • My first task, as soon as the current semester is over, is to get to work finishing this book — which I hope to do by the end of this calendar year. Ora pro me.
  • It is possible that as soon as that is done I'll turn to a smaller project I haven't mentioned here (or anywhere else) at all, but I'll be quiet about that for now. 
  • Either after that smaller project, or instead of it, I feel compelled to write about what on this blog I have called the technological history of modernity. I am in conversation with a publisher about a contract for that. More updates as they become available. 
  • Those theses for disputation, and my earlier book The Pleasures of Reading in an Age of Distraction, were really enriched by my writing this blog and by comments I received here from my readers. I miss this place as an idea-generator and idea-developer. I am hoping to be able to resume blogging here, perhaps irregularly; but we'll see. 
  • This morning I have a review in the Wall Street Journal of two books on technology, knowledge, and memory. Please check it out.
More soon, I hope! 

Friday, September 18, 2015

coming attractions

Regular readers of this blog may remember some posts from a few months ago revolving around the 79 Theses on Technology that The Infernal Machine graciously published. I got some good feedback on those theses — some positive, some critical, all useful — which allowed me to deepen and extend my thinking, and to discern ways in which it needed to be deepened and extended still further.

So when Fred Appel at Princeton University Press, who commissioned and edited my biography of the Book of Common Prayer, asked if I might be interested in turning an expanded version of those theses into a short book, I jumped at the chance. And I'm working on that now.

But many of the ideas that I might normally be developing on this blog need to go into that book — which will probably mean a period of silence here, until the book is completed. (I keep reading things and thinking Hey, I want to write a post about that wait a minute I can't write a post about that.)

After I have completed Short But As Yet Unnamed Book Of Theses On Technology, I hope to return to my much bigger book on Christian intellectuals and World War II — which has been on hiatus because I ran into some intractable organizational problems. But having taken a few months away from the project, I can already begin to see how I might resume and reconstruct it.

In the longer term, I hope to return to this space to develop my thoughts on the technological history of modernity — and perhaps turn those into a book as well.

Those are the plans, anyway. I may post here from time to time, but not very often until the Theses are done. Please wish me well!

two visions of higher education?

Kwame Anthony Appiah on the two visions of American higher education:

One vision focuses on how college can be useful — to its graduates, to employers and to a globally competitive America. When presidential candidates talk about making college more affordable, they often mention those benefits, and they measure them largely in dollars and cents. How is it helping postgraduate earnings, or increasing G.D.P.? As college grows more expensive, plenty of people want to know whether they’re getting a good return on their investment. They believe in Utility U.

Another vision of college centers on what John Stuart Mill called “experiments in living,” aimed at getting students ready for life as free men and women. (This was not an entirely new thought: the “liberal” in “liberal education” comes from the Latin liberalis, which means “befitting a free person.”) Here, college is about building your soul as much as your skills. Students want to think critically about the values that guide them, and they will inevitably want to test out their ideas and ideals in the campus community. (Though more and more students are taking degrees online, most undergraduates will be on campus a lot of the time.) College, in this view, is where you hone the tools for the foundational American project, the pursuit of happiness. Welcome to Utopia U.

Together, these visions — Utility and Utopia — explain a great deal about modern colleges and universities. But taken singly, they lead to very different metrics for success.

Appiah walks through this tired old dichotomy only in order to say: Why not both?

(To be clear: like Appiah, I am only addressing the American context. Things can be different elsewhere, as, for example, in Japan, where a government minister has just asked all public universities to eliminate programs in the social sciences and humanities, including law and economics, and to focus instead on “more practical vocational education that better anticipates the needs of society.”)

A good general rule: when someone constructs an argument of this type — Between A and B there seems to be a great gulf fixed, but I have used my unique powers of insight to discern that this is a false dichotomy and we need not choose! — it is unlikely that they have described A fairly or described B fairly or described the conflict between them accurately.

So let's try to parse this out with a little more care:

  • Some colleges (mainly the for-profit ones) promise nothing but utility.
  • Some colleges (say, St. John's in Annapolis and Santa Fe) promise nothing but what Appiah calls Utopia, that is, an environment for pursuing essential and eternal questions.
  • Most colleges follow the example of Peter Quill, Star-Lord, and promise a bit of both.
  • Most students want, or at least claim to want, a bit of both. A few are driven primarily by intellectual curiosity, but they'd love to believe that a course of study organized around such explorations can also lead to a decent job after graduation; a great many more are primarily concerned to secure good job opportunities, but also want to confront interesting ideas and beautiful works of art. (Many of my best students in the Honors College at Baylor are pre-med, but love taking literature and philosophy courses for just this reason.)

Given this general state of affairs, with its range of sometimes-complementary and sometimes-conflicting forces at work, Appiah's framing is simplistic — and also serves as a way to avoid the key question for the coming years: Who will pay, and what will they pay for?

Tuesday, September 15, 2015

a public amateur's story

There is so much that’s wonderful about Sara Hendren’s talk here that I can’t summarize it — and wouldn’t if I could. Please just watch it, and watch to the end, because in the last few minutes of the talk things come together in ways that will be unexpected to those who don't know Sara. Also be sure to check out Abler.

One of Sara’s models is the artist Claire Pentecost, who sees herself as a public amateur:

One of the things I’m attached to is learning. And one of the models I’ve developed theoretically is that of the artist as the public amateur. Not the public intellectual, which is usually a position of mastery and critique, but the public amateur, a position of inquiry and experimentation. The amateur is the learner who is motivated by love or by personal attachment, and in this case, who consents to learn in public so that the very conditions of knowledge production can be interrogated. The public amateur takes the initiative to question something in the province of a discipline in which she is not conventionally qualified, acquires knowledge through unofficial means, and assumes the authority to offer interpretations of that knowledge, especially in regard to decisions that affect our lives.

Public amateurs can have exceptional social value, not least because they dare to question experts who want to remain unquestioned simply by virtue of accredited expertise; public amateurs don't take “Trust me, I know what I’m doing” as an adequate self-justification. But perhaps the greatest contribution public amateurs make to society arises from their insistence — it’s a kind of compulsion for them — on putting together ideas and experiences that the atomizing, specializing forces of our culture try to keep in neatly demarcated compartments. This is how an artist and art historian ends up teaching at an engineering school.

There are two traits that, if you wish to be a public amateur, you simply cannot afford to possess. You can’t insist on having a plan and sticking with it, and you can’t be afraid of making mistakes. If you’re the sort of person whose ducks must always be in a neat, clean row, the life of the public amateur is not for you. But as the personal story Sara tells near the end of her talk indicates, sometimes life has a way of scrambling all your ducks. When that happens, you can rage vainly against it; or you can do what Sara did.

Monday, September 14, 2015

The Grand Academy of Silicon Valley



After writing today’s post I couldn’t shake the notion that all this conversation about simplifying and rationalizing language reminded me of something, and then it hit me: Gulliver’s visit to the grand academy of Lagado.

A number of the academicians Gulliver meets there are deeply concerned with the irrationality of language, and pursue schemes to adjust it so that it fits their understanding of what science requires. One scholar has built a frame (pictured above) composed of a series of turnable blocks. He makes some of his students turn the handles and others write down the sentences produced (when sentences are produced, that is).
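The frame is, in effect, a random text generator, with the students supplying all the judgment about what counts as a sentence. A toy version in Python (the vocabulary and the six-block frame are my own invention, not Swift’s) makes the point:

```python
import random

# A toy Lagado frame: each "block" can show any word in the vocabulary,
# and one turn of the handles produces a random string of six words.
vocabulary = ["the", "professor", "turns", "a", "handle", "science",
              "requires", "words", "every", "machine", "writes", "nothing"]

random.seed(0)
for _ in range(6):
    print(" ".join(random.choice(vocabulary) for _ in range(6)))
# Nearly every line is gibberish; the students' whole labor lies in waiting
# for the rare fragment that reads like part of a sentence and copying it down.
```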

But more interesting in light of what Mark Zuckerberg wants are those who attempt to deal with what, in Swift’s time, was called the res et verba controversy. (You can read about it in Hans Aarsleff’s 1982 book From Locke to Saussure: Essays on the Study of Language and Intellectual History.) The controversy concerned the question of whether language could be rationalized in such a way that there is a direct one-to-one match between things (res) and words (verba). This problem some of the academicians of Lagado determined to solve — along with certain other problems, especially including death — in a very practical way:

The other project was, a scheme for entirely abolishing all words whatsoever; and this was urged as a great advantage in point of health, as well as brevity. For it is plain, that every word we speak is, in some degree, a diminution of our lungs by corrosion, and, consequently, contributes to the shortening of our lives. An expedient was therefore offered, “that since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on.” And this invention would certainly have taken place, to the great ease as well as health of the subject, if the women, in conjunction with the vulgar and illiterate, had not threatened to raise a rebellion unless they might be allowed the liberty to speak with their tongues, after the manner of their forefathers; such constant irreconcilable enemies to science are the common people. However, many of the most learned and wise adhere to the new scheme of expressing themselves by things; which has only this inconvenience attending it, that if a man’s business be very great, and of various kinds, he must be obliged, in proportion, to carry a greater bundle of things upon his back, unless he can afford one or two strong servants to attend him. I have often beheld two of those sages almost sinking under the weight of their packs, like pedlars among us, who, when they met in the street, would lay down their loads, open their sacks, and hold conversation for an hour together; then put up their implements, help each other to resume their burdens, and take their leave.

But for short conversations, a man may carry implements in his pockets, and under his arms, enough to supply him; and in his house, he cannot be at a loss. Therefore the room where company meet who practise this art, is full of all things, ready at hand, requisite to furnish matter for this kind of artificial converse.

Rationalizing language and extending human life expectancy at the same time! Mark Zuckerberg and Ray Kurzweil, meet your great forebears!

Facebook, communication, and personhood

William Davies tells us about Mark Zuckerberg's hope to create an “ultimate communication technology,” and explains how Zuckerberg's hopes arise from a deep dissatisfaction with and mistrust of the ways humans have always communicated with one another. Nick Carr follows up with a thoughtful supplement:

If language is bound up in living, if it is an expression of both sense and sensibility, then computers, being non-living, having no sensibility, will have a very difficult time mastering “natural-language processing” beyond a certain rudimentary level. The best solution, if you have a need to get computers to “understand” human communication, may be to avoid the problem altogether. Instead of figuring out how to get computers to understand natural language, you get people to speak artificial language, the language of computers. A good way to start is to encourage people to express themselves not through messy assemblages of fuzzily defined words but through neat, formal symbols — emoticons or emoji, for instance. When we speak with emoji, we’re speaking a language that machines can understand.

People like Mark Zuckerberg have always been uncomfortable with natural language. Now, they can do something about it.
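A toy sketch in Python makes Carr’s contrast concrete (the emoji-to-sentiment table below is my own invention for illustration, not anything Facebook actually uses): a fixed symbol can be handled with a single dictionary lookup, while a natural-language sentence cannot.

```python
# Invented table mapping a few formal symbols to crude sentiment labels.
EMOJI_SENTIMENT = {
    "😀": "positive",
    "👍": "positive",
    "😢": "negative",
    "😡": "negative",
}

def classify(message: str) -> str:
    """Label a message, but only if it is a known formal symbol."""
    if message in EMOJI_SENTIMENT:
        # A fixed symbol: one lookup, no ambiguity.
        return EMOJI_SENTIMENT[message]
    # Natural language: there is no table to consult. A machine would need
    # a statistical model trained on human judgments, and even then it
    # would only be guessing at sense and sensibility.
    return "unknown (natural language)"

print(classify("👍"))                     # -> positive
print(classify("Well, that went well."))  # -> unknown (natural language)
```

The more of our expression that fits in the lookup table, the less “understanding” the machine has to do.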

I think we should be very concerned about this move by Facebook. In these contexts, I often think of a shrewd and troubling comment by Jaron Lanier: “The Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?” In this sense, the degradation of personhood is one of Facebook's explicit goals, and Facebook will increasingly require its users to cooperate in lowering their standards of intelligence and personhood.