Text Patterns - by Alan Jacobs

Thursday, March 20, 2014

what "detox" does and doesn't mean

A good many people in my Twitter feed really like this reflection on unplugging by Casey Cep, whose writing I too usually enjoy a lot. But this piece is making me scratch my head. Let me go through the end of the piece and I’ll see if I can explain my confusion:

This is why it’s strange to think of these unplugging events as anything like detox: the goal isn’t really abstinence but a return to these technologies with a renewed appreciation of how to use them.

Cep seems to think that the word “detox” has one meaning, the one associated with drug addiction: drug addicts visit a clinic to detoxify their system, with the determination not to return to their bad old habits. But we also use the word in other ways: think about the “detox spa,” which people visit for a period during which they avoid foods they usually eat and drinks they usually drink — but with every expectation of resuming their familiar practices, more or less, when they return to ordinary life. If people are thinking of “digital detox” in that sense, which is, it seems to me, far more common than the drug-addiction sense, then Cep’s critique simply doesn’t apply.

Few who unplug really want to surrender their citizenship in the land of technology; they simply want to travel outside it on temporary visas. Those who truly leave the land of technology are rarely heard from again, partly because such a way of living is so incommensurable. The cloistered often surrender the ability to speak to those of us who rely so heavily on technology. I was mindful of this earlier this month when I reviewed a book about a community of Poor Clares in Rockford, Illinois. The nuns live largely without phones or the Internet; they rarely leave their monastery. Their oral histories are available only because a scholar spent six years interviewing them, organizing their testimonies so that outsiders might have access. The very terms of their leaving the plugged-in world mean that their lives and wisdom aren’t readily accessible to those of us outside their cloister.

Is this meant as a criticism of the nuns? That their alternative way of life isn’t “accessible” to others? If not, then I don't know what the point of the anecdote is. If so, then I don't agree. No one is obliged to make his or her experience accessible to anyone else.

That is why, I think, the Day of Unplugging is such a strange thing. Those who unplug have every intention of plugging back in.

As noted above: exactly.

This sort of stunt presents an experiment, with its results determined beforehand; one finds exactly what one expects to find: never more, often less.

Wait, do we know that? I’d be quite surprised if no one who has unplugged has been surprised by the resulting experience.

It’s one of the reasons that the unplugging movement has attracted such vocal criticism from the likes of Nathan Jurgenson, Alexis Madrigal, and Evgeny Morozov. If it takes unplugging to learn how better to live plugged in, so be it.

Isn’t that often just the point? I know that when I take a vacation from Twitter, which I do sometimes, I do it in hopes that when I return I’ll enjoy it more and get more from it.

But let’s not mistake such experiments in asceticism for a sustainable way of life. For most of us, the modern world is full of gadgets and electronics, and we’d do better to reflect on how we can live there than to pretend we can live elsewhere.

I guess I just haven’t seen anybody detoxing who is thinking of it as “a sustainable way of life.” I think we can take it as axiomatic that anyone who announces his or her detox on social media isn’t undertaking severe ascesis. So as far as I can tell, Cep’s post doesn't hold up as a substantive critique.

But she’s surely right about one thing: detoxers can be obnoxiously self-congratulatory about their highly temporary withdrawals from our digital worlds.

more on knowledge and value

The tl;dr version of this post: In late capitalism, “Useful Knowledge” can take care of itself, and does. Let’s concern ourselves with other things.

In my earlier post on the value of knowledge I said I would return to some questions raised there. About some recent academic research Aaron Gordon had said, “Two questions come immediately to mind: Why would anyone study these things, and why would anyone pay someone to study these things?” I spoke to the first question in that post, and will return to it here; then I’ll get to the important stuff.

Let’s consider this recent story from the New York Times:

American science, long a source of national power and pride, is increasingly becoming a private enterprise.

In Washington, budget cuts have left the nation’s research complex reeling. Labs are closing. Scientists are being laid off. Projects are being put on the shelf, especially in the risky, freewheeling realm of basic research. Yet from Silicon Valley to Wall Street, science philanthropy is hot, as many of the richest Americans seek to reinvent themselves as patrons of social progress through science research.

The result is a new calculus of influence and priorities that the scientific community views with a mix of gratitude and trepidation....

That personal setting of priorities is precisely what troubles some in the science establishment. Many of the patrons, they say, are ignoring basic research — the kind that investigates the riddles of nature and has produced centuries of breakthroughs, even whole industries — for a jumble of popular, feel-good fields like environmental studies and space exploration.

Please read the whole article, which treats vitally important issues.

And now let’s perform a thought-experiment. Read this list of 20th-century scientific discoveries and ask yourself: How many of them would have happened under the kind of funding regime American science is headed towards — or rather, that is already largely in place? (Those philanthropists may be funding their pet projects more directly now, but they’ve been giving to universities, with plentiful strings attached, for a long time.) Or consider something not even on that list — perhaps because it separates mathematics and technology from science: You want to talk about “esoteric”? What could possibly be more esoteric than David Hilbert’s Entscheidungsproblem? And yet it was Alan Turing’s answer to that problem that gave us digital computing — a result that no one could possibly have foreseen.

I think this thought-experiment, coupled with the NYT article on science, suggests to us a few points:

1) No one knows, and no one can know, what the future uses will be of the knowledge people are discovering, or want to discover, today.

2) Knowledge which is obviously useful, especially in the widespread sense of “potentially lucrative,” will always, in a free-market or mostly-free-market system, have its patrons.

3) Therefore it’s reasonable for society to sponsor institutions and scholars that work on the apparently esoteric, on the same principle that pharmaceutical companies pay for research into new drugs. Very, very little of that research makes its way to market — but what does pays for the rest.

All that if you want to make a largely economic, use-oriented case for the value of the apparently esoteric.

But I don't want to make that case.

In Auden’s greatest poetic achievement, the sequence Horae Canonicae, he writes with wonder of the incomprehensibility, the unpredictability, the wholly gratuitous nature, of vocation — of obedience to a calling. “To ignore the appetitive goddesses ... // what a prodigious step to have taken.”

There should be monuments, there should be odes,
to the nameless heroes who took it first,

to the first flaker of flints
who forgot his dinner,

the first collector of sea-shells
to remain celibate.

For Auden, there is nothing more delightfully and distinctively human than this obedience to an inexplicable desire to learn, to study — a desire that in some can suspend our habitual animal obedience to appetite — including, I would like to note, not just the appetites for food and sex, but also for economic security and social prestige. To heed this call to an utterly non-utilitarian studiousness is a mark of civilization in an individual — and also in a society, which, if it can afford it, should create and sustain institutions in which such studiousness can flourish.

So, to the question of whether anyone in our tremendously wealthy and astonishingly wasteful society should pay people to study the body temperature of the nesting red-footed Booby (Sula sula), I say: Absolutely. Take the money out of the athletic department’s budget if need be. And when you’re done paying them, build a freakin’ monument to them.

"It is good just by being knowledge"

Here’s a post on a familiar theme: academic papers that no one reads. Let’s take it as a given that there is too much academic publishing, that academic writing is often used to achieve or mark status rather than to add to or disseminate knowledge, and so on. Duly noted, once more. But there’s another point in the post I want to call attention to.

The author, Aaron Gordon, runs some random word searches in an academic database and lists some of the articles he finds. For instance: “Complexity of Early and Middle Successional Stages in a Rocky Intertidal Surfgrass Community,” by Teresa Turner, Oecologia, Vol. 60, No. 1 (1983), pp. 56-65. And “Darwin and Nietzsche: Selection, Evolution, and Morality,” by Catherine Wilson, Journal of Nietzsche Studies, Vol. 44, No. 2 (Summer 2013), pp. 354-370. And “Body Temperature of the Nesting Red-Footed Booby (Sula sula),” by R. J. Shallenberger, G. C. Whittow, R. M. Smith, The Condor, Vol. 76, No. 4 (Winter, 1974), pp. 476-478.

Then Gordon comments, “Two questions come immediately to mind: Why would anyone study these things, and why would anyone pay someone to study these things?” And later: “There must be some way to distinguish between the useful and the esoteric.”

But I want to say: What’s not interesting here? Darwin and Nietzsche aren’t interesting? The ecological complexities of surfgrass beaches aren’t interesting? How birds regulate their body temperature — that’s not interesting? I actually wanted to click through to many of those articles to find out more. Moral: Don't allow your own lack of intellectual curiosity to be a guide to the value of research.

And to the claim that “There must be some way to distinguish between the useful and the esoteric”: no, there mustn’t, and there almost certainly isn’t. Moreover, and more important, I’m reminded of Auden’s prophecy in “Under Which Lyre” of the dangerous powers of Apollo: “And when he occupies a college, / Truth is replaced by Useful Knowledge.” Thus also the speech of the old A. E. Housman in Tom Stoppard’s play The Invention of Love:

A scholar's business is to add to what is known. That is all. But it is capable of giving the very greatest satisfaction, because knowledge is good. It does not have to look good or even sound good or even do good. It is good just by being knowledge. And the only thing that makes it knowledge is that it is true. You can't have too much of it and there is no little too little to be worth having. There is truth and falsehood in a comma.

Obviously my view of things — Auden’s view, Stoppard’s Housman’s view — has implications for the economics of university life. And maybe I’ll get to that in another post, soon. But for now I just wanted to register some irritation and suggest a different way of thinking about these matters than Gordon’s.

Tuesday, March 18, 2014

the geeks inherit the earth

Emily Bell recently argued that some hot new tech/journalism/etc. companies that are positioning themselves as radical alternatives to business-as-usual are, in the matter of hiring women and minorities, totally business-as-usual: a bunch of white guys with a slight scattering of women and minorities.

Nate Silver, one of those whom Bell was describing, didn't like her accusation: “The idea that we’re bro-y people just couldn’t be more off. We’re a bunch of weird nerds. We’re outsiders, basically. And so we have people who are gay, people of different backgrounds. I don’t know. I found the piece reaaaally, really frustrating. And that’s as much as I’ll say.”

Zeynep Tufekci has precisely the right response to Silver’s annoyance:

What happens when formerly excluded groups gain more power, like techies? They don’t just let go of their old forms of cultural capital. Yet they may be blind to how their old ways of identifying and accepting each other are exclusionary to others. They still interpret the world through their sense of status when they were “basically, outsiders.”

Most tech people don’t think of it this way, but the fact that most of them wear jeans all the time is just another example of cultural capital, an arbitrary marker that’s valued in their habitus, both to delineate it and to preserve it. Jeans are arbitrary, as arbitrary as ties....

How does that relate to Silver’s charged defense that his team could not be “bro-y” people? Simple: among the mostly male, smart, geeky groups that most programmers and technical people come from, there is a way of existing that is, yes, often fairly exclusionary to women but not in ways that Silver and his friends recognize as male privilege. When they think of male privilege, they are thinking of “macho” jocks and have come to believe their own habitus as completely natural, all about merit, and also in opposition to macho culture. But if brogrammer culture opposes macho culture, it does not follow that brogrammer culture is automatically welcoming to other excluded groups, such as women.

I’m reminded here of a fantastic essay Freddie deBoer wrote a while back about the triumphs of geek culture, especially in its love of fantasy and SF:

Commercial dominance, at this point, is a given. What critical arbiters would you like? Is it a Best Picture Oscar for one of their movies? Can’t be. Return of the King won it in 2003. (And ten other Academy Awards. And four Golden Globes. And every other major award imaginable.) Recognition from the “literary establishment?” Again, I don’t know what that term could refer to; there are publishers and there are academics and there are book reviewers, but there is no such thing as a literary establishment. Even a cursory look at individual actors dedicated to literature will reveal that glory for sci-fi, fantasy, and graphic novels has already arrived. Turn of the century “best book” lists made ample room for J.R.R. Tolkien, Jules Verne, Arthur C. Clarke, Philip K. Dick, and others. Serious book critics fall all over themselves to praise the graphic novels of Alison Bechdel and Art Spiegelman. Respect in the world of contemporary fiction? Michael Chabon, Lev Grossman, and other “literary fantasists” have earned rapturous reviews from the stuffiest critics. Penetration into university culture and academic literary analysis? English departments are choked with classes on sci-fi and genre fiction, in an effort to attract students. Popular academic conferences are held not just on fantasy or graphic novels but specifically on Joss Whedon and Batman. Peer-reviewed journals host special issues on cyberpunk and video game theory.

To the geeks, I promise: I’m not insulting you. I’m conceding the point that you have worked for so long to prove. Victory is yours. It has already been accomplished. It’s time to enjoy it, a little; to turn the critical facility away from the outside world and towards political and artistic problems within the world of geek culture; and if possible, maybe to defend and protect those endangered elements of high culture. They could use the help. It’s time for solidarity.

And this is what I’d also like to say to Nate Silver: Victory is yours. It has already been accomplished. Dude, you worked for the New York Times and you left it voluntarily — to work for ESPN, 80% of which is owned by Disney and the other 20% by Hearst. In 21st-century America, it is not possible to be any more inside than this. You cannot stick it to the Man — you are the Man. It’s best that you, and people in similar positions, realize that as soon as possible; and forego the illusion that you have some outsider status that exempts you from criticism like that presented by Emily Bell. Whether you agree with Bell’s argument or not, get used to it: you’re going to hear a lot more along those lines as long as you continue to be the Man.


Monday, March 17, 2014

analog memory desk

Seriously, I want one of these. Better than a Memex.

silence as luxury good

Chloe Schama writes about silence as a “luxury product”:

But the impossibility of silence says something about why it remains so alluring. Noise-related annoyances stem from emotion—frustration, disorientation, fear—as much as actual audible irritation. During late nineteenth-century industrialization, “The noise of [the railroad’s] steam whistle,” writes Emily Thompson in The Soundscape of Modernity, “was disturbing not only for its loudness but also for its unfamiliarity.” When a 1926 study determined that an individual horse and carriage was actually louder than an individual automobile, The New York Times perceptively responded that it was not the nature of the sounds that was the trouble, but the fact that “the ear has not learned how to handle them.” In a 1929 poll of New Yorkers, noises identified as “machine-age inventions” were the ones that bothered them most. And by the late 1920s, activists and engineers had a way to quantify their irritations. In 1929, the decibel was established as a standard unit of sound. Science contributes to noisiness in more than just audible output: New means of measuring heightened people’s awareness of their aggravation.

For what it’s worth, I wrote about this topic in my book The Pleasures of Reading in an Age of Distraction, at first in the context of the history of reading aloud:

This much is clear: the more noise surrounds us, the harder it is to read aloud. Reading aloud, and still more murmured reading, requires a quiet enough environment that you can hear what you speak; otherwise it is a pointless activity. And it might be worth pausing here to note that city life has always been loud — that is not an artifact of modern times. Bruce R. Smith’s extraordinary study The Acoustic World of Early Modern England gives us a full and rather disorienting sense of just how cacophonous the world was for many of our ancestors half-a-millennium ago. And Diana Webb in her book Privacy and Solitude in the Middle Ages argues, convincingly, that many people, men and women alike, sought monastic life less from piety than from a desperate need to find refuge from all the racket. Maybe they just wanted to find a place where they could be left alone to read.

The conclusion we may draw from all this is simply this: the noisier the environment, the more readers are driven to be silent. It is only in “privacy and solitude” that reading aloud or murmuring can ever be a reasonable option, and rarely have our ancestors had that option. The boy trying to study at the kitchen table while the clamor of family life goes on around him is a typical figure in the history of reading. No one could plausibly claim that we late-moderns are uniquely challenged in this respect: surely a higher percentage of human beings today have regular access to silence than at any time in human history. Most Americans and Western Europeans, and many people elsewhere — not all, mind you — live in environments with quiet rooms, or quiet corners. And many who lack quiet homes have had access to libraries, which have for centuries been dedicated, as it were, to silence.

For the thrilling conclusion of my thoughts on this subject, you’ll just have to buy the book. (Spoiler alert: not everyone wants to keep libraries quiet.)

Friday, March 14, 2014

the position of power

In this interview, the philosopher Rebecca Roache speculates about the future of punishment:

It’s one thing to lose your personal liberty as a result of being confined in a prison, but you are still allowed to believe whatever you want while you are in there. In the UK, for instance, you cannot withhold religious manuscripts from a prisoner unless you have a very good reason. These concerns about autonomy become particularly potent when you start talking about brain implants that could potentially control behaviour directly. The classic example is Robert G Heath [a psychiatrist at Tulane University in New Orleans], who did this famously creepy experiment [in the 1950s] using electrodes in the brain in an attempt to modify behaviour in people who were prone to violent psychosis. The electrodes were ostensibly being used to treat the patients, but he was also, rather gleefully, trying to move them in a socially approved direction. You can really see that in his infamous [1972] paper on ‘curing’ homosexuals. I think most Western societies would say ‘no thanks’ to that kind of punishment.

To me, these questions about technology are interesting because they force us to rethink the truisms we currently hold about punishment. When we ask ourselves whether it’s inhumane to inflict a certain technology on someone, we have to make sure it’s not just the unfamiliarity that spooks us. And more importantly, we have to ask ourselves whether punishments like imprisonment are only considered humane because they are familiar, because we’ve all grown up in a world where imprisonment is what happens to people who commit crimes. Is it really OK to lock someone up for the best part of the only life they will ever have, or might it be more humane to tinker with their brains and set them free? When we ask that question, the goal isn’t simply to imagine a bunch of futuristic punishments – the goal is to look at today’s punishments through the lens of the future.

To me, the key to these speculations may be found in one word in the first sentence quoted: “allowed.” To speak of prisoners as “allowed to believe whatever [they] want” while in prison is to speak of human thought as the rightful property of the State, which it may then entrust to us — but may also withhold from us if there is, as when withholding religious texts, “a very good reason.”

Understand: I am not saying that Roache is simply advocating thought control as a means of punishment or rehabilitation of lawbreakers. I am, rather, noting that her language crosses a vitally important line — the line that separates two radically different ideas about whom or what human personhood belongs to — without her demonstrating any awareness whatsoever that she has done so. It is natural and normal to her to talk as though states can do what they want with human minds and simply must decide what would work best — what would have the optimal social effects. (This is yet another mode of rationalism in politics.)

There is a kind of philosopher — an all too common kind of philosopher — who when considering such topics habitually identifies himself or herself with power. Pronouns matter a good deal here. Note that in Roache’s comments “we” are the ones who have the power to inflict punishment on “someone.” We punish; they are punished. We control; they are controlled. We decide; they are the objects of our decisions. Would Roache’s speculations have taken a different form, I wonder, if she had reversed the pronouns?

This is the danger for all of us who have some wealth and security and status: to imagine that the punitive shoe will always be on the other’s foot. In these matters it might be a useful moral discipline for philosophers to read the great classics of dystopian fiction, which habitually envision the world of power as seen by the powerless.

Wednesday, March 12, 2014

the five comments you meet on the internet

1) “Your post reminds me of something totally random and irrelevant in my own life.”

2) “You’re making this way too complicated.”

3) “You’re making this way too simple.”

4) “You’re stupid.”

5) “You’re evil.”

the Baconians of Mountain View

One small cause of satisfaction for me in the past few years has been the decline of the use of the word “postmodern” as a kind of all-purpose descriptor of anything the speaker thinks of as recent and different. The vagueness of the term has always bothered me, but even more the lack of historical awareness embedded in most uses of it. I have regularly told my students that if they pointed to a recent statement that they thought of as postmodern I could almost certainly find a close analogue of it from a sixteenth-century writer (often enough Montaigne). To a great but unacknowledged degree, we are still living in the fallout from debates, especially debates about knowledge, that arose more than four hundred years ago.

One example will do for now. In what became a famous case in the design world, five years ago Doug Bowman left Google and explained why:

When I joined Google as its first visual designer, the company was already seven years old. Seven years is a long time to run a company without a classically trained designer. Google had plenty of designers on staff then, but most of them had backgrounds in CS or HCI. And none of them were in high-up, respected leadership positions. Without a person at (or near) the helm who thoroughly understands the principles and elements of Design, a company eventually runs out of reasons for design decisions. With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?” When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board. And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.

Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

What Bowman thought of as a bug — “data [was] paralyzing the company and preventing it from making any daring design decisions” — the leadership at Google surely thought of as a feature. What’s the value of “daring design decisions”? We’re trying to get clicks here, and we can find out how to achieve that.

With that story in mind, let’s turn to Michael Oakeshott’s great essay “Rationalism in Politics” and his account therein of Francis Bacon’s great project for setting the quest for knowledge on a secure footing:

The Novum Organum begins with a diagnosis of the intellectual situation. What is lacking is a clear perception of the nature of certainty and an adequate means of achieving it. ‘There remains,’ says Bacon, ‘but one course for the recovery of a sound and healthy condition — namely, that the entire work of understanding be commenced afresh, and the mind itself be from the very outset not left to take its own course, but guided at every step’. What is required is a ‘sure plan’, a new ‘way’ of understanding, an ‘art’ or ‘method’ of inquiry, an ‘instrument’ which (like the mechanical aids men use to increase the effectiveness of their natural strength) shall supplement the weakness of the natural reason: in short, what is required is a formulated technique of inquiry....

The art of research which Bacon recommends has three main characteristics. First, it is a set of rules; it is a true technique in that it can be formulated as a precise set of directions which can be learned by heart. Secondly, it is a set of rules whose application is purely mechanical; it is a true technique because it does not require for its use any knowledge or intelligence not given in the technique itself. Bacon is explicit on this point. The business of interpreting nature is ‘to be done as if by machinery’, ‘the strength and excellence of the wit (of the inquirer) has little to do with the matter’, the new method ‘places all wits and understandings nearly on a level’. Thirdly, it is a set of rules of universal application; it is a true technique in that it is an instrument of inquiry indifferent to the subject-matter of the inquiry.

It is hard to imagine a more precise and accurate description of the thinking of the Baconians of Mountain View. They didn’t want Bowman’s taste or experience. He might have been the most gifted designer in the world, but so what? “The strength and excellence of the wit (of the inquirer) has little to do with the matter.” Instead, decisions are “to be done as if by machinery” — no, strike that, they are to be done precisely by machinery and only by machinery. Moreover, there is no difference in technique between a design decision and any other kind of decision: the method of letting the data rule “is an instrument of inquiry indifferent to the subject-matter of the inquiry.”

Oakeshott’s essay provides a capsule history of the rise of Rationalism as a universal method of inquiry and action. It focuses largely on Bacon and Descartes as the creators of the Rationalist frame of mind and on their (less imaginative and creative) successors. It turns out that an understanding of seventeenth-century European thought is an indispensable aid to understanding the technocracy of the twenty-first century world.

Tuesday, March 11, 2014

paper works

Monday, March 10, 2014

Kevin Kelly's New Theology

Kevin Kelly’s theology is a contemporary version of the one George Bernard Shaw articulated a hundred years ago. In “The New Theology: A Sermon,” Shaw wrote,

In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end. That is what we are working to. When you are asked, “Where is God? Who is God?” stand up and say, “I am God and here is God, not as yet completed, but still advancing towards completion, just in so much as I am working for the purpose of the universe, working for the good of the whole of society and the whole world, instead of merely looking after my personal ends.” In that way we get rid of the old contradiction, we begin to perceive that the evil of the world is a thing that will finally be evolved out of the world, that it was not brought into the world by malice and cruelty, but by an entirely benevolent designer that had not as yet discovered how to carry out its benevolent intention. In that way I think we may turn towards the future with greater hope.

We might compare this rhetoric to that of Kelly’s new essay in Wired, which begins with a classic Borg Complex move: “We’re expanding the data sphere to sci-fi levels and there’s no stopping it. Too many of the benefits we covet derive from it.” But if resistance is futile, that’s no cause for worry, because resistance would be foolish.

It is no coincidence that the glories of progress in the past 300 years parallel the emergence of the private self and challenges to the authority of society. Civilization is a mechanism to nudge us out of old habits. There would be no modernity without a triumphant self.

So while a world of total surveillance seems inevitable, we don’t know if such a mode will nurture a strong sense of self, which is the engine of innovation and creativity — and thus all future progress. How would an individual maintain the boundaries of self when their every thought, utterance, and action is captured, archived, analyzed, and eventually anticipated by others?

The self forged by previous centuries will no longer suffice. We are now remaking the self with technology. We’ve broadened our circle of empathy, from clan to race, race to species, and soon beyond that. We’ve extended our bodies and minds with tools and hardware. We are now expanding our self by inhabiting virtual spaces, linking up to billions of other minds, and trillions of other mechanical intelligences. We are wider than we were, and as we offload our memories to infinite machines, deeper in some ways.

There’s no point asking Kelly for details. (“The self forged by previous centuries will no longer suffice” for what? Have we really “broadened our circle of empathy”? What are we “wider” and “deeper” than, exactly? And what does that mean?) This is not an argument. It is, like Shaw’s “New Theology,” a sermon, directed primarily towards those who already believe and secondarily to sympathetic waverers, the ones with a tiny shred of conscience troubling them about the universal surveillance state whose arrival Kelly awaits so breathlessly. Those who would resist need not be addressed because they’re on their way to — let’s see, what’s that phrase? — ah yes: the “dustbin of history.”

Now, someone might protest at this point that I am not being fair to Kelly. After all, he does say that a one-way surveillance state, in which ordinary people are seen but do not see, would be “hell”; and he even says “A massively surveilled world is not a world I would design (or even desire), but massive surveillance is coming either way because that is the bias of digital technology and we might as well surveil well and civilly.”

Let’s pause for a moment to note the reappearance of the Borg here, and Kelly’s habitual offloading of responsibility from human beings to our tools: for Woody Allen, “the heart wants what it wants” but for Kelly technology wants what it wants, and such sovereign beings always get their way.

But more important, notice here that Kelly thinks it’s a simple choice to decide on two-way surveillance: we “might as well.” He admits that the omnipotent surveillance state would be hell but he obviously doesn’t think that hell has even the remotest chance of happening. Why is he so confident? Because he shares Shaw’s belief in an evolutionary religion in which all that is true and good and holy emerges in history as the result of an inevitably beneficent process. Why should we worry about possible future constrictions of selfhood when the track record of “modernity” is, says Kelly, so utterly spotless, with its “glories of progress” and its “triumphant self”? I mean, it’s not as though modernity had a dark side or anything. All the arrows point skyward. So: why worry?

The only difference between Shaw and Kelly in this respect is that for Shaw the emerging paradisal “ecstasy of a brain” is a human brain; for Kelly it’s digital. Kelly has just identified digital technology as the means by which Shaw’s evolutionary progressivist Utopia will be realized. But what else is new? The rich, powerful, and well-connected always think that they and people like them (a) will end up on the right side of history and (b) will be insulated from harm — which is after all what really counts. Kelly begins his essay thus: “I once worked with Steven Spielberg on the development of Minority Report” — a lovely opener, since it simultaneously allows Kelly to boast about his connections in the film world and to dismiss Philip K. Dick’s dystopian vision as needlessly fretful. When the pre-cog system comes, it won’t be able to hurt anyone who really matters. So let’s just cue up Donald Fagen one more time and get down to the business of learning to desire whatever it is that technology wants. The one remaining spiritual discipline in Kelly's theology is learning to love Big Brother.



A few days ago I read an interesting article in the New Republic on "trigger warnings" in college syllabi. The topic intrigued me, and I decided I wanted to write about it — but then I got busy. Way too much to do. And by the time I got a few minutes to think about what I wanted to say, so many people had written about it that it didn't seem worth my time to add my two cents to an already enormous pile of pennies.

This relieved me greatly. And then I wondered why it did.

So I thought about it, and I came to the conclusion that I didn't really want to write a post on this topic — not really, not in my heart of hearts — but felt some inchoate obligation to do so. It just seems like the sort of thing about which I ought to have an opinion that I ought to be able to state. But that's silly. There's no reason whatsoever for me to opine about this. And yet only a period of intense busyness kept me from rushing to my computer to commit opinionizing all over the internet.

I think there's something to learn from this experience. For one thing, it enables me to see more clearly what we all know already: that when I see a topic being tossed around a lot on blogs and on Twitter, it's easy to be swept along by that tide. I was looking the other day at the mute filters I have set up for my Twitter client, and I couldn't help laughing at how many of them provided a record of those brief enthusiasms that take over Twitter for a day or two or three and then disappear forever. It took me a minute to remember who Todd Akin is. It took me even longer to figure out why I had added the word "tampon" to my mute list, but I finally remembered that time when Melissa Harris-Perry was wearing tampon earrings and everybody on Twitter had something to say about that. This is why some Twitter clients have mute filters that can be set for a limited time: I would imagine that three days would almost always be sufficient. Then the tide would have passed, and would be unlikely ever to return.

But I learned something else from this experience also: you can actually use the speed of the Internet to prevent you from wasting your time – or maybe I shouldn't say wasting it, but rather using it in a less-than-ideal fashion. If you just wait 48 or 72 hours, someone you follow on Twitter will almost certainly either write or link to a post which makes the very argument that you would have made if you had been quick off the mark.

For me, these realizations – which might not be new to any of you – are helpful. They remind me to give a topic a chance to cycle through the Internet for a few days, so I can find who has written wisely about it and point others to that person; and, if there are things that haven't been said that need to be said, I can address them from a more informed perspective and with a few days’ reflection under my belt. I can also practice the discipline — or maybe it’s a luxury rather than a discipline — of thinking longer thoughts about more challenging issues than are raised by Melissa Harris-Perry’s earrings. Or even trigger warnings.

Friday, March 7, 2014

Peter Enns and the problem of boundaries

I just came across this 2013 post by Peter Enns:

I’ve had far too many conversations over the last few years with trained, experienced, and practicing biblical scholars, young, middle aged, and near retirement, working in Evangelical institutions, trying to follow Jesus and use their brains and training to help students navigate the challenging world of biblical interpretation.

And they are dying inside.  

Just two weeks ago I had the latest in my list of long conversations with a well-known, published, respected biblical scholar, who is under inhuman stress trying to negotiate the line between institutional expectations and academic integrity. His gifts are being squandered. He is questioning his vocation. His family is suffering. He does not know where to turn.

I wish this were an isolated incident, but it’s not.  

I wish these stories could be told, but without the names attached, they are worthless. I wish I had kept a list, but even if I had, it wouldn’t have done anyone much good. I couldn’t have used it. Good people would lose their jobs.

I’m getting tired of hearing the same old story again and again. This is madness.

Enns is right that this kind of story is all too common, and all too sad. I’ve known, and talked to, and counseled, and prayed with, a number of such people over the years, and they’re not all in Biblical Studies either. But here’s the thing: I have also talked to an equal or greater number of equally distressed Christian scholars whose problem is that they teach in secular institutions where they cannot express their religious convictions — in the classroom or in their scholarship — without being turned down for tenure or promotion, or (if they are contingent faculty or pre-tenure) simply being dismissed. Odd that Enns shows no awareness of this situation.

I think he doesn't because he wants to present as a pathology of evangelicalism what is more generally and seriously a pathology of the academic job market: people feeling intimidated or utterly silenced because if they lose their professorial position they know they stand almost no chance of getting another one. Moreover, this isn’t a strictly academic issue either: people all over the world and in all walks of life feel this way about their jobs, afraid of losing them but troubled by their consciences about some aspect of their workplace. But I think these feelings are especially intense among American academics because of the number of people who can’t imagine themselves doing anything other than being a professor — and also because of the peculiar forms of closure in the most “open” academic environments.

As Stanley Fish wrote some years ago in an essay called “Vicki Frost Objects”,

What, after all, is the difference between a sectarian school which disallows challenges to the divinity of Christ and a so-called nonideological school which disallows discussion of the same question? In both contexts something goes without saying and something else cannot be said (Christ is not God or he is). There is of course a difference, not however between a closed environment and an open one but between environments that are differently closed.

So if we’re going to have compassion for academics feeling trapped in institutions that are uncongenial to their beliefs, let’s be ecumenical about it.

Moreover, I can’t tell from his post exactly what Enns thinks should be done about the situation, even within the evangelical context. If he thinks that all that Christian colleges and seminaries have to do is to relax their theological statements — well, that would be grossly naïve. No matter how tightly or loosely a religious institution defines itself, there will always be people on the boundaries, edge cases who will feel uncomfortable at best or coerced into submission at worst. And if, like the modern university, an institution insists that it has no such limitations on membership at all, then that will simply mean, as Fish makes clear, that the boundaries are there but unstated and invisible — until you cross them.

Wednesday, March 5, 2014

faith and (in) AI

Freddie deBoer:

Now people have a variety of ways to dismiss these issues. For example, there’s the notion of intelligence as an ‘emergent phenomenon.’ That is, we don’t really need to understand the computational system of the brain because intelligence/consciousness/whatever is an ‘emergent phenomenon’ that somehow arises from the process of thinking. I promise: anyone telling you something is an emergent property is trying to distract you. Calling intelligence an emergent property is a way of saying ‘I don’t really know what’s happening here, and I don’t really know where it’s happening, so I’m going to call it emergent.’ It’s a profoundly unscientific argument. Next is the claim that we only need to build very basic AI; once we have a rudimentary AI system, we can tell that system to improve itself, and presto! Singularity achieved! But this is asserted without a clear story of how it would actually work. Computers, for all of the ways in which they can iterate proscribed functions, still rely very heavily on the directives of human programmers. What would the programming look like to tell this rudimentary artificial intelligence to improve itself? If we knew that, we’d already have solved the first problem. And we have no idea how such a system would actually work, or how well. This notion often is expressed with a kind of religious faith that I find disturbing.

Freddie’s important point reminds me of a comment in Paul Bloom’s recent essay in the Atlantic on brain science: “Scientists have reached no consensus as to precisely how physical events give rise to conscious experience, but few doubt any longer that our minds and our brains are one and the same.” (By the way, I don’t know what Freddie’s precise views are on these questions of mind, brain, and consciousness, so he might not agree with where I’m taking this.) Bloom’s statement that cognitive scientists “have reached no consensus” on how consciousness arises rather understates things: it would be better to say that they have no idea whatsoever how this happens. But that’s just another way of saying that they don’t know that it does happen, that “our minds and our brains are one and the same.” It’s an article of faith.

The problems with this particular variety of faith are a significant theme in David Bentley Hart’s The Experience of God, as, for instance, in this passage:

J. J. C. Smart, an atheist philosopher of some real acuity, dismisses the problem of consciousness practically out of hand by suggesting that subjective awareness might be some kind of “proprioception” by which one part of the brain keeps an eye on other parts of the brain, rather as a device within a sophisticated robot might be programmed to monitor the robot’s own systems; and one can see, says Smart, how such a function would be evolutionarily advantageous. So the problem of how the brain can be intentionally directed toward the world is to be explained in terms of a smaller brain within the brain intentionally directed toward the brain’s perception of the world. I am not sure how this is supposed to help us understand anything about the mind, or how it does much more than inaugurate an infinite explanatory regress. Even if the mechanical metaphors were cogent (which they are not, for reasons mentioned both above and below), positing yet another material function atop the other material functions of sensation and perception still does nothing to explain how all those features of consciousness that seem to defy the physicalist narrative of reality are possible in the first place. If I should visit you at your home and discover that, rather than living in a house, you instead shelter under a large roof that simply hovers above the ground, apparently neither supported by nor suspended from anything else, and should ask you how this is possible, I should not feel at all satisfied if you were to answer, “It’s to keep the rain out”— not even if you were then helpfully to elaborate upon this by observing that keeping the rain out is evolutionarily advantageous.

I highly recommend Hart’s book on this topic (and on many others). You don’t have to be a religious believer to perceive that eliminative materialism is a theory with a great many problems.

Tuesday, March 4, 2014

the self that computers know

Ed Finn:

The idea that a computer might know you better than you know yourself may sound preposterous, but take stock of your life for a moment. How many years of credit card transactions, emails, Facebook likes, and digital photographs are sitting on some company’s servers right now, feeding algorithms about your preferences and habits? What would your first move be if you were in a new city and lost your smartphone? I think mine would be to borrow someone else’s smartphone and then get Google to help me rewire the missing circuits of my digital self.  

My point is that this is not about inconvenience — increasingly, it’s about a more profound kind of identity outsourcing....  

In history, in business, in love, and in life, the person (or machine) who tells the story holds the power. We need to keep learning how to read and write in these new languages, to start really seeing our own shadow selves and recognizing their power over us. Maybe we can even get them on our side.

A few years ago I quoted Jaron Lanier on the Turing Test:

But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?

Ed Finn is inadvertently illustrating Lanier’s point. What does a computer think my “identity” is, my “self” is? Why, credit card transactions and Facebook likes, of course. So Finn agrees with the computer. He for one welcomes our new cloud-based overlords.