Monday, April 14, 2014
About Stahl and Robert Herrick. If we were really serious about finding out whether Robert Herrick had used an emoticon, we’d look for his manuscripts — since we could never be sure that his printers had carried out his wishes accurately, especially in those days of highly variable printing practices. But those manuscripts, I think, are not available.
The next step would be to look online for a facsimile of the first, or at least a very early, edition, and while Google Books has just such a thing, it is not searchable. So, being the lazy guy that I am, I looked for nineteenth-century editions, and in the one I came across, there are no parentheses and hence no emoticon.
So it’s possible, I’d say likely, that the parenthesis in the poem was inserted by a modern editor. Not that parentheses weren’t used in verse in Herrick’s time — they were — but not as widely as we use them today and not in the same situations. Punctuation in general was unsettled in the seventeenth century — as unsettled as spelling: Shakespeare spelled his own name several different ways — and there were no generally accepted rules. Herrick was unlikely to have had consistent punctuational practices himself, and even if he did he couldn't expect either his printers or his readers to share them.
So more generally, I think Stahl’s guess is ahistorical. The first emoticons seem to have been invented about thirty years ago, and are clearly the artifact of the computer age, or, more specifically, a purely digital or screen-based typewriting-only environment — because if you were printing something out before sending it, you could just grab a pen and draw a perfectly legible, friendly, not-rotated-90-degrees smiley, or frowney, or whatever, as people still do. Emoticons arose to address a problem that did not and does not exist in a paper-centric world.
And one final note: in the age between the invention of the typewriter and the transition to digital text, people certainly realized that type could make images — but they were rather more ambitious about it.
Sunday, April 13, 2014
Why machine-led systems thinking is destroying the culture: A talk with Simon Head http://t.co/2NGe2qORTf
— Quentin Hardy (@qhardy) April 13, 2014
If you follow the embedded link you’ll see that Head argues that algorithm-based technologies are, in many workplaces, denying to humans the powers of judgment and discernment:
I have a friend who works in physical rehabilitation at a clinic on Park Avenue. She feels that she needs a minimum of one hour to work with a patient. Recently she was sued for $200,000 by a health insurer, because her feelings exceeded their insurance algorithm. She was taking too long.
The classroom has become a place of scientific management, so that we’ve baked the expertise of one expert across many classrooms. Teachers need a particular view. In core services like finance, personnel or education, the variation of cases is so great that you have to allow people individual judgment. My friend can’t use her skills.
To Hardy’s tweet Marc Andreessen, co-creator of the early web browser Mosaic and co-founder of Netscape, replied,
@qhardy That is so cute. I trust he wrote it on animal skins in his yurt?
— Marc Andreessen (@pmarca) April 13, 2014
Before I comment on that response, I want to look at another story that came across my Twitter feed about five minutes later, an extremely thoughtful reflection by Brendan Keogh on “games evangelists and naysayers”. Keogh is responding to a blog post by noted games evangelist Jane McGonigal encouraging all her readers to find people who have suffered some kind of trauma and get them to play a pattern-matching video game, like Tetris, as soon as possible after their trauma. And why wouldn’t you do this? Don't you want to “HELP PREVENT PTSD RIGHT NOW”?
McGonigal ... wants a #Kony2012-esque social media campaign to get 100,000 people to read her blog post. She thinks it irresponsible to sit around and wait for definitive results. She even goes so far as to label those that voice valid concerns about the project as “games naysayers” and compares them to climate change deniers.
The project is an unethical way to both present findings and to gather research data. Further, it trivialises the realities of PTSD. McGonigal runs with the study’s wording of Tetris as a potential “vaccine”. But you wouldn’t take a potential vaccine for any disease and distribute it to everyone after a single clinical trial. Why should PTSD be treated with any less seriousness? Responding to a comment on the post questioning the approach, McGonigal cites her own suffering of flashbacks and nightmares after a traumatic experience to demonstrate her good intentions (intentions which I do not doubt for a moment that she has). Yet, she wants everyone to try this because it might work. She doesn’t stop to think that one test on forty people in a controlled environment is not enough to rule out that sticking Tetris or Candy Crush Saga under the nose of someone who has just had a traumatic experience could potentially be harmful for some people (especially considering Candy Crush Saga is not even mentioned in the study itself!).
Further, and crucially, in her desire to implement this project in the real world, she makes no attempt to compare or contrast this method of battling PTSD with existing methods. It doesn’t matter. The point is that it proves games can be used for good.
If we put McGonigal’s blog post together with Andreessen’s tweet we can see the outlines of a very common line of thought in the tech world today:
1) We really earnestly want to save the world;
2) Technology — more specifically, digital technology, the technology we make — can save the world;
3) Therefore, everyone should eagerly turn over to us the keys to society;
4) Anyone who doesn’t want to turn over those keys to us either doesn't care about saving the world, or hates every technology of the past 5000 years and just wants to go back to writing on animal skins in his yurt, or both;
5) But it doesn't matter, because resistance is futile. If anyone expresses reservations about your plan you can just smile condescendingly and pat him on the head — “Isn’t that cute?” — because you know you’re going to own the world before too long.
And if anything happens to go astray, you can just join Peter Thiel on his libertarian-tech-floating-earthly-Paradise.
[Auden on Ischia, by George Daniell]
From the late 1940s to the late 1950s, W. H. Auden spent part of each year on the Island of Ischia in the Bay of Naples. When he bought a small house in Austria and left Italy, he wrote a lovely and funny poem called "Good-bye to the Mezzogiorno" in which he reflected on how he, as the child of a "potato, beer-or-whiskey / Guilt culture," never became anything more than a stranger in southern Italy.
This could be a reason
Why they take the silencers off their Vespas,
Turn their radios up to full volume,
And a minim saint can expect rockets — noise
As a counter-magic, a way of saying
Boo to the Three Sisters: "Mortal we may be,
But we are still here!"
Thursday, April 10, 2014
As students and their families rethink the value of the liberal arts, defenders of traditional education are understandably ambivalent. On the one hand, the diminished stature of the liberal arts seems long overdue, and this critical reevaluation might lead to thoughtful reform. On the other, this reevaluation might doom the liberal arts to irrelevance. To that end, Minding the Campus asked a list of distinguished thinkers a straightforward question: should we be unhappy that the liberal arts are going down? Here are responses from Heather Mac Donald, Thomas Lindsay, and Samuel Goldman.
Three more answers, by Patrick Deneen, Peter Wood, and Peter Lawler, follow here. Each respondent agrees with the question’s premise, though there’s a partial dissent from Samuel Goldman, who notes that “liberal education can't be reduced to colleges, course offerings, or graduate programs” — liberal learning and the experience of great art happen outside these formal settings, and we don't have any reliable ways of measuring how often.
Goldman’s point is a good one, and I’d like to imitate its spirit and inquire more seriously into the assumptions underlying the conversation.
First of all, we might note that enrollment in university humanities courses is not declining, despite everything you’ve heard. But the respondents to the Minding the Campus symposium would not be consoled by this news, since, as several of them point out, humanities and arts programs have all too frequently abandoned the teaching of traditional great books and masterpieces of art — or at best have made the study of such works optional.
But even if that’s true, it may not support the claim that “the liberal arts are going down.” Consider the things we’d need to know before we could draw that conclusion:
- What are the geographical boundaries of our inquiry? Are we looking just at American colleges and universities, or are we considering what’s happening in other countries?
- What are the temporal boundaries of our inquiry? If we’re comparing the educational situation today to 1960 the trends may look rather different than if we’re comparing our present moment to 1600.
- What does a student need to be doing in order to qualify as studying the liberal arts in some traditional form? Do they need to be majoring in a liberal-arts program that follows some (to-be-defined) traditional model? Or would an engineering major who had participated in a core curriculum like that at Columbia, or a pre-law major here at Baylor who took the pre-law Great Texts track, count?
- What population are we looking at? We might ask this question: “What percentage of full-time college and university students are pursuing a traditional liberal arts curriculum?” But we might also ask this question: “What percentage of a given country’s 18-to-22-year-olds are pursuing a traditional liberal arts curriculum?”
That last point seems to be especially important. If we were to ask the second question, then we’d have to say that a higher percentage of young Americans are studying traditional liberal arts than are doing so in almost any other country, or have done so at almost any point in human history — would we not? When the traditional liberal-arts curriculum that Minding the Campus prefers was dominant, a far smaller percentage of Americans attended college or university. So maybe the liberal arts — however traditionally you define them — aren’t going down at all, if we take the whole American population and an expansive time-frame into account. The question just needs to be framed much more precisely.
Wednesday, April 9, 2014
The common method here goes something like this: when faced with a tricky philosophical problem, it's useful to strip away all the irrelevant contextual details so as to isolate the key issues involved, which then, so isolated, will be easier to analyze. The essential problem with this method is its assumption that we know in advance which elements of a complex problem are essential and which are extraneous. But we rarely know that; indeed, we can only know that if we have already made significant progress towards solving our problem. So in “simplifying” our choices by taking an enormous complex of knowledge — the broad range of knowledge that we bring to all of our everyday decisions — and placing almost all of it behind a veil of ignorance, we may well be creating a situation so artificially reductive that it tells us nothing at all about the subjects we’re inquiring into. Moreover, we are likely to be eliminating not just what we explicitly know but also the tacit knowledge whose vital importance to our cognitive experience Michael Polanyi has so eloquently emphasized.
By contrast to the veil-of-ignorance approach, consider its near-opposite, the approach to logic and argumentation developed by Stephen Toulmin in his The Uses of Argument. For Toulmin, the problem with most traditional approaches to logic is this very tendency to simplification I’ve been discussing — a simplification that can produce, paradoxically enough, its own unexpected complications and subtleties. Toulmin says that by the middle of the twentieth century formal philosophical logic had become unfortunately disconnected from what Aristotle had been interested in: “claims and conclusions of a kind that anyone might have occasion to make.” Toulmin comments that “it may be surprising to find how little progress has been made in our understanding of the answers in all the centuries since the birth, with Aristotle, of the science of logic.”
So Toulmin sets out to provide an account of how, in ordinary life as well as in philosophical discourse, arguments are actually made and actually received. Aristotle had in one sense set us off on the wrong foot by seeking to make logic a “formal science — an episteme.” This led in turn, and eventually, to attempts to make logic a matter of purely formal mathematical rigor. But to follow this model is to abstract arguments completely out of the lifeworld in which they take place, and leave us nothing to say about the everyday debates that shape our experience. Toulmin opts instead for a “jurisprudential analogy”: a claim that we evaluate arguments in the same complex, nuanced, and multivalent way that evidence is weighed in law. When we evaluate arguments in this way we don't get to begin by ruling very much out of bounds: many different kinds of evidence remain in play, and we just have to figure out how we see them in relation to one another. Thus Toulmin re-thinks “the uses of argument” and what counts as responsible evaluation of the arguments that we regularly confront.
It seems to me that when we try to understand intelligence and consciousness we need to imitate Toulmin’s strategy, and that if we don’t we are likely to trivialize and reduce human beings, and the human lifeworld, in pernicious ways. It’s for this reason that I would like to call for an end to simplifying thought experiments. (Not that anyone will listen.)
So: more about all this in future posts, with reflections on Mark Halpern’s 2006 essay on “The Trouble with the Turing Test”.
Tuesday, April 8, 2014
- the battery life of a Kindle Paperwhite
- the weight of a Kindle Paperwhite
- the screen resolution of a Kindle Fire (I won’t even ask for it to be color)
- the (free!) cellular connectivity of a Kindle Keyboard
- the hardware keyboard and navigating system of a Kindle Keyboard
- the highlighting/note-taking UI of the Kindle for iOS app
- the glare-freeness of a paper codex (or, failing that, of a Kindle Paperwhite)
Just a few random comments on this wishlist:
1) I want “the hardware keyboard and navigating system of a Kindle Keyboard” because touchscreens handle such actions pretty badly. (Much of what follows goes for typing on a virtual keyboard also.) The chief problem is that when you’re trying to highlight using your finger — and to an extent even when you use a stylus, which, even if it helps, is one more thing to have to deal with — that finger blocks your view of what you’re highlighting, which means that you have to guess whether you’re hitting your target or not, or else pivot both your head and your finger to try to get a better look. With all touchscreen devices I am regularly overshooting or undershooting the terminus ad quem of my highlight. With the good old Kindle Keyboard your hands are not on the screen, so you can see it fully and clearly — and you have the additional benefit of being able to highlight without moving your hands from their reading position: just shift your thumb a bit and you’re there.
2) In commending the Paperwhite, Amazon says “Unlike reflective tablet and smartphone screens, the latest Kindle Paperwhite reads like paper — no annoying glare, even in bright sunlight.” This is not true. The Paperwhite screen is far, far less reflective than a glass tablet screen — but it’s still considerably more reflective than a paper page, and when I’m reading on it outdoors, which I love to do, I often have to adjust the angle of the screen to eliminate glare.
3) The latest Kindle Fire is an absolutely beautiful piece of hardware: solidly built, pleasant to hold, significantly lighter than earlier versions, and featuring a glorious hi-res screen. Its response time is also considerably faster than the Paperwhite’s, whose lagginess can be occasionally frustrating. But the software is mediocre at best. The video app works flawlessly, but reading can be frustrating if you’re doing any highlighting or annotating. It’s hard to select text for annotating, and if a highlight crosses to the next “page” it can sometimes take me three or four tries to get the selection to end properly — often I end up selecting the whole of the next page, with no way to back up, or else I get a pop-up dictionary definition of something on the page. For someone who interacts a lot with books it’s maddening. Also, the Kindle version of the Instapaper app, which I like to use to read online posts and articles, is really buggy: when you try scrolling through an article it flickers and shudders madly, and it crashes too frequently. The overall reading experience is much, much better when using the Kindle app and Instapaper app on iOS. (Also, the iOS Instapaper app plays really nicely with other services, like Tumblr and Pinboard.)
All this said, I’d be happy enough with a Kindle Paperwhite with a hi-res screen. I read outside a lot, and when I do that device is my only (digital) option, so I just wish its text were nicer to look at and easier to navigate through. I suppose I could also wish for a Kindle Fire or iPad Mini with a totally nonreflective screen, but as far as I know that’s impossible.
A well-educated time traveller from 1914 enters a room divided in half by a curtain. A scientist tells him that his task is to ascertain the intelligence of whoever is on the other side of the curtain by asking whatever questions he pleases.
The traveller’s queries are answered by a voice with an accent that he does not recognize (twenty-first-century American English). The woman on the other side of the curtain has an extraordinary memory. She can, without much delay, recite any passage from the Bible or Shakespeare. Her arithmetic skills are astonishing — difficult problems are solved in seconds. She is also able to speak many foreign languages, though her pronunciation is odd. Most impressive, perhaps, is her ability to describe almost any part of the Earth in great detail, as though she is viewing it from the sky. She is also proficient at connecting seemingly random concepts, and when the traveller asks her a question like “How can God be both good and omnipotent?” she can provide complex theoretical answers.
Based on this modified Turing test, our time traveller would conclude that, in the past century, the human race achieved a new level of superintelligence. Using lingo unavailable in 1914 (it was coined later by John von Neumann), he might conclude that the human race had reached a “singularity” — a point where it had gained an intelligence beyond the understanding of the 1914 mind.
The woman behind the curtain is, of course, just one of us. That is to say, she is a regular human who has augmented her brain using two tools: her mobile phone and a connection to the Internet and, thus, to Web sites like Wikipedia, Google Maps, and Quora. To us, she is unremarkable, but to the man she is astonishing. With our machines, we are augmented humans and prosthetic gods, though we’re remarkably blasé about that fact, like anything we’re used to. Take away our tools, the argument goes, and we’re likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.
No matter which side you take in this argument, you should take note of its terms: that “intelligence” is a matter of (a) calculation and (b) information retrieval. The only point at which the experiment even verges on some alternative model of intelligence is when Wu mentions a question about God’s omnipotence and omnibenevolence. Presumably the woman would do a Google search and read from the first page that turns up.
But what if the visitor from 1914 asks for clarification? Or wonders whether the arguments have been presented fairly? Or notes that there are more relevant passages in Aquinas that the woman has not mentioned? The conversation could come to a sudden and grinding stop, the illusion of intelligence — or rather, of factual knowledge — instantly dispelled.
Or suppose that the visitor says that the question always reminds him of the Hallelujah Chorus and its invocation of Revelation 19:6 — “Alleluia: for the Lord God omnipotent reigneth” — but that that passage rings hollow and bitter in his ears since his son was killed in the first months of what Europe was already calling the Great War. What would the woman say then? If she had a computer instead of a smartphone she could perhaps see if ELIZA is installed — or she could just set aside the technology and respond as an empathetic human being. Which a machine could not do.
Similarly, what if the visitor had simply asked “What is your favorite flavor of ice cream?” Presumably then the woman would just answer his question honestly — which would prove nothing about anything. Then we would just have a person talking to another person, which we already know that we can do. “But how does that help you assess intelligence?” cries the exasperated experimenter. What’s the point of having visitors from 1914 if they’re not going to stick to the script?
These so-called “thought experiments” about intelligence deserve the scare-quotes I have just put around the phrase because they require us to suspend almost all of our intelligence: to ask questions according to a narrowly limited script of possibilities, to avoid follow-ups, to think only in terms of what is calculable or searchable in databases. They can tell us nothing at all about intelligence. They are pointless and useless.
Thursday, March 20, 2014
This is why it’s strange to think of these unplugging events as anything like detox: the goal isn’t really abstinence but a return to these technologies with a renewed appreciation of how to use them.
Cep seems to think that the word “detox” has one meaning, the one associated with drug addiction: drug addicts visit a clinic to detoxify their system, with the determination not to return to their bad old habits. But we also use the word in other ways: think about the “detox spa,” which people visit for a period during which they avoid foods they usually eat and drinks they usually drink — but with every expectation of resuming their familiar practices, more or less, when they return to ordinary life. If people are thinking of “digital detox” in that sense, which is, it seems to me, far more common than the drug-addiction sense, then Cep’s critique simply doesn’t apply.
Few who unplug really want to surrender their citizenship in the land of technology; they simply want to travel outside it on temporary visas. Those who truly leave the land of technology are rarely heard from again, partly because such a way of living is so incommensurable. The cloistered often surrender the ability to speak to those of us who rely so heavily on technology. I was mindful of this earlier this month when I reviewed a book about a community of Poor Clares in Rockford, Illinois. The nuns live largely without phones or the Internet; they rarely leave their monastery. Their oral histories are available only because a scholar spent six years interviewing them, organizing their testimonies so that outsiders might have access. The very terms of their leaving the plugged-in world mean that their lives and wisdom aren’t readily accessible to those of us outside their cloister.
Is this meant as a criticism of the nuns? That their alternative way of life isn’t “accessible” to others? If not, then I don't know what the point of the anecdote is. If so, then I don't agree. No one is obliged to make his or her experience accessible to anyone else.
That is why, I think, the Day of Unplugging is such a strange thing. Those who unplug have every intention of plugging back in.
As noted above: exactly.
This sort of stunt presents an experiment, with its results determined beforehand; one finds exactly what one expects to find: never more, often less.
Wait, do we know that? I’d be quite surprised if no one who has unplugged has been surprised by the resulting experience.
It’s one of the reasons that the unplugging movement has attracted such vocal criticism from the likes of Nathan Jurgenson, Alexis Madrigal, and Evgeny Morozov. If it takes unplugging to learn how better to live plugged in, so be it.
Isn’t that often just the point? I know that when I take a vacation from Twitter, which I do sometimes, I do it in hopes that when I return I’ll enjoy it more and get more from it.
But let’s not mistake such experiments in asceticism for a sustainable way of life. For most of us, the modern world is full of gadgets and electronics, and we’d do better to reflect on how we can live there than to pretend we can live elsewhere.
I guess I just haven’t seen anybody detoxing who is thinking of it as “a sustainable way of life.” I think we can take it as axiomatic that anyone who announces his or her detox on social media isn’t undertaking severe ascesis. So as far as I can tell, Cep’s post doesn't hold up as a substantive critique.
But she’s surely right about one thing: detoxers can be obnoxiously self-congratulatory about their highly temporary withdrawals from our digital worlds.
In my earlier post on the value of knowledge I said I would return to some questions raised there. About some recent academic research Aaron Gordon had said, “Two questions come immediately to mind: Why would anyone study these things, and why would anyone pay someone to study these things?” I spoke to the first question in that post, and will return to it here; then I’ll get to the important stuff.
Let’s consider this recent story from the New York Times:
American science, long a source of national power and pride, is increasingly becoming a private enterprise.
In Washington, budget cuts have left the nation’s research complex reeling. Labs are closing. Scientists are being laid off. Projects are being put on the shelf, especially in the risky, freewheeling realm of basic research. Yet from Silicon Valley to Wall Street, science philanthropy is hot, as many of the richest Americans seek to reinvent themselves as patrons of social progress through science research.
The result is a new calculus of influence and priorities that the scientific community views with a mix of gratitude and trepidation....
That personal setting of priorities is precisely what troubles some in the science establishment. Many of the patrons, they say, are ignoring basic research — the kind that investigates the riddles of nature and has produced centuries of breakthroughs, even whole industries — for a jumble of popular, feel-good fields like environmental studies and space exploration.
Please read the whole article, which treats vitally important issues.
And now let’s perform a thought-experiment. Read this list of 20th-century scientific discoveries and ask yourself: How many of them would have happened under the kind of funding regime American science is headed towards — or rather, that is already largely in place? (Those philanthropists may be funding their pet projects more directly now, but they’ve been giving to universities, with plentiful strings attached, for a long time.) Or consider something not even on that list — perhaps because it separates mathematics and technology from science: You want to talk about “esoteric”? What could possibly be more esoteric than David Hilbert’s Entscheidungsproblem? And yet it was Alan Turing’s answer to that problem that gave us digital computing — a result that no one could possibly have foreseen.
I think this thought-experiment, coupled with the NYT article on science, suggests to us a few points:
1) No one knows, and no one can know, what the future uses will be of the knowledge people are discovering, or want to discover, today.
2) Knowledge which is obviously useful, especially in the widespread sense of “potentially lucrative,” will always, in a free-market or mostly-free-market system, have its patrons.
3) Therefore it’s reasonable for society to sponsor institutions and scholars that work on the apparently esoteric, on the same principle that pharmaceutical companies pay for research into new drugs. Very, very little of that research makes its way to market — but what does pays for the rest.
All that if you want to make a largely economic, use-oriented case for the value of the apparently esoteric.
But I don't want to make that case.
In Auden’s greatest poetic achievement, the sequence Horae Canonicae, he writes with wonder of the incomprehensibility, the unpredictability, the wholly gratuitous nature, of vocation — of obedience to a calling. “To ignore the appetitive goddesses ... // what a prodigious step to have taken.”
There should be monuments, there should be odes,
to the nameless heroes who took it first,
to the first flaker of flints
who forgot his dinner,
the first collector of sea-shells
to remain celibate.
For Auden, there is nothing more delightfully and distinctively human than this obedience to an inexplicable desire to learn, to study — a desire that in some can suspend our habitual animal obedience to appetite — including, I would like to note, not just the appetites for food and sex, but also for economic security and social prestige. To heed this call to an utterly non-utilitarian studiousness is a mark of civilization in an individual — and also in a society, which, if it can afford it, should create and sustain institutions in which such studiousness can flourish.
So, to the question of whether anyone in our tremendously wealthy and astonishingly wasteful society should pay people to study the body temperature of the nesting Red-footed Booby (Sula sula), I say: Absolutely. Take the money out of the athletic department’s budget if need be. And when you’re done paying them, build a freakin’ monument to them.
The author, Aaron Gordon, runs some random word searches in an academic database and lists some of the articles he finds. For instance: “Complexity of Early and Middle Successional Stages in a Rocky Intertidal Surfgrass Community,” by Teresa Turner, Oecologia, Vol. 60, No. 1 (1983), pp. 56-65. And “Darwin and Nietzsche: Selection, Evolution, and Morality,” by Catherine Wilson, Journal of Nietzsche Studies, Vol. 44, No. 2 (Summer 2013), pp. 354-370. And “Body Temperature of the Nesting Red-Footed Booby (Sula sula),” by R. J. Shallenberger, G. C. Whittow, R. M. Smith, The Condor, Vol. 76, No. 4 (Winter, 1974), pp. 476-478.
Then Gordon comments, “Two questions come immediately to mind: Why would anyone study these things, and why would anyone pay someone to study these things?” And later: “There must be some way to distinguish between the useful and the esoteric.”
But I want to say: What’s not interesting here? Darwin and Nietzsche aren’t interesting? The ecological complexities of surfgrass beaches aren’t interesting? How birds regulate their body temperature — that’s not interesting? I actually wanted to click through to many of those articles to find out more. Moral: Don't allow your own lack of intellectual curiosity to be a guide to the value of research.
And to the claim that “There must be some way to distinguish between the useful and the esoteric”: no, there mustn’t, and there almost certainly isn’t. Moreover, and more important, I’m reminded of Auden’s prophecy in “Under Which Lyre” of the dangerous powers of Apollo: “And when he occupies a college, / Truth is replaced by Useful Knowledge.” Thus also the speech of the old A. E. Housman in Tom Stoppard’s play The Invention of Love:
A scholar's business is to add to what is known. That is all. But it is capable of giving the very greatest satisfaction, because knowledge is good. It does not have to look good or even sound good or even do good. It is good just by being knowledge. And the only thing that makes it knowledge is that it is true. You can't have too much of it and there is no little too little to be worth having. There is truth and falsehood in a comma.
Obviously my view of things — Auden’s view, Stoppard’s Housman’s view — has implications for the economics of university life. And maybe I’ll get to that in another post, soon. But for now I just wanted to register some irritation and suggest a different way of thinking about these matters than Gordon’s.
Tuesday, March 18, 2014
Emily Bell recently argued that some hot new tech/journalism/etc. companies that are positioning themselves as radical alternatives to business-as-usual are, in the matter of hiring women and minorities, totally business-as-usual: a bunch of white guys with a slight scattering of women and minorities.
Nate Silver, one of those whom Bell was describing, didn't like her accusation: “The idea that we’re bro-y people just couldn’t be more off. We’re a bunch of weird nerds. We’re outsiders, basically. And so we have people who are gay, people of different backgrounds. I don’t know. I found the piece reaaaally, really frustrating. And that’s as much as I’ll say.”
What happens when formerly excluded groups gain more power, like techies? They don’t just let go of their old forms of cultural capital. Yet they may be blind to how their old ways of identifying and accepting each other are exclusionary to others. They still interpret the world through their sense of status when they were “basically, outsiders.”
Most tech people don’t think of it this way, but the fact that most of them wear jeans all the time is just another example of cultural capital, an arbitrary marker that’s valued in their habitus, both to delineate it and to preserve it. Jeans are arbitrary, as arbitrary as ties....
How does that relate to Silver’s charged defense that his team could not be “bro-y” people? Simple: among the mostly male, smart, geeky groups that most programmers and technical people come from, there is a way of existing that is, yes, often fairly exclusionary to women but not in ways that Silver and his friends recognize as male privilege. When they think of male privilege, they are thinking of “macho” jocks and have come to believe their own habitus to be completely natural, all about merit, and also in opposition to macho culture. But if brogrammer culture opposes macho culture, it does not follow that brogrammer culture is automatically welcoming to other excluded groups, such as women.
I’m reminded here of a fantastic essay Freddie deBoer wrote a while back about the triumphs of geek culture, especially in its love of fantasy and SF:
Commercial dominance, at this point, is a given. What critical arbiters would you like? Is it a Best Picture Oscar for one of their movies? Can’t be. Return of the King won it in 2003. (And ten other Academy Awards. And four Golden Globes. And every other major award imaginable.) Recognition from the “literary establishment?” Again, I don’t know what that term could refer to; there are publishers and there are academics and there are book reviewers, but there is no such thing as a literary establishment. Even a cursory look at individual actors dedicated to literature will reveal that glory for sci-fi, fantasy, and graphic novels has already arrived. Turn of the century “best book” lists made ample room for J.R.R. Tolkien, Jules Verne, Arthur C. Clarke, Philip K. Dick, and others. Serious book critics fall all over themselves to praise the graphic novels of Alison Bechdel and Art Spiegelman. Respect in the world of contemporary fiction? Michael Chabon, Lev Grossman, and other “literary fantasists” have earned rapturous reviews from the stuffiest critics. Penetration into university culture and academic literary analysis? English departments are choked with classes on sci-fi and genre fiction, in an effort to attract students. Popular academic conferences are held not just on fantasy or graphic novels but specifically on Joss Whedon and Batman. Peer-reviewed journals host special issues on cyberpunk and video game theory.
To the geeks, I promise: I’m not insulting you. I’m conceding the point that you have worked for so long to prove. Victory is yours. It has already been accomplished. It’s time to enjoy it, a little; to turn the critical facility away from the outside world and towards political and artistic problems within the world of geek culture; and if possible, maybe to defend and protect those endangered elements of high culture. They could use the help. It’s time for solidarity.
And this is what I’d also like to say to Nate Silver: Victory is yours. It has already been accomplished. Dude, you worked for the New York Times and you left it voluntarily — to work for ESPN, 80% of which is owned by Disney and the other 20% by Hearst. In 21st-century America, it is not possible to be any more inside than this. You cannot stick it to the Man — you are the Man. It’s best that you, and people in similar positions, realize that as soon as possible; and forego the illusion that you have some outsider status that exempts you from criticism like that presented by Emily Bell. Whether you agree with Bell’s argument or not, get used to it: you’re going to hear a lot more along those lines as long as you continue to be the Man.
Monday, March 17, 2014
But the impossibility of silence says something about why it remains so alluring. Noise-related annoyances stem from emotion—frustration, disorientation, fear—as much as actual audible irritation. During late nineteenth-century industrialization, “The noise of [the railroad’s] steam whistle,” writes Emily Thompson in The Soundscape of Modernity, “was disturbing not only for its loudness but also for its unfamiliarity.” When a 1926 study determined that an individual horse and carriage was actually louder than an individual automobile, The New York Times perceptively responded that it was not the nature of the sounds that was the trouble, but the fact that “the ear has not learned how to handle them.” In a 1929 poll of New Yorkers, noises identified as “machine-age inventions” were the ones that bothered them most. And by the late 1920s, activists and engineers had a way to quantify their irritations. In 1929, the decibel was established as the standard unit of sound. Science contributes to noisiness in more than just audible output: New means of measuring heightened people’s awareness of their aggravation.
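The decibel mentioned above is a logarithmic measure, which is exactly why it made irritation quantifiable: each tenfold increase in sound pressure adds a fixed 20 dB. A minimal sketch of the standard conversion (the reference pressure of 20 micropascals is the conventional threshold of hearing; the sample pressures are illustrative, not from the 1929 study):

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals (approx. threshold of hearing)

def sound_pressure_level(pressure_pa):
    """Convert a sound pressure in pascals to decibels (dB SPL)."""
    return 20 * math.log10(pressure_pa / P_REF)

# A tenfold increase in pressure always adds exactly 20 dB:
quieter = sound_pressure_level(0.002)  # ~40 dB
louder = sound_pressure_level(0.02)    # ~60 dB
```

The logarithmic scale is the design choice worth noticing: it roughly tracks how loudness is perceived, so a number like “60 dB” compresses an enormous physical range into something an activist or engineer could put in a complaint.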
For what it’s worth, I wrote about this topic in my book The Pleasures of Reading in an Age of Distraction, at first in the context of the history of reading aloud:
This much is clear: the more noise surrounds us, the harder it is to read aloud. Reading aloud, and still more murmured reading, requires a quiet enough environment that you can hear what you speak; otherwise it is a pointless activity. And it might be worth pausing here to note that city life has always been loud — that is not an artifact of modern times. Bruce R. Smith’s extraordinary study The Acoustic World of Early Modern England gives us a full and rather disorienting sense of just how cacophonous the world was for many of our ancestors half-a-millennium ago. And Diana Webb in her book Privacy and Solitude in the Middle Ages argues, convincingly, that many people, men and women alike, sought monastic life less from piety than from a desperate need to find refuge from all the racket. Maybe they just wanted to find a place where they could be left alone to read.
The conclusion we may draw from all this is simply this: the noisier the environment, the more readers are driven to be silent. It is only in “privacy and solitude” that reading aloud or murmuring can ever be a reasonable option, and rarely have our ancestors had that option. The boy trying to study at the kitchen table while the clamor of family life goes on around him is a typical figure in the history of reading. No one could plausibly claim that we late-moderns are uniquely challenged in this respect: surely a higher percentage of human beings today have regular access to silence than at any time in human history. Most Americans and Western Europeans, and many people elsewhere — not all, mind you — live in environments with quiet rooms, or quiet corners. And many who lack quiet homes have had access to libraries, which have for centuries been dedicated, as it were, to silence.
For the thrilling conclusion of my thoughts on this subject, you’ll just have to buy the book. (Spoiler alert: not everyone wants to keep libraries quiet.)
Friday, March 14, 2014
It’s one thing to lose your personal liberty as a result of being confined in a prison, but you are still allowed to believe whatever you want while you are in there. In the UK, for instance, you cannot withhold religious manuscripts from a prisoner unless you have a very good reason. These concerns about autonomy become particularly potent when you start talking about brain implants that could potentially control behaviour directly. The classic example is Robert G Heath [a psychiatrist at Tulane University in New Orleans], who did this famously creepy experiment [in the 1950s] using electrodes in the brain in an attempt to modify behaviour in people who were prone to violent psychosis. The electrodes were ostensibly being used to treat the patients, but he was also, rather gleefully, trying to move them in a socially approved direction. You can really see that in his infamous paper on ‘curing’ homosexuals. I think most Western societies would say ‘no thanks’ to that kind of punishment.
To me, these questions about technology are interesting because they force us to rethink the truisms we currently hold about punishment. When we ask ourselves whether it’s inhumane to inflict a certain technology on someone, we have to make sure it’s not just the unfamiliarity that spooks us. And more importantly, we have to ask ourselves whether punishments like imprisonment are only considered humane because they are familiar, because we’ve all grown up in a world where imprisonment is what happens to people who commit crimes. Is it really OK to lock someone up for the best part of the only life they will ever have, or might it be more humane to tinker with their brains and set them free? When we ask that question, the goal isn’t simply to imagine a bunch of futuristic punishments – the goal is to look at today’s punishments through the lens of the future.
To me, the key to these speculations may be found in one word in the first sentence quoted: “allowed.” To speak of prisoners as “allowed to believe whatever [they] want” while in prison is to speak of human thought as the rightful property of the State, which it may then entrust to us — but may also withhold from us if there is, as when withholding religious texts, “a very good reason.”
Understand: I am not saying that Roache is simply advocating thought control as a means of punishment or rehabilitation of lawbreakers. I am, rather, noting that her language crosses a vitally important line — the line that separates two radically different ideas about whom or what human personhood belongs to — without her demonstrating any awareness whatsoever that she has done so. It is natural and normal to her to talk as though states can do what they want with human minds and simply must decide what would work best — what would have the optimal social effects. (This is yet another mode of rationalism in politics.)
There is a kind of philosopher — an all too common kind of philosopher — who when considering such topics habitually identifies himself or herself with power. Pronouns matter a good deal here. Note that in Roache’s comments “we” are the ones who have the power to inflict punishment on “someone.” We punish; they are punished. We control; they are controlled. We decide; they are the objects of our decisions. Would Roache’s speculations have taken a different form, I wonder, if she had reversed the pronouns?
This is the danger for all of us who have some wealth and security and status: to imagine that the punitive shoe will always be on the other’s foot. In these matters it might be a useful moral discipline for philosophers to read the great classics of dystopian fiction, which habitually envision the world of power as seen by the powerless.
Wednesday, March 12, 2014
2) “You’re making this way too complicated.”
3) “You’re making this way too simple.”
4) “You’re stupid.”
5) “You’re evil.”
One example will do for now. In what became a famous case in the design world, five years ago Doug Bowman left Google and explained why:
When I joined Google as its first visual designer, the company was already seven years old. Seven years is a long time to run a company without a classically trained designer. Google had plenty of designers on staff then, but most of them had backgrounds in CS or HCI. And none of them were in high-up, respected leadership positions. Without a person at (or near) the helm who thoroughly understands the principles and elements of Design, a company eventually runs out of reasons for design decisions. With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?” When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board. And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.
Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.
What Bowman thought of as a bug — “data [was] paralyzing the company and preventing it from making any daring design decisions” — the leadership at Google surely thought of as a feature. What’s the value of “daring design decisions”? We’re trying to get clicks here, and we can find out how to achieve that.
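The Google procedure Bowman describes — run the variants, check the data, launch the winner — is in essence a two-sample comparison of click-through rates. A minimal sketch of that logic (all numbers are hypothetical, and this is a generic two-proportion z-test, not anything Google has published about its actual methodology):

```python
from math import sqrt

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """z-statistic comparing the click-through rates of variants A and B."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Hypothetical results for two shades of blue:
z = two_proportion_z(clicks_a=1040, views_a=50_000, clicks_b=1120, views_b=50_000)
# Conventional rule: |z| > 1.96 means significant at the 5% level.
# "Data in your favor? Ok, launch it. Data shows negative effects?
#  Back to the drawing board."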
With that story in mind, let’s turn to Michael Oakeshott’s great essay “Rationalism in Politics” and his account therein of Francis Bacon’s great project for setting the quest for knowledge on a secure footing:
The Novum Organum begins with a diagnosis of the intellectual situation. What is lacking is a clear perception of the nature of certainty and an adequate means of achieving it. ‘There remains,’ says Bacon, ‘but one course for the recovery of a sound and healthy condition — namely, that the entire work of understanding be commenced afresh, and the mind itself be from the very outset not left to take its own course, but guided at every step’. What is required is a ‘sure plan’, a new ‘way’ of understanding, an ‘art’ or ‘method’ of inquiry, an ‘instrument’ which (like the mechanical aids men use to increase the effectiveness of their natural strength) shall supplement the weakness of the natural reason: in short, what is required is a formulated technique of inquiry....
The art of research which Bacon recommends has three main characteristics. First, it is a set of rules; it is a true technique in that it can be formulated as a precise set of directions which can be learned by heart. Secondly, it is a set of rules whose application is purely mechanical; it is a true technique because it does not require for its use any knowledge or intelligence not given in the technique itself. Bacon is explicit on this point. The business of interpreting nature is ‘to be done as if by machinery’, ‘the strength and excellence of the wit (of the inquirer) has little to do with the matter’, the new method ‘places all wits and understandings nearly on a level’. Thirdly, it is a set of rules of universal application; it is a true technique in that it is an instrument of inquiry indifferent to the subject-matter of the inquiry.
It is hard to imagine a more precise and accurate description of the thinking of the Baconians of Mountain View. They didn’t want Bowman’s taste or experience. He might have been the most gifted designer in the world, but so what? “The strength and excellence of the wit (of the inquirer) has little to do with the matter.” Instead, decisions are “to be done as if by machinery” — no, strike that, they are to be done precisely by machinery and only by machinery. Moreover, there is no difference in technique between a design decision and any other kind of decision: the method of letting the data rule “is an instrument of inquiry indifferent to the subject-matter of the inquiry.”
Oakeshott’s essay provides a capsule history of the rise of Rationalism as a universal method of inquiry and action. It focuses largely on Bacon and Descartes as the creators of the Rationalist frame of mind and on their (less imaginative and creative) successors. It turns out that an understanding of seventeenth-century European thought is an indispensable aid to understanding the technocracy of the twenty-first century world.
Tuesday, March 11, 2014
Monday, March 10, 2014
In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end. That is what we are working to. When you are asked, “Where is God? Who is God?” stand up and say, “I am God and here is God, not as yet completed, but still advancing towards completion, just in so much as I am working for the purpose of the universe, working for the good of the whole of society and the whole world, instead of merely looking after my personal ends.” In that way we get rid of the old contradiction, we begin to perceive that the evil of the world is a thing that will finally be evolved out of the world, that it was not brought into the world by malice and cruelty, but by an entirely benevolent designer that had not as yet discovered how to carry out its benevolent intention. In that way I think we may turn towards the future with greater hope.
We might compare this rhetoric to that of Kelly’s new essay in Wired, which begins with a classic Borg Complex move: “We’re expanding the data sphere to sci-fi levels and there’s no stopping it. Too many of the benefits we covet derive from it.” But if resistance is futile, that’s no cause for worry, because resistance would be foolish.
It is no coincidence that the glories of progress in the past 300 years parallel the emergence of the private self and challenges to the authority of society. Civilization is a mechanism to nudge us out of old habits. There would be no modernity without a triumphant self.

So while a world of total surveillance seems inevitable, we don’t know if such a mode will nurture a strong sense of self, which is the engine of innovation and creativity — and thus all future progress. How would an individual maintain the boundaries of self when their every thought, utterance, and action is captured, archived, analyzed, and eventually anticipated by others?

The self forged by previous centuries will no longer suffice. We are now remaking the self with technology. We’ve broadened our circle of empathy, from clan to race, race to species, and soon beyond that. We’ve extended our bodies and minds with tools and hardware. We are now expanding our self by inhabiting virtual spaces, linking up to billions of other minds, and trillions of other mechanical intelligences. We are wider than we were, and as we offload our memories to infinite machines, deeper in some ways.
There’s no point asking Kelly for details. (“The self forged by previous centuries will no longer suffice” for what? Have we really “broadened our circle of empathy”? What are we “wider” and “deeper” than, exactly? And what does that mean?) This is not an argument. It is, like Shaw’s “New Theology,” a sermon, directed primarily towards those who already believe and secondarily to sympathetic waverers, the ones with a tiny shred of conscience troubling them about the universal surveillance state whose arrival Kelly awaits so breathlessly. Those who would resist need not be addressed because they’re on their way to — let’s see, what’s that phrase? — ah yes: the “dustbin of history.”
Now, someone might protest at this point that I am not being fair to Kelly. After all, he does say that a one-way surveillance state, in which ordinary people are seen but do not see, would be “hell”; and he even says “A massively surveilled world is not a world I would design (or even desire), but massive surveillance is coming either way because that is the bias of digital technology and we might as well surveil well and civilly.”
Let’s pause for a moment to note the reappearance of the Borg here, and Kelly’s habitual offloading of responsibility from human beings to our tools: for Woody Allen, “the heart wants what it wants” but for Kelly technology wants what it wants, and such sovereign beings always get their way.
But more important, notice here that Kelly thinks it’s a simple choice to decide on two-way surveillance: we “might as well.” He admits that the omnipotent surveillance state would be hell but he obviously doesn’t think that hell has even the remotest chance of happening. Why is he so confident? Because he shares Shaw’s belief in an evolutionary religion in which all that is true and good and holy emerges in history as the result of an inevitably beneficent process. Why should we worry about possible future constrictions of selfhood when the track record of “modernity” is, says Kelly, so utterly spotless, with its “glories of progress” and its “triumphant self”? I mean, it’s not as though modernity had a dark side or anything. All the arrows point skyward. So: why worry?
The only difference between Shaw and Kelly in this respect is that for Shaw the emerging paradisal “ecstasy of a brain” is a human brain; for Kelly it’s digital. Kelly has just identified digital technology as the means by which Shaw’s evolutionary progressivist Utopia will be realized. But what else is new? The rich, powerful, and well-connected always think that they and people like them (a) will end up on the right side of history and (b) will be insulated from harm — which is after all what really counts. Kelly begins his essay thus: “I once worked with Steven Spielberg on the development of Minority Report” — a lovely opener, since it simultaneously allows Kelly to boast about his connections in the film world and to dismiss Philip K. Dick’s dystopian vision as needlessly fretful. When the pre-cog system comes, it won’t be able to hurt anyone who really matters. So let’s just cue up Donald Fagen one more time and get down to the business of learning to desire whatever it is that technology wants. The one remaining spiritual discipline in Kelly's theology is learning to love Big Brother.
This relieved me greatly. And then I wondered why it did.
So I thought about it, and I came to the conclusion that I didn't really want to write a post on this topic — not really, not in my heart of hearts — but felt some inchoate obligation to do so. It just seems like the sort of thing about which I ought to have an opinion that I ought to be able to state. But that's silly. There's no reason whatsoever for me to opine about this. And yet only a period of intense busyness kept me from rushing to my computer to commit opinionizing all over the internet.
I think there's something to learn from this experience. For one thing, it enables me to see more clearly what we all know already: that when I see a topic being tossed around a lot on blogs and on Twitter, it's easy to be swept along by that tide. I was looking the other day at the mute filters I have set up for my Twitter client, and I couldn't help laughing at how many of them provided a record of those brief enthusiasms that take over Twitter for a day or two or three and then disappear forever. It took me a minute to remember who Todd Akin is. It took me even longer to figure out why I had added the word "tampon" to my mute list, but I finally remembered that time when Melissa Harris-Perry was wearing tampon earrings and everybody on Twitter had something to say about that. This is why some Twitter clients have mute filters that can be set for a limited time: I would imagine that three days would almost always be sufficient. Then the tide would have passed, and would be unlikely ever to return.
But I learned something else from this experience also: you can actually use the speed of the Internet to prevent you from wasting your time – or maybe I shouldn't say wasting it, but rather using it in a less-than-ideal fashion. If you just wait 48 or 72 hours, someone you follow on Twitter will almost certainly either write or link to a post which makes the very argument that you would have made if you had been quick off the mark.
For me, these realizations – which might not be new to any of you – are helpful. They remind me to give a topic a chance to cycle through the Internet for a few days, so I can find who has written wisely about it and point others to that person; and, if there are things that haven't been said that need to be said, I can address them from a more informed perspective and with a few days’ reflection under my belt. I can also practice the discipline — or maybe it’s a luxury rather than a discipline — of thinking longer thoughts about more challenging issues than are raised by Melissa Harris-Perry’s earrings. Or even trigger warnings.
Friday, March 7, 2014
I’ve had far too many conversations over the last few years with trained, experienced, and practicing biblical scholars, young, middle aged, and near retirement, working in Evangelical institutions, trying to follow Jesus and use their brains and training to help students navigate the challenging world of biblical interpretation.
And they are dying inside.
Just two weeks ago I had the latest in my list of long conversations with a well-known, published, respected biblical scholar, who is under inhuman stress trying to negotiate the line between institutional expectations and academic integrity. His gifts are being squandered. He is questioning his vocation. His family is suffering. He does not know where to turn.
I wish this were an isolated incident, but it’s not.
I wish these stories could be told, but without the names attached, they are worthless. I wish I had kept a list, but even if I had, it wouldn’t have done anyone much good. I couldn’t have used it. Good people would lose their jobs.
I’m getting tired of hearing the same old story again and again. This is madness.
Enns is right that this kind of story is all too common, and all too sad. I’ve known, and talked to, and counseled, and prayed with, a number of such people over the years, and they’re not all in Biblical Studies either. But here’s the thing: I have also talked to an equal or greater number of equally distressed Christian scholars whose problem is that they teach in secular institutions where they cannot express their religious convictions — in the classroom or in their scholarship — without being turned down for tenure or promotion, or (if they are contingent faculty or pre-tenure) simply being dismissed. Odd that Enns shows no awareness of this situation.
I think he doesn't because he wants to present as a pathology of evangelicalism what is more generally and seriously a pathology of the academic job market: people feeling intimidated or utterly silenced because if they lose their professorial position they know they stand almost no chance of getting another one. Moreover, this isn’t a strictly academic issue either: people all over the world and in all walks of life feel this way about their jobs, afraid of losing them but troubled by their consciences about some aspect of their workplace. But I think these feelings are especially intense among American academics because of the number of people who can’t imagine themselves doing anything other than being a professor — and also because of the peculiar forms of closure in the most “open” academic environments.
As Stanley Fish wrote some years ago in an essay called “Vicki Frost Objects”,
What, after all, is the difference between a sectarian school which disallows challenges to the divinity of Christ and a so-called nonideological school which disallows discussion of the same question? In both contexts something goes without saying and something else cannot be said (Christ is not God or he is). There is of course a difference, not however between a closed environment and an open one but between environments that are differently closed.
So if we’re going to have compassion for academics feeling trapped in institutions that are uncongenial to their beliefs, let’s be ecumenical about it.
Moreover, I can’t tell from his post exactly what Enns thinks should be done about the situation, even within the evangelical context. If he thinks that all that Christian colleges and seminaries have to do is to relax their theological statements — well, that would be grossly naïve. No matter how tightly or loosely a religious institution defines itself, there will always be people on the boundaries, edge cases who will feel uncomfortable at best or coerced into submission at worst. And if, like the modern university, an institution insists that it has no such limitations on membership at all, then that will simply mean, as Fish makes clear, that the boundaries are there but unstated and invisible — until you cross them.
Wednesday, March 5, 2014
Now people have a variety of ways to dismiss these issues. For example, there’s the notion of intelligence as an ‘emergent phenomenon.’ That is, we don’t really need to understand the computational system of the brain because intelligence/consciousness/whatever is an ‘emergent phenomenon’ that somehow arises from the process of thinking. I promise: anyone telling you something is an emergent property is trying to distract you. Calling intelligence an emergent property is a way of saying ‘I don’t really know what’s happening here, and I don’t really know where it’s happening, so I’m going to call it emergent.’ It’s a profoundly unscientific argument. Next is the claim that we only need to build very basic AI; once we have a rudimentary AI system, we can tell that system to improve itself, and presto! Singularity achieved! But this is asserted without a clear story of how it would actually work. Computers, for all of the ways in which they can iterate prescribed functions, still rely very heavily on the directives of human programmers. What would the programming look like to tell this rudimentary artificial intelligence to improve itself? If we knew that, we’d already have solved the first problem. And we have no idea how such a system would actually work, or how well. This notion often is expressed with a kind of religious faith that I find disturbing.
Freddie’s important point reminds me of a comment in Paul Bloom’s recent essay in the Atlantic on brain science: “Scientists have reached no consensus as to precisely how physical events give rise to conscious experience, but few doubt any longer that our minds and our brains are one and the same.” (By the way, I don’t know what Freddie’s precise views are on these questions of mind, brain, and consciousness, so he might not agree with where I’m taking this.) Bloom’s statement that cognitive scientists “have reached no consensus” on how consciousness arises rather understates things: it would be better to say that they have no idea whatsoever how this happens. But that’s just another way of saying that they don’t know that it does happen, that “our minds and our brains are one and the same.” It’s an article of faith.
The problems with this particular variety of faith are a significant theme in David Bentley Hart’s The Experience of God, as, for instance, in this passage:
J. J. C. Smart, an atheist philosopher of some real acuity, dismisses the problem of consciousness practically out of hand by suggesting that subjective awareness might be some kind of “proprioception” by which one part of the brain keeps an eye on other parts of the brain, rather as a device within a sophisticated robot might be programmed to monitor the robot’s own systems; and one can see, says Smart, how such a function would be evolutionarily advantageous. So the problem of how the brain can be intentionally directed toward the world is to be explained in terms of a smaller brain within the brain intentionally directed toward the brain’s perception of the world. I am not sure how this is supposed to help us understand anything about the mind, or how it does much more than inaugurate an infinite explanatory regress. Even if the mechanical metaphors were cogent (which they are not, for reasons mentioned both above and below), positing yet another material function atop the other material functions of sensation and perception still does nothing to explain how all those features of consciousness that seem to defy the physicalist narrative of reality are possible in the first place. If I should visit you at your home and discover that, rather than living in a house, you instead shelter under a large roof that simply hovers above the ground, apparently neither supported by nor suspended from anything else, and should ask you how this is possible, I should not feel at all satisfied if you were to answer, “It’s to keep the rain out”— not even if you were then helpfully to elaborate upon this by observing that keeping the rain out is evolutionarily advantageous.
I highly recommend Hart’s book on this topic (and on many others). You don’t have to be a religious believer to perceive that eliminative materialism is a theory with a great many problems.
Tuesday, March 4, 2014
The idea that a computer might know you better than you know yourself may sound preposterous, but take stock of your life for a moment. How many years of credit card transactions, emails, Facebook likes, and digital photographs are sitting on some company’s servers right now, feeding algorithms about your preferences and habits? What would your first move be if you were in a new city and lost your smartphone? I think mine would be to borrow someone else’s smartphone and then get Google to help me rewire the missing circuits of my digital self.
My point is that this is not about inconvenience — increasingly, it’s about a more profound kind of identity outsourcing....
In history, in business, in love, and in life, the person (or machine) who tells the story holds the power. We need to keep learning how to read and write in these new languages, to start really seeing our own shadow selves and recognizing their power over us. Maybe we can even get them on our side.
A few years ago I quoted Jaron Lanier on the Turing Test:
But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?
Ed Finn is inadvertently illustrating Lanier’s point. What does a computer think my “identity” is, my “self” is? Why, credit card transactions and Facebook likes, of course. So Finn agrees with the computer. He for one welcomes our new cloud-based overlords.
Tuesday, February 25, 2014
However, I keep track of much of my ongoing reading — mostly for general research rather than for my classes — on my Pinboard page, so you can find lots of links and quotes there. Also, my longish article on the apparently endless “Two Cultures” controversy is up at the Books & Culture site. I’d be honored if you read it.
Thursday, February 20, 2014
ILL is a pain in the neck for libraries, though a necessary one. It has existed in one form or another for a long time, but could only become truly effective first with the computerization of library catalogues and then with the sharing of them over the internet. (I remember the dizzy rapture with which I first searched overseas university library catalogues, via telnet, back in the early ’90s.) But ILL remains cumbersome: once requests come in they have to be processed, which in the case of books means packing and mailing, and in the case of articles in journals often means scanning to PDF before emailing. And with books there’s always the question of whether they will be returned on time or at all.
So it’s easy to see the great appeal for libraries of new tools like Occam’s Reader, which allows lending of e-books by providing a link that opens the books in a web-browser window. Once the allotted time is up, poof, the book (or, presumably, any other digitizable item) disappears. Howard writes that “Borrowed e-books can be read but not copied, printed out, or downloaded. The idea is to give borrowers quick access while reassuring publishers that copyrighted content will remain secure and can be shared without eating into sales.”
But if I can see it on my screen I can copy it — just with a bit of effort. For instance, I can take a screenshot, convert that image to PDF, and open it in PDFpen, which has built-in OCR capabilities. (Or, if I were going to do this often, I could change the default screenshot file format to PDF, thus saving a step.) The Occam’s Reader designers have done everything they can to assuage the concerns of publishers about too-easy lending limiting book sales, but any electronic format is, if you have the right tools, easier to copy and transmit than any paper format.
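As an aside, the default-format change mentioned above is a one-line setting on macOS (a sketch, assuming a Mac, which the mention of PDFpen implies; the `com.apple.screencapture` domain is the standard system preference for this):

```shell
# Tell macOS to save screenshots as PDF rather than the default PNG,
# which removes the manual image-to-PDF conversion step before OCR.
defaults write com.apple.screencapture type pdf

# Restart the UI server so the new setting takes effect.
killall SystemUIServer
```

To restore the default behavior later, run the same `defaults write` command with `png` in place of `pdf`.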
The great boon of e-books for publishers is obvious: they eliminate one of the great uncertainties of the book business, which is how many books to print. Established publishers are very good at guessing the likely sales of a given book, but when they’re wrong the consequences can be significant: being unable to meet the demand for an unexpected hit, demand that might fade before the books are back in the stores; or, more commonly, getting stuck with a warehouse full of unordered or returned copies that then have to be remaindered or pulped. E-books make all those uncertainties disappear; but I suspect that many publishers are wondering whether they don’t in the end amount to a Trojan horse for the book business.
Monday, February 17, 2014
The one book for which Thomas de Quincey is known today is his Confessions of an English Opium-Eater, in which he details, in a style owing much to writers of the seventeenth-century Baroque, “the pleasures and pains of Opium” — though with, it seems to many readers, more emphasis on the pleasures. He says at the outset that he admits no guilt, and seems to want more than anything else to give a plausible and self-exculpating explanation of how an English “scholar” and “philosopher” could have fallen under the sway of the drug so closely associated with undisciplined and weak-willed “Orientals.”
Perhaps feeling that he had not done enough to explain himself, De Quincey later wrote a sequel, Suspiria de Profundis, which he begins with an interesting reflection that connects, oddly enough, with the themes of this blog. NB: De Quincey writes few short sentences.
Habitually to dream magnificently, a man must have a constitutional determination to reverie. This in the first place, and even this, where it exists strongly, is too much liable to disturbance from the gathering agitation of our present English life. Already, in this year 1845, what by the procession through fifty years of mighty revolutions amongst the kingdoms of the earth, what by the continual development of vast physical agencies, steam in all its applications, light getting under harness as a slave for man, powers from heaven descending upon education and accelerations of the press, powers from hell (as it might seem, but these also celestial) coming round upon artillery and the forces of destruction, the eye of the calmest observer is troubled; the brain is haunted as if by some jealousy of ghostly beings moving amongst us; and it becomes too evident that, unless this colossal pace of advance can be retarded (a thing not to be expected), or, which is happily more probable, can be met by counter forces of corresponding magnitude, forces in the direction of religion or profound philosophy, that shall radiate centrifugally against this storm of life so perilously centripetal towards the vortex of the merely human, left to itself, the natural tendency of so chaotic a tumult must be to evil; for some minds to lunacy, for others to a regency of fleshly torpor. How much this fierce condition of eternal hurry upon an arena too exclusively human in its interests is likely to defeat the grandeur which is latent in all men, may be seen in the ordinary effect from living too constantly in varied company. The word dissipation, in one of its uses, expresses that effect; the action of thought and feeling is too much dissipated and squandered. To reconcentrate them into meditative habits, a necessity is felt by all observing persons for sometimes retiring from crowds.
No man ever will unfold the capacities of his own intellect who does not at least checker his life with solitude. How much solitude, so much power.
In short, the pace of technological acceleration (including technologies of text) has the effect of making people too occupied, too social, “living too constantly in varied company.” Varied company may be good, but sometimes people need their own internal company. But they are incessantly drawn out of themselves and “dissipate” their mental powers by being deprived, or depriving themselves, of the restorative and concentrating effects of solitude.
De Quincey argues that this constant sociality limits one particular kind of human power to a great and (to him) deeply troubling degree:
Among the powers in man which suffer, by this too intense life of the social instincts, none suffers more than the power of dreaming. Let no man think this a trifle. The machinery for dreaming planted in the human brain was not planted for nothing. That faculty, in alliance with the mystery of darkness, is the one great tube through which man communicates with the shadowy. And the dreaming organ, in connection with the heart, the eye and the ear, compose the magnificent apparatus which forces the infinite into the chambers of a human brain, and throws dark reflections from eternities below all life upon the mirrors of the sleeping mind.
But just as there are technologies that dissipate human power, so too there may be technologies that concentrate it in dreams:
But if this faculty suffers from the decay of solitude, which is becoming a visionary idea in England, on the other hand, it is certain that some merely physical agencies can and do assist the faculty of dreaming almost preternaturally. Amongst these is intense exercise; to some extent at least, and for some persons; but beyond all others is opium, which indeed seems to possess a specific power in that direction; not merely for exalting the colors of dream-scenery, but for deepening its shadows; and, above all, for strengthening the sense of its fearful realities.
De Quincey goes on to say that, yes, exercise is better for you than opium, and that he could only throw off the yoke of opium by intense exercise — but he also makes it clear in this passage that exercise does not intensify and deepen the dreams of the solitary person the way opium does.
In De Quincey's ideal world, then, the sociability of textual technologies would be countered by equally powerful but also safe solitude- and dream-conducive pharmaceutical technologies. Alas that the world is not ideal.
In the most famous passage of his Confessions, De Quincey writes this great apostrophe:
O just, subtle, and mighty opium! that to the hearts of poor and rich alike, for the wounds that will never heal, and for “the pangs that tempt the spirit to rebel,” bringest an assuaging balm; – eloquent opium! that with thy potent rhetoric stealest away the purposes of wrath, and, to the guilty man, for one night givest back the hopes of his youth, and hands washed pure from blood; and, to the proud man, a brief oblivion for “Wrongs unredressed, and insults unavenged;” that summonest to the chancery of dreams, for the triumphs of suffering innocence, false witnesses, and confoundest perjury, and dost reverse the sentences of unrighteous judges; thou buildest upon the bosom of darkness, out of the fantastic imagery of the brain, cities and temples, beyond the art of Phidias and Praxiteles, – beyond the splendour of Babylon and Hekatompylos; and, “from the anarchy of dreaming sleep,” callest into sunny light the faces of long-buried beauties, and the blessed household countenances, cleansed from the “dishonours of the grave.” Thou only givest these gifts to man; and thou hast the keys of Paradise, oh just, subtle, and mighty opium!
But, in the deepest and bitterest of ironies, De Quincey knows, and expects us to know, that he is patterning his praise on another famous apostrophe, this one written by Sir Walter Raleigh: “O eloquent, just, and mighty death! whom none could advise, thou hast persuaded; what none hath dared thou hast done; and whom all the world hath flattered, thou only hast cast out of the world and despised: thou hast drawn together all the far-stretched greatness, all the pride, cruelty, and ambition of man, and covered it all over with these two narrow words, Hic jacet.”
Opium kills. The perfect pharmakon with the power to cure our technologically-induced alienation from our selves, and the power to release our full mental powers, without poisoning us or in any way harming us, had not yet been developed. How does the situation look in 2014?
Commentary on technologies of reading, writing, research, and, generally, knowledge. As these technologies change and develop, what do we lose, what do we gain, what is (fundamentally or trivially) altered? And, not least, what's fun?
Alan Jacobs is Distinguished Professor of the Humanities in the Honors Program of Baylor University and the author, most recently, of The “Book of Common Prayer”: A Biography and The Pleasures of Reading in an Age of Distraction. His homepage is here.