Text Patterns - by Alan Jacobs

Wednesday, April 30, 2014

the circle game

Here’s an excellent post by the redoubtable AKMA on desire and interpretation, with particular reference to the “Jesus’s Wife Fragment” (JWF):

For instance, why did anyone think the fragment was genuine in the first place? I am not a papyrologist, a palaeographer, or a reader of Coptic — but the early photos of the fragment looked odd to me right away. Clearly, they looked right enough to pass muster to Karen King and the experts she consulted, so my unease doesn’t count for much.

I can’t keep from thinking that somewhere in the alchemy of academic judgement, some people wanted to think the JWF was genuine, and others that it wasn’t. In fact, I’ll be bold enough to say that I know this was true. Did a prior disposition in favour of revolutionary, disruptive, rebellious parties in early Christianity have any effect on Prof. King’s judgement about the fragment? In an irreproachably sound academic way, it certainly did: she more than many other scholars is open to the possibility that non-standard traditions about Jesus circulated broadly and for centuries after the consolidation of conciliar doctrine about Jesus (as in fact it still does). Many scholars would be less disposed to consider anything about a JWF from the start. So without impugning her scholarship in the least, it seems fair to say that her disposition affected her judgement at least as far as her interest in the fragment and her willingness even to consider its genuineness.

(By the way, if you have any doubts about the fraudulence of the fragment, read this post and follow the links.) As AKMA points out, most of us tend to be far more aware of the desires of our opponents than of our own. Hang around theologically liberal biblical scholars and you’ll get the impression that they are deeply serious truth-seekers, while evangelicals and fundamentalists are too frightened of losing their comforting belief-structure to face hard truths. Hang around with those evangelicals, by contrast, and you’ll get the impression that they are doing serious, evidence-based scholarship while those liberals kowtow to the intellectual trends of the moment in order to keep their jobs at secular (or at best thoroughly secularized) universities.

I would just add that — as I suggested in this earlier post — the desires that AKMA points to are linked to incentives. Religiously conservative scholars who work in religiously conservative institutions have strong incentives to reach religiously conservative conclusions in their scholarship, lest they lose their jobs; conversely, religious believers who work at theologically liberal or secular institutions have equally strong incentives to (a) reach liberalizing and secularizing conclusions in their scholarship or (b) keep their mouths shut about their beliefs and try to limit any dissonance between their views and those of their colleagues.

Both sides, then — and this will be equally true of divided scholarly communities in many other fields — will strive to exclude those who disagree with them from serious consideration. Consider, to take but one example, this recent Boston Globe interview with Bart Ehrman:

IDEAS: Is it widely accepted among scholars that Jesus did not claim divinity?

EHRMAN: That has been a widely held scholarly view for about 300 years among critical scholars. Among scholars who are evangelical Christians who are committed to the idea that Jesus is God and knew he was God, they maintain that Jesus did say that he was God.

Note how Ehrman tries to cast objections to his view as occurring only among “evangelical Christians,” even though he knows perfectly well that countless Catholic and Orthodox scholars hold the same view. And note the reference to “critical scholars”: Truly critical scholars — the term is clearly complimentary — deny that Jesus claimed to be God, because those claims come in the Gospel of John, the historical character of which they reject. But what qualifies someone as a critical scholar? Well, among other things, the view that Jesus did not claim to be God and that the Gospel of John is non-historical. Thus the circle neatly closes.

There are of course religiously conservative versions of the same thing, which is, basically, the “no true Scotsman” fallacy. The question is: how do we get out of these loops of self-confirmation?

Wednesday, April 23, 2014

triumphalism and historical imagination

A great many of our social ills are caused, or at least intensified, by a lack of historical imagination. Imagination looks ahead to what might be, but is always informed, whether we realize it or not, by what we think has happened up to this point. Any image of what’s to come will necessarily trace a line that extends the vector of history — or history as it is perceived: which is where the problem lies.

Limited knowledge of history creates a kind of recency effect: people whose knowledge of the past extends only a few years back will perceive short-term trends as having more power and impetus than is warranted. And recency effects are amplified by prevailing ideologies: for many liberals and libertarians, belief in inevitable progress, and for social conservatives and apocalyptic Christians, belief in inevitable decline. But I think the most common and lamentable social result of this ignorance-bred lack of historical imagination is triumphalism.

Consider how, after the collapse of the Soviet empire, many advocates of capitalism came to believe that the only ideological alternative to their system was gone and gone for good. This confidence paved the way for a culture of indulgence and, yes, a “greed is good” mentality, in which CEO salaries skyrocket and finance companies still hand out extravagant bonuses when company performance is declining or even plummeting. But, thanks in part to such thoughtlessness, Marxism may be returning — or, more likely to win influence, a not-really-Marxist leftist critique of capitalism like that of Thomas Piketty.

Or consider school desegregation, a victory that many believe was won decades ago. But Jelani Cobb demonstrates that this is not at all true:

And so, sixty years after Brown, it is clear that the notion of segregation as a discrete phenomenon, an evil that could be flipped, like a switch, from on to off, by judicial edict, was deeply naïve. The intervening decades have shown, in large measure, the limits of what political efforts directed at desegregation alone could achieve, and the crumbling of both elements of “separate but equal” has left us at an ambivalent juncture. To the extent that desegregation becomes, once again, a pressing concern — and even that may be too grand a hope — it will have to involve the tax code, the minimum wage, and other efforts to redress income inequality. For the tragedy of this moment is not that black students still go to overwhelmingly black schools, long after segregation was banished by law, but that they do so for so many of the same reasons as in the days before Brown.

It turns out that victories are not always what they appear to be, and that without vigilance old habits and practices and prejudices can silently and slowly but powerfully reassert themselves.

It’s in the light of these examples that I’d like to look at a recent essay by J. Bryant Lowder on whether gays and lesbians and their supporters should use reason and persuasion to win over their opponents:

We will undoubtedly continue to employ that approach when we have the necessary energy and emotional reserves. But we also reserve the right to use the recent miracle of gradually improving public and corporate opinion to get a little nonviolent justice, even a little retributive succor, when we can. All’s fair in love and war, and until our love is no longer the subject of debate, reasonable or otherwise, this war isn’t over.

Hardly anybody takes this approach (whose goal is to punish and then extinguish dissent, by whatever means) unless they are absolutely confident that they are on “the right side of history” — which is to say, the winning side — the permanently winning side. In other words, Lowder believes exactly what the people who instituted legal discrimination against gays believed: that there will never be a time when those over whom we seek to exert power will have power over us.

Well, maybe. Maybe there will never again be anti-gay discrimination, in this country anyway, like that of the past. Maybe countries elsewhere in the world will follow the same path that the West has (recently) followed. Similarly, maybe the kind of people who become philosophers at Oxford will always be the ones deciding how convicted criminals get punished — rather than being themselves subject to state coercion. As Jake Barnes says in The Sun Also Rises, "Isn't it pretty to think so?"

So let me close by suggesting one question: How would you act politically — what kinds of arguments would you make, what kinds of laws would you support, what means of persuasion would you use — if you knew that those whom you most despise will at some point hold the reins of political power in your country?

Tuesday, April 22, 2014

the wrong vox

The other day Vox.com ran an article claiming that the pace at which technological innovations are accepted is speeding up. The problem is, as Matt Novak pointed out, that really isn’t true. Not true at all.

And then things started getting a little weird. Vox began silently to make changes to the story, at first making slight alterations — where it had referred to “the internet” it now refers (more accurately) to the “World Wide Web.” Over the next day or so further changes were made — charts were deleted and added — still with no acknowledgement. But eventually two statements were added, at the beginning of the article:

Correction: This post originally gave incorrect dates for the introduction of radio and television technology and the invention of the cell phone. It also mis-labeled the web as the internet. We regret these errors.

and at the end:

Update: This post has been modified to include the original technology-adoption chart from the FCC that's the source for our graph. The graph has also been tweaked to more clearly denote the adoption of the web starting in 1991, not the broader internet. And Gizmodo is right: we should have noted these changes at the time. Our apologies.

“Matt Novak” or “Paleofuture” would have been better than “Gizmodo,” but this is a significant step forward. However, it’s not all that it should be. In a smart post written before the corrections were acknowledged, Freddie deBoer wrote,

It’s okay to make corrections — better than okay, actually, it’s necessary and responsible. But you have to come out and say you did that by writing a brief section (a paragraph will do) saying “we changed X, Y, and Z, and this is why.” If you don’t, it just looks dishonest, and it risks contributing to a sense of imperiousness that is not a good look. Worse, it gives you less incentive to not make the same mistake in the future, if you just disappear the old problems. There’s an “Updated by” line at the top, but no other information, and for me, that doesn’t do enough. Don’t compound the problems, guys. Just own up to them.

By the standards Freddie lays out, which seem to me the right ones, Vox’s appended statements do half the job: they acknowledge that changes have been made, and made to correct errors, but they don't deal with the larger problem, which is that some of the key claims in the article were and remain simply incorrect. As far as I can see, Vox has corrected the factual errors which led to the inaccurate conclusions but left those conclusions in place. Which seems a little odd.

I think this little contretemps needs to be considered in light of the big essay that Ezra Klein wrote to launch Vox, “How Politics Makes Us Stupid.” Here too we find a strong argument based on what turns out to be, as Caleb Crain pointed out, a simple and straightforward misreading of the data. But Klein has made no corrections, and as far as I can discover, there’s no acknowledgement on the Vox site of Crain’s challenge.

That “as far as I can discover” is perhaps the most important point of all. Vox doesn’t have comments. There is no “letters to the editor” page. Vox has no ombudsman. You can email or tweet at its writers, but they’re free to ignore you, and who knows if the editors see any of those communications? The site has no contact page that I can find. There’s not even a search box on the site: you have to use Google to find articles. Basically, Vox.com is a black box. Now, for the “card stacks” there is apparently some kind of correction model in place — but if for card stacks why not for articles? There seems to be no policy here, and only one person — the superb tech journalist Timothy B. Lee — whom I’ve seen responding to corrections. (He’s the one who let Matt Novak and me know about the changes made to the article I refer to at the beginning of this post. If others at Vox are doing this, please let me know in the comments.)

Klein has said repeatedly — see this interview for instance — that he wants to use Vox to explain the news to people, which is cool, but the explainer model coupled with the strong discouragement of feedback sends a pretty clear message: We know, we tell, you listen.

Contrast that attitude with the model the venerable New York Times says it wants to follow in its new endeavor, called The Upshot.

Perhaps most important, we want The Upshot to feel like a collaboration between journalists and readers. We will often publish the details behind our reporting — such as the data for our inequality project or the computer code for our Senate forecasting model — and we hope that readers will find angles we did not. We also want to get story assignments from you: Tell us what data you think deserves exploration. Tell us which parts of the news you do not understand as well as you’d like.

The staff of The Upshot is filled with people who love to learn new things. That’s why we became journalists. We consider it a great privilege to be able to delve into today’s biggest news stories and then report back to you with what we’ve found. We look forward to the conversation.

Maybe The Upshot won’t live up to these noble ideals, but such an announcement is a good start. And shouldn't a high-profile “new media” venture like Vox be even more aware of and willing to embrace the communicative possibilities of ... well, of new media? Instead, they seem to be creating a one-way street, like a Victorian newspaper. Klein has said that he and his fellow Wonkblog writers “were badly held back not just by the technology, but by the culture of journalism.” But to me, the culture of journalism is not looking so bad right now. And while Vox.com is definitely a work in progress, it's not a good sign that responsiveness to and interaction with readers doesn't seem to have been part of their initial vision at all.

I hope Vox fixes these problems. There are things about it I really like — many of the card stacks are crisply accurate and therefore quite useful, and it has some first-rate writers, like Tim Lee and Dara Lind: see Lee’s excellent explanation of the confusing Aereo case and Lind’s clear and information-rich stack on prisons. But as long as the site remains so closed-off to its readers, many people will be likely to conclude that the difference between old media and new is that the old has higher standards and more accountability.

Friday, April 18, 2014

Death and Twitter

Yesterday Gabriel García Márquez died, and suddenly my Twitter feed was full of tributes to him. Person after person recalled how deeply they had been moved by his novels and stories. And yet, I don't believe that in the seven years I’ve been on Twitter I had ever before seen a single tweet about GGM.

This has happened often on Twitter: I think of Whitney Houston, Paul Walker, Philip Seymour Hoffman. People poured out their expressions of affection, gratitude, and grief — all for those whom they had never mentioned on Twitter until then. Why?

Well, death always does this to us, doesn’t it? When you hear that an old friend has died, even if you haven’t seen her in years and years, your memory draws up all the good times you had together: they appear before you enriched and intensified by the knowledge that they can no longer be added to. The story of your relationship takes vivid shape in light of its ending, as often happens also with stories we read.

But I think on Twitter that natural and probably universal experience gets amplified in the great cavern of social media. You tweet about Whitney Houston’s death in part because other people are tweeting about Whitney Houston’s death and you don’t want to seem cold or indifferent, and as the avalanche builds, it comes to seem that Whitney Houston was of great importance to a great many people — even though most of them hadn’t thought about her in fifteen years and wouldn’t have noticed if they never heard a song of hers again. Such are the effects of what Paul Ford and Matt Buchanan have called “peer-to-peer grieving”.

I’m reminded here of a brilliant piece by the playwright Bertolt Brecht called “Two Essays on Unprofessional Acting,” in which he comments:

One easily forgets that human education proceeds along highly theatrical lines. In a quite theatrical manner the child is taught how to behave; logical arguments only come later. When such-and-such occurs, it is told (or sees), one must laugh. It joins in when there is laughter, without knowing why; if asked why it is laughing it is wholly confused. In the same way it joins in shedding tears, not only weeping because the grown-ups do so but also feeling genuine sorrow. This can be seen at funerals, whose meaning escapes children entirely. These are theatrical events which form the character. The human being copies gestures, miming, tones of voice. And weeping arises from sorrow, but sorrow also arises from weeping.

To this we might add, “And tweeting arises from sorrow, but sorrow also arises from tweeting.”

And there's one more element worth noting: When someone like Philip Seymour Hoffman dies, at the height of his powers and of his fame, the grief that people express is distinctly different from that which they express for a faded star like Whitney Houston. Since they have no recent encounters with her music, they cast their minds back to their own youth — which is of course lost. As Gerard Manley Hopkins said to the young girl weeping over a forest losing its leaves in autumn, “It is Margaret you mourn for”.

All this said, I wonder if it might not be useful for all of us to spend some time thinking about those artists and musicians and writers and actors and thinkers whose death — we know now, while they’re still here, without any crowdsourced lamentation — would really and truly be a loss to our lives. And then maybe tweet a line or two of gratitude for them before death forces our hand.

Thursday, April 17, 2014

on documentation

This essay on scholarly documentation practices lays down some very useful principles — for some scholars working in some circumstances. Unfortunately, the author, Patrick Dunleavy, assumes a situation that doesn't yet exist and may not for some time to come.

Dunleavy presents as normative, indeed nearly universal, a situation in which (a) scholarly publication is natively digital because we live in “the digital age” and (b) scholars are working with open-access or public-domain sources that are readily available online. When those two conditions hold, his recommendations are excellent. But they don’t always hold, and what he calls “legacy” documentation is in fact not a legacy condition for many of us, but rather necessary and normal.

For instance: Dunleavy says of page-number citations, "That is legacy referencing, designed solely to serve the interests of commercial publishers, and 90% irrelevant now to the scholarly enterprise." I don't yet have any data about my recent biography of the Book of Common Prayer — see, and use, the links on the right of this page, please — but for my previous book, The Pleasures of Reading in an Age of Distraction, codex sales have exceeded digital sales by a factor of 10. So my 90/10 split is the opposite of what Dunleavy asserts to be the case. It makes no sense for me to think of the overwhelming majority of my readers as inhabiting a “legacy” realm and to focus my attention on documenting for the other ten percent. Page numbers are still eminently relevant to me and my readers. Dunleavy claims that “pagination in the digital age makes no sense at all,” which may be true, if and when we get to “the digital age.”

Moreover, most of my scholarly work is on figures — currently W. H. Auden, C. S. Lewis, Simone Weil, and Jacques Maritain — whose work is still largely or wholly under copyright. So I have few open-access or public-domain options for citing them. And this, too, is a common situation for scholars.

Dunleavy is thus laying down supposedly universal principles that in fact apply only to some scholars in some disciplines. Which is why this tweet from Yoni Appelbaum is so apropos:

Monday, April 14, 2014

smileys, emoticons, typewriter art

I hate to be a party pooper — no, really: I hate it — but I just don't think Levi Stahl has found an emoticon in a seventeenth-century poem — nor, for that matter, that Jennifer 8. Lee found one from 1862.

About Stahl and Robert Herrick. If we were really serious about finding out whether Robert Herrick had used an emoticon, we’d look for his manuscripts — since we could never be sure that his printers had carried out his wishes accurately, especially in those days of highly variable printing practices. But those manuscripts, I think, are not available.

The next step would be to look online for a facsimile of the first, or at least a very early, edition, and while Google Books has just such a thing, it is not searchable. So, being the lazy guy that I am, I looked for nineteenth-century editions, and in the one I came across, there are no parentheses and hence no emoticon:

So it’s possible, I’d say likely, that the parenthesis in the poem was inserted by a modern editor. Not that parentheses weren’t used in verse in Herrick’s time — they were — but not as widely as we use them today and not in the same situations. Punctuation in general was unsettled in the seventeenth century — as unsettled as spelling: Shakespeare spelled his own name several different ways — and there were no generally accepted rules. Herrick was unlikely to have had consistent punctuational practices himself, and even if he did he couldn't expect either his printers or his readers to share them.

So more generally, I think Stahl’s guess is ahistorical. The first emoticons seem to have been invented about thirty years ago, and are clearly an artifact of the computer age, or, more specifically, a purely digital or screen-based typewriting-only environment — because if you were printing something out before sending it, you could just grab a pen and draw a perfectly legible, friendly, not-rotated-90-degrees smiley, or frowney, or whatever, as people still do. Emoticons arose to address a problem that did not and does not exist in a paper-centric world.

And one final note: in the age between the invention of the typewriter and the transition to digital text, people certainly realized that type could make images — but they were rather more ambitious about it.

Sunday, April 13, 2014

the keys to society and their rightful custodians

Recently Quentin Hardy, the outstanding technology writer for the New York Times, tweeted this:

If you follow the embedded link you’ll see that Head argues that algorithm-based technologies are, in many workplaces, denying to humans the powers of judgment and discernment:

I have a friend who works in physical rehabilitation at a clinic on Park Avenue. She feels that she needs a minimum of one hour to work with a patient. Recently she was sued for $200,000 by a health insurer, because her feelings exceeded their insurance algorithm. She was taking too long.

The classroom has become a place of scientific management, so that we’ve baked the expertise of one expert across many classrooms. Teachers need a particular view. In core services like finance, personnel or education, the variation of cases is so great that you have to allow people individual judgment. My friend can’t use her skills.

To Hardy’s tweet Marc Andreessen, co-creator of the early web browser Mosaic and co-founder of Netscape, replied,

Before I comment on that response, I want to look at another story that came across my Twitter feed about five minutes later, an extremely thoughtful reflection by Brendan Keogh on “games evangelists and naysayers”. Keogh is responding to a blog post by noted games evangelist Jane McGonigal encouraging all her readers to find people who have suffered some kind of trauma and get them to play a pattern-matching video game, like Tetris, as soon as possible after their trauma. And why wouldn’t you do this? Don't you want to “HELP PREVENT PTSD RIGHT NOW”?

Keogh comments,

McGonigal ... wants a #Kony2012-esque social media campaign to get 100,000 people to read her blog post. She thinks it irresponsible to sit around and wait for definitive results. She even goes so far as to label those that voice valid concerns about the project as “games naysayers” and compares them to climate change deniers.

The project is an unethical way to both present findings and to gather research data. Further, it trivialises the realities of PTSD. McGonigal runs with the study’s wording of Tetris as a potential “vaccine”. But you wouldn’t take a potential vaccine for any disease and distribute it to everyone after a single clinical trial. Why should PTSD be treated with any less seriousness? Responding to a comment on the post questioning the approach, McGonigal cites her own suffering of flashbacks and nightmares after a traumatic experience to demonstrate her good intentions (intentions which I do not doubt for a moment that she has). Yet, she wants everyone to try this because it might work. She doesn’t stop to think that one test on forty people in a controlled environment is not enough to rule out that sticking Tetris or Candy Crush Saga under the nose of someone who has just had a traumatic experience could potentially be harmful for some people (especially considering Candy Crush Saga is not even mentioned in the study itself!).

Further, and crucially, in her desire to implement this project in the real world, she makes no attempt to compare or contrast this method of battling PTSD with existing methods. It doesn’t matter. The point is that it proves games can be used for good.

If we put McGonigal’s blog post together with Andreessen’s tweet we can see the outlines of a very common line of thought in the tech world today:

1) We really earnestly want to save the world;

2) Technology — more specifically, digital technology, the technology we make — can save the world;

3) Therefore, everyone should eagerly turn over to us the keys to society;

4) Anyone who doesn’t want to turn over those keys to us either doesn't care about saving the world, or hates every technology of the past 5000 years and just wants to go back to writing on animal skins in his yurt, or both;

5) But it doesn't matter, because resistance is futile. If anyone expresses reservations about your plan you can just smile condescendingly and pat him on the head — “Isn’t that cute?” — because you know you’re going to own the world before too long.

And if anything happens to go astray, you can just join Peter Thiel on his libertarian-tech-floating-earthly-Paradise.

Enjoy your yurts, chumps.

the internet and the Mezzogiorno

Auden on Ischia, by George Daniell

From the late 1940s to the late 1950s, W. H. Auden spent part of each year on the Island of Ischia in the Bay of Naples. When he bought a small house in Austria and left Italy, he wrote a lovely and funny poem called "Good-bye to the Mezzogiorno" in which he reflected on how he, as the child of a "potato, beer-or-whiskey / Guilt culture," never became anything more than a stranger in southern Italy.

As he thinks about the people of that region, he wonders if, despite the liveliness of the culture, they might be "without hope." And he muses, 

                                This could be a reason
Why they take the silencers off their Vespas,
    Turn their radios up to full volume,  

And a minim saint can expect rockets — noise
    As a counter-magic, a way of saying
Boo to the Three Sisters: "Mortal we may be,
    But we are still here!"

I thought of this poem the other day when I saw this story about how NPR played a little trick on its Facebook fans: giving them a headline that was not accompanied by an actual story, but that people commented on — vociferously, confidently — anyway. Writing like this (and it constitutes the vast majority of all online commenting) is not so much an attempt at communication or rational conversation as it is an assertion of presence: "Mortal we may be, / But we are still here!" And the more assertive your comments are, the harder it is to deny your presence. Abusing people whose (often imagined) views you disdain is like taking the silencer off your Vespa; writing in all caps is like turning your radio up to full volume.

Which raises the question of why so many people feel so strongly the need to announce their presence in the internet's comboxes. Surely not for the same reason that people like me write blog posts!

Thursday, April 10, 2014

decline of the liberal arts?

Minding the Campus sets forth a question:

As students and their families rethink the value of the liberal arts, defenders of traditional education are understandably ambivalent. On the one hand, the diminished stature of the liberal arts seems long overdue, and this critical reevaluation might lead to thoughtful reform. On the other, this reevaluation might doom the liberal arts to irrelevance. To that end, Minding the Campus asked a list of distinguished thinkers a straightforward question: should we be unhappy that the liberal arts are going down? Here are responses from Heather Mac Donald, Thomas Lindsay, and Samuel Goldman.

Three more answers, by Patrick Deneen, Peter Wood, and Peter Lawler, follow here. Each respondent agrees with the question’s premise, though there’s a partial dissent from Samuel Goldman, who notes that “liberal education can't be reduced to colleges, course offerings, or graduate programs” — liberal learning and the experience of great art happen outside these formal settings, and we don't have any reliable ways of measuring how often.

Goldman’s point is a good one, and I’d like to imitate its spirit and inquire more seriously into the assumptions underlying the conversation.

First of all, we might note that enrollment in university humanities courses is not declining, despite everything you’ve heard. But the respondents to the Minding the Campus question would not be consoled by this news, since, as several of them point out, humanities and arts programs have all too frequently abandoned the teaching of traditional great books and masterpieces of art — or at best have made the study of such works optional.

But even if that’s true, it may not support the claim that “the liberal arts are going down.” Consider the things we’d need to know before we could draw that conclusion:

  • What are the geographical boundaries of our inquiry? Are we looking just at American colleges and universities, or are we considering what’s happening in other countries?
  • What are the temporal boundaries of our inquiry? If we’re comparing the educational situation today to 1960 the trends may look rather different than if we’re comparing our present moment to 1600.
  • What does a student need to be doing in order to qualify as studying the liberal arts in some traditional form? Do they need to be majoring in a liberal-arts program that follows some (to-be-defined) traditional model? Or would an engineering major who had participated in a core curriculum like that at Columbia, or a pre-law major here at Baylor who took the pre-law Great Texts track, count?
  • What population are we looking at? We might ask this question: “What percentage of full-time college and university students are pursuing a traditional liberal arts curriculum?” But we might also ask this question: “What percentage of a given country’s 18-to-22-year-olds are pursuing a traditional liberal arts curriculum?”

That last point seems to be especially important. If we were to ask the second question, then we’d have to say that a higher percentage of young Americans are studying traditional liberal arts than are doing so in almost any other country, or have done so at almost any point in human history — would we not? When the traditional liberal-arts curriculum that Minding the Campus prefers was dominant, a far smaller percentage of Americans attended college or university. So maybe the liberal arts — however traditionally you define them — aren’t going down at all, if we take the whole American population and an expansive time-frame into account. The question just needs to be framed much more precisely.

Wednesday, April 9, 2014

simplification where it doesn't belong

This is not a topic to which I can do justice in a single post, or even, I expect, a series of posts, but let me make this a placeholder and a promise of more to come. I want to register a general and vigorous protest against thought-experiments of the Turing test and Chinese room variety. These two experiments are specific to debates about intelligence (natural or artificial) and consciousness (ditto), but may also be understood as subsets of a much larger category of what we might call veil-of-ignorance strategies. These strategies, in turn, imitate the algebraic simplification of expressions.

The common method here goes something like this: when faced with a tricky philosophical problem, it's useful to strip away all the irrelevant contextual details so as to isolate the key issues involved, which then, so isolated, will be easier to analyze. The fundamental problem with this method is its assumption that we know in advance which elements of a complex problem are essential and which are extraneous. But we rarely know that; indeed, we can only know that if we have already made significant progress towards solving our problem. So in “simplifying” our choices by taking an enormous complex of knowledge — the broad range of knowledge that we bring to all of our everyday decisions — and placing almost all of it behind a veil of ignorance, we may well be creating a situation so artificially reductive that it tells us nothing at all about the subjects we’re inquiring into. Moreover, we are likely to be eliminating not just what we explicitly know but also the tacit knowledge whose vital importance to our cognitive experience Michael Polanyi has so eloquently emphasized.

By contrast to the veil-of-ignorance approach, consider its near-opposite, the approach to logic and argumentation developed by Stephen Toulmin in his The Uses of Argument. For Toulmin, the problem with most traditional approaches to logic is this very tendency to simplification I’ve been discussing — a simplification that can produce, paradoxically enough, its own unexpected complications and subtleties. Toulmin says that by the middle of the twentieth century formal philosophical logic had become unfortunately disconnected from what Aristotle had been interested in: “claims and conclusions of a kind that anyone might have occasion to make.” Toulmin comments that “it may be surprising to find how little progress has been made in our understanding of the answers in all the centuries since the birth, with Aristotle, of the science of logic.”

So Toulmin sets out to provide an account of how, in ordinary life as well as in philosophical discourse, arguments are actually made and actually received. Aristotle had in one sense set us off on the wrong foot by seeking to make logic a “formal science — an episteme.” This led in turn, and eventually, to attempts to make logic a matter of purely formal mathematical rigor. But to follow this model is to abstract arguments completely out of the lifeworld in which they take place, and leave us nothing to say about the everyday debates that shape our experience. Toulmin opts instead for a “jurisprudential analogy”: a claim that we evaluate arguments in the same complex, nuanced, and multivalent way that evidence is weighed in law. When we evaluate arguments in this way we don't get to begin by ruling very much out of bounds: many different kinds of evidence remain in play, and we just have to figure out how we see them in relation to one another. Thus Toulmin re-thinks “the uses of argument” and what counts as responsible evaluation of the arguments that we regularly confront.

It seems to me that when we try to understand intelligence and consciousness we need to imitate Toulmin’s strategy, and that if we don’t we are likely to trivialize and reduce human beings, and the human lifeworld, in pernicious ways. It’s for this reason that I would like to call for an end to simplifying thought experiments. (Not that anyone will listen.)

So: more about all this in future posts, with reflections on Mark Halpern’s 2006 essay on “The Trouble with the Turing Test”.

Tuesday, April 8, 2014

my e-reader wishlist

My ideal e-reader would have

  • the battery life of a Kindle Paperwhite
  • the weight of a Kindle Paperwhite
  • the screen resolution of a Kindle Fire (I won’t even ask for it to be color)
  • the (free!) cellular connectivity of a Kindle Keyboard
  • the hardware keyboard and navigating system of a Kindle Keyboard
  • the highlighting/note-taking UI of the Kindle for iOS app
  • the glare-freeness of a paper codex (or, failing that, of a Kindle Paperwhite)

Just a few random comments on this wishlist:

1) I want “the hardware keyboard and navigating system of a Kindle Keyboard” because touchscreens handle such actions pretty badly. (Much of what follows goes for typing on a virtual keyboard also.) The chief problem is that when you’re trying to highlight using your finger — and to an extent even when you use a stylus, which, even if it helps, is one more thing to have to deal with — that finger blocks your view of what you’re highlighting, which means that you have to guess whether you’re hitting your target or not, or else pivot both your head and your finger to try to get a better look. With all touchscreen devices I am regularly overshooting or undershooting the terminus ad quem of my highlight. With the good old Kindle Keyboard your hands are not on the screen, so you can see it fully and clearly — and you have the additional benefit of being able to highlight without moving your hands from their reading position: just shift your thumb a bit and you’re there.

2) In commending the Paperwhite, Amazon says “Unlike reflective tablet and smartphone screens, the latest Kindle Paperwhite reads like paper — no annoying glare, even in bright sunlight.” This is not true. The Paperwhite screen is far, far less reflective than a glass tablet screen — but it’s still considerably more reflective than a paper page, and when I’m reading on it outdoors, which I love to do, I often have to adjust the angle of the screen to eliminate glare.

3) The latest Kindle Fire is an absolutely beautiful piece of hardware: solidly built, pleasant to hold, significantly lighter than earlier versions, and featuring a glorious hi-res screen. It also responds considerably faster than the Paperwhite, whose lagginess can occasionally be frustrating. But the software is mediocre at best. The video app works flawlessly, but reading can be frustrating if you’re doing any highlighting or annotating. It’s hard to select text for annotating, and if a highlight crosses to the next “page” it can sometimes take me three or four tries to get the selection to end properly — often I end up selecting the whole of the next page, with no way to back up, or else I get a pop-up dictionary definition of something on the page. For someone who interacts a lot with books it’s maddening. Also, the Kindle version of the Instapaper app, which I like to use to read online posts and articles, is really buggy: when you try scrolling through an article it flickers and shudders madly, and it crashes too frequently. The overall reading experience is much, much better when using the Kindle app and Instapaper app on iOS. (Also, the iOS Instapaper app plays really nicely with other services, like Tumblr and Pinboard.)

All this said, I’d be happy enough with a Kindle Paperwhite with a hi-res screen. I read outside a lot, and when I do that device is my only (digital) option, so I just wish its text were nicer to look at and easier to navigate through. I suppose I could also wish for a Kindle Fire or iPad Mini with a totally nonreflective screen, but as far as I know that’s impossible.

testing intelligence — or testing nothing?

Tim Wu suggests an experiment:

A well-educated time traveller from 1914 enters a room divided in half by a curtain. A scientist tells him that his task is to ascertain the intelligence of whoever is on the other side of the curtain by asking whatever questions he pleases.

The traveller’s queries are answered by a voice with an accent that he does not recognize (twenty-first-century American English). The woman on the other side of the curtain has an extraordinary memory. She can, without much delay, recite any passage from the Bible or Shakespeare. Her arithmetic skills are astonishing — difficult problems are solved in seconds. She is also able to speak many foreign languages, though her pronunciation is odd. Most impressive, perhaps, is her ability to describe almost any part of the Earth in great detail, as though she is viewing it from the sky. She is also proficient at connecting seemingly random concepts, and when the traveller asks her a question like “How can God be both good and omnipotent?” she can provide complex theoretical answers.

Based on this modified Turing test, our time traveller would conclude that, in the past century, the human race achieved a new level of superintelligence. Using lingo unavailable in 1914 (it was coined later by John von Neumann), he might conclude that the human race had reached a “singularity” — a point where it had gained an intelligence beyond the understanding of the 1914 mind.

The woman behind the curtain is, of course, just one of us. That is to say, she is a regular human who has augmented her brain using two tools: her mobile phone and a connection to the Internet and, thus, to Web sites like Wikipedia, Google Maps, and Quora. To us, she is unremarkable, but to the man she is astonishing. With our machines, we are augmented humans and prosthetic gods, though we’re remarkably blasé about that fact, like anything we’re used to. Take away our tools, the argument goes, and we’re likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.

No matter which side you take in this argument, you should take note of its terms: that “intelligence” is a matter of (a) calculation and (b) information retrieval. The only point at which the experiment even verges on some alternative model of intelligence is when Wu mentions a question about God’s omnipotence and omnibenevolence. Presumably the woman would do a Google search and read from the first page that turns up.

But what if the visitor from 1914 asks for clarification? Or wonders whether the arguments have been presented fairly? Or notes that there are more relevant passages in Aquinas that the woman has not mentioned? The conversation could come to a sudden and grinding stop, the illusion of intelligence — or rather, of factual knowledge — instantly dispelled.

Or suppose that the visitor says that the question always reminds him of the Hallelujah Chorus and its invocation of Revelation 19:6 — “Alleluia: for the Lord God omnipotent reigneth” — but that that passage rings hollow and bitter in his ears since his son was killed in the first months of what Europe was already calling the Great War. What would the woman say then? If she had a computer instead of a smartphone she could perhaps see if ELIZA is installed — or she could just set aside the technology and respond as an empathetic human being. Which a machine could not do.

Similarly, what if the visitor had simply asked “What is your favorite flavor of ice cream?” Presumably then the woman would just answer his question honestly — which would prove nothing about anything. Then we would just have a person talking to another person, which we already know that we can do. “But how does that help you assess intelligence?” cries the exasperated experimenter. What’s the point of having visitors from 1914 if they’re not going to stick to the script?

These so-called “thought experiments” about intelligence deserve the scare-quotes I have just put around the phrase because they require us to suspend almost all of our intelligence: to ask questions according to a narrowly limited script of possibilities, to avoid follow-ups, to think only in terms of what is calculable or searchable in databases. They can tell us nothing at all about intelligence. They are pointless and useless.