Text Patterns - by Alan Jacobs

Friday, September 18, 2015

coming attractions

Regular readers of this blog may remember some posts from a few months ago revolving around the 79 Theses on Technology that The Infernal Machine graciously published. I got some good feedback on those theses — some positive, some critical, all useful — which allowed me to deepen and extend my thinking, and to discern ways in which it needed to be deepened and extended still further.

So when Fred Appel at Princeton University Press, who commissioned and edited my biography of the Book of Common Prayer, asked if I might be interested in turning an expanded version of those theses into a short book, I jumped at the chance. And I'm working on that now.

But many of the ideas that I might normally be developing on this blog need to go into that book — which will probably mean a period of silence here, until the book is completed. (I keep reading things and thinking Hey, I want to write a post about that wait a minute I can't write a post about that.)

After I have completed Short But As Yet Unnamed Book Of Theses On Technology, I hope to return to my much bigger book on Christian intellectuals and World War II — which has been on hiatus because I ran into some intractable organizational problems. But having taken a few months away from the project, I can already begin to see how I might resume and reconstruct it.

In the longer term, I hope to return to this space to develop my thoughts on the technological history of modernity — and perhaps turn those into a book as well.

Those are the plans, anyway. I may post here from time to time, but not very often until the Theses are done. Please wish me well!

two visions of higher education?

Kwame Anthony Appiah on the two visions of American higher education:

One vision focuses on how college can be useful — to its graduates, to employers and to a globally competitive America. When presidential candidates talk about making college more affordable, they often mention those benefits, and they measure them largely in dollars and cents. How is it helping postgraduate earnings, or increasing G.D.P.? As college grows more expensive, plenty of people want to know whether they’re getting a good return on their investment. They believe in Utility U.

Another vision of college centers on what John Stuart Mill called “experiments in living,” aimed at getting students ready for life as free men and women. (This was not an entirely new thought: the “liberal” in “liberal education” comes from the Latin liberalis, which means “befitting a free person.”) Here, college is about building your soul as much as your skills. Students want to think critically about the values that guide them, and they will inevitably want to test out their ideas and ideals in the campus community. (Though more and more students are taking degrees online, most undergraduates will be on campus a lot of the time.) College, in this view, is where you hone the tools for the foundational American project, the pursuit of happiness. Welcome to Utopia U.

Together, these visions — Utility and Utopia — explain a great deal about modern colleges and universities. But taken singly, they lead to very different metrics for success.

Appiah walks through this tired old dichotomy only in order to say: Why not both?

(To be clear: like Appiah, I am only addressing the American context. Things can be different elsewhere, as, for example, in Japan, where a government minister has just asked all public universities to eliminate programs in the social sciences and humanities, including law and economics, and to focus instead on “more practical vocational education that better anticipates the needs of society.”)

A good general rule: when someone constructs an argument of this type — Between A and B there seems to be a great gulf fixed, but I have used my unique powers of insight to discern that this is a false dichotomy and we need not choose! — it is unlikely that they have described A fairly or described B fairly or described the conflict between them accurately.

So let's try to parse this out with a little more care:

  • Some colleges (mainly the for-profit ones) promise nothing but utility.
  • Some colleges (say, St. John's in Annapolis and Santa Fe) promise nothing but what Appiah calls Utopia, that is, an environment for pursuing essential and eternal questions.
  • Most colleges follow the example of Peter Quill, Star-Lord, and promise a bit of both.
  • Most students want, or at least claim to want, a bit of both. A few are driven primarily by intellectual curiosity, but they'd love to believe that a course of study organized around such explorations can also lead to a decent job after graduation; a great many more are primarily concerned to secure good job opportunities, but also want to confront interesting ideas and beautiful works of art. (Many of my best students in the Honors College at Baylor are pre-med, but love taking literature and philosophy courses for just this reason.)

Given this general state of affairs, with its range of sometimes-complementary and sometimes-conflicting forces at work, Appiah's framing is simplistic — and also serves as a way to avoid the really key question for the coming years: Who will pay, and what will they pay for?

Tuesday, September 15, 2015

a public amateur's story

There is so much that’s wonderful about Sara Hendren’s talk here that I can’t summarize it — and wouldn’t if I could. Please just watch it, and watch to the end, because in the last few minutes of the talk things come together in ways that will be unexpected to those who don't know Sara. Also be sure to check out Abler.

One of Sara’s models is the artist Claire Pentecost, who sees herself as a public amateur:

One of the things I’m attached to is learning. And one of the models I’ve developed theoretically is that of the artist as the public amateur. Not the public intellectual, which is usually a position of mastery and critique, but the public amateur, a position of inquiry and experimentation. The amateur is the learner who is motivated by love or by personal attachment, and in this case, who consents to learn in public so that the very conditions of knowledge production can be interrogated. The public amateur takes the initiative to question something in the province of a discipline in which she is not conventionally qualified, acquires knowledge through unofficial means, and assumes the authority to offer interpretations of that knowledge, especially in regard to decisions that affect our lives.

Public amateurs can have exceptional social value, not least because they dare to question experts who want to remain unquestioned simply by virtue of accredited expertise; public amateurs don't take “Trust me, I know what I’m doing” as an adequate self-justification. But perhaps the greatest contribution public amateurs make to society arises from their insistence — it’s a kind of compulsion for them — on putting together ideas and experiences that the atomizing, specializing forces of our culture try to keep in neatly demarcated compartments. This is how an artist and art historian ends up teaching at an engineering school.

There are two traits that, if you wish to be a public amateur, you simply cannot afford to possess. You can’t insist on having a plan and sticking with it, and you can’t be afraid of making mistakes. If you’re the sort of person whose ducks must always be in a neat, clean row, the life of the public amateur is not for you. But as the personal story Sara tells near the end of her talk indicates, sometimes life has a way of scrambling all your ducks. When that happens, you can rage vainly against it; or you can do what Sara did.

Monday, September 14, 2015

The Grand Academy of Silicon Valley

After writing today’s post I couldn’t shake the notion that all this conversation about simplifying and rationalizing language reminded me of something, and then it hit me: Gulliver’s visit to the grand academy of Lagado.

A number of the academicians Gulliver meets there are deeply concerned with the irrationality of language, and pursue schemes to adjust it so that it fits their understanding of what science requires. One scholar has built a frame composed of a series of turnable blocks. He has some of his students turn the handles and others write down the sentences produced (when sentences are produced, that is).

But more interesting in light of what Mark Zuckerberg wants are those who attempt to deal with what, in Swift’s time, was called the res et verba controversy. (You can read about it in Hans Aarsleff’s 1982 book From Locke to Saussure: Essays on the Study of Language and Intellectual History.) The controversy concerned the question of whether language could be rationalized in such a way that there is a direct one-to-one match between things (res) and words (verba). This problem some of the academicians of Lagado determined to solve — along with certain other problems, especially including death — in a very practical way:

The other project was, a scheme for entirely abolishing all words whatsoever; and this was urged as a great advantage in point of health, as well as brevity. For it is plain, that every word we speak is, in some degree, a diminution of our lungs by corrosion, and, consequently, contributes to the shortening of our lives. An expedient was therefore offered, “that since words are only names for things, it would be more convenient for all men to carry about them such things as were necessary to express a particular business they are to discourse on.” And this invention would certainly have taken place, to the great ease as well as health of the subject, if the women, in conjunction with the vulgar and illiterate, had not threatened to raise a rebellion unless they might be allowed the liberty to speak with their tongues, after the manner of their forefathers; such constant irreconcilable enemies to science are the common people. However, many of the most learned and wise adhere to the new scheme of expressing themselves by things; which has only this inconvenience attending it, that if a man’s business be very great, and of various kinds, he must be obliged, in proportion, to carry a greater bundle of things upon his back, unless he can afford one or two strong servants to attend him. I have often beheld two of those sages almost sinking under the weight of their packs, like pedlars among us, who, when they met in the street, would lay down their loads, open their sacks, and hold conversation for an hour together; then put up their implements, help each other to resume their burdens, and take their leave.

But for short conversations, a man may carry implements in his pockets, and under his arms, enough to supply him; and in his house, he cannot be at a loss. Therefore the room where company meet who practise this art, is full of all things, ready at hand, requisite to furnish matter for this kind of artificial converse.

Rationalizing language and extending human life expectancy at the same time! Mark Zuckerberg and Ray Kurzweil, meet your great forebears!

Facebook, communication, and personhood

William Davies tells us about Mark Zuckerberg's hope to create an “ultimate communication technology,” and explains how Zuckerberg's hopes arise from a deep dissatisfaction with and mistrust of the ways humans have always communicated with one another. Nick Carr follows up with a thoughtful supplement:

If language is bound up in living, if it is an expression of both sense and sensibility, then computers, being non-living, having no sensibility, will have a very difficult time mastering “natural-language processing” beyond a certain rudimentary level. The best solution, if you have a need to get computers to “understand” human communication, may be to avoid the problem altogether. Instead of figuring out how to get computers to understand natural language, you get people to speak artificial language, the language of computers. A good way to start is to encourage people to express themselves not through messy assemblages of fuzzily defined words but through neat, formal symbols — emoticons or emoji, for instance. When we speak with emoji, we’re speaking a language that machines can understand.

People like Mark Zuckerberg have always been uncomfortable with natural language. Now, they can do something about it.

I think we should be very concerned about this move by Facebook. In these contexts, I often think of a shrewd and troubling comment by Jaron Lanier: “The Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?” In this sense, the degradation of personhood is one of Facebook's explicit goals, and Facebook will increasingly require its users to cooperate in lowering their standards of intelligence and personhood.

Friday, September 11, 2015

on microaggressions and administrative power

Let’s try to put a few things together that need to be put together.

First, read this post by Jonathan Haidt excerpting and summarizing this article on the culture of campus microaggressions. A key passage:

Campbell and Manning describe how this culture of dignity is now giving way to a new culture of victimhood in which people are encouraged to respond to even the slightest unintentional offense, as in an honor culture. But they must not obtain redress on their own; they must appeal for help to powerful others or administrative bodies, to whom they must make the case that they have been victimized. It is the very presence of such administrative bodies, within a culture that is highly egalitarian and diverse (i.e., many college campuses) that gives rise to intense efforts to identify oneself as a fragile and aggrieved victim. This is why we have seen the recent explosion of concerns about microaggressions, combined with demands for trigger warnings and safe spaces, that Greg Lukianoff and I wrote about in The Coddling of the American Mind.

Now, take a look at this post by Conor Friedersdorf illustrating how this kind of thing works in practice. Note especially the account of an Oberlin student accused of microaggression and the way the conflict escalates.

And finally, to give you the proper socio-political context for all this, please read Freddie deBoer’s outstanding essay in the New York Times Magazine. Here’s an absolutely vital passage:

Current conditions result in neither the muscular and effective student activism favored by the defenders of current campus politics nor the emboldened, challenging professors that critics prefer. Instead, both sides seem to be gradually marginalized in favor of the growing managerial class that dominates so many campuses. Yes, students get to dictate increasingly elaborate and punitive speech codes that some of them prefer. But what could be more corporate or bureaucratic than the increasingly tight control on language and culture in the workplace? Those efforts both divert attention from the material politics that the administration often strenuously opposes (like divestment campaigns) and contribute to a deepening cultural disrespect for student activism. Professors, meanwhile, cling for dear life, trying merely to preserve whatever tenure track they can, prevented by academic culture, a lack of coordination and interdepartmental resentments from rallying together as labor activists. That the contemporary campus quiets the voices of both students and teachers — the two indispensable actors in the educational exchange — speaks to the funhouse-mirror quality of today’s academy.

I wish that committed student activists would recognize that the administrators who run their universities, no matter how convenient a recipient of their appeals, are not their friends. I want these bright, passionate students to remember that the best legacy of student activism lies in shaking up administrators, not in making appeals to them. At its worst, this tendency results in something like collusion between activists and administrators.

This is brilliantly incisive stuff by Freddie, and anyone who cares about the state of American higher education needs to reflect on it. When students demand the intervention of administrative authority to solve every little conflict, they end up simply reinforcing a power structure in which students and faculty alike are stripped of moral agency, in which all of us in the university — including the administrators themselves, since they’re typically reading responses from an instruction manual prepared in close consultation with university lawyers — are instruments in the hands of a self-perpetuating bureaucratic regime. Few social structures could be more alien to the character of true education.

Friedersdorf’s post encourages us to consider whether these habits of mind are characteristic of society as a whole. That seems indubitable to me. When people in the workplace routinely make complaints to HR officers instead of dealing directly with their colleagues, or call the police when they see kids out on their own rather than talking to the parents, they’re employing the same strategy of enlisting Authority to fight their battles for them — and thereby consolidating the power of those who are currently in charge. Not exactly a strategy for changing the world. Nor for creating a minimally responsible citizenry.

In a fascinating article called “The Japanese Preschool’s Pedagogy of Peripheral Participation,” Akiko Hayashi and Joseph Tobin describe a twofold strategy commonly deployed in Japan to deal with preschoolers’ conflicts: machi no hoiku and mimamoru. The former means “caring by waiting”; the latter means “standing guard.” When children come into conflict, the teacher makes sure the children know that she is present, that she is watching — she may even add, kamisama datte miterun, daiyo (the gods too are watching) — but she does not intervene unless absolutely necessary. Even if the children start to fight, she may not intervene; that will depend on whether a child is genuinely attempting to hurt another or the two are halfheartedly “play-fighting.”

The idea is to give children every possible opportunity to resolve their own conflicts — even past the point at which it might, to an American observer, seem that a conflict is irresolvable. This requires patient waiting; and of course one can wait too long — just as one can intervene too quickly. The mimamoru strategy is meant to reassure children that their authorities will not allow anything really bad to happen to them, though perhaps some unpleasant moments may arise. But those unpleasant moments must be tolerated, else how will the children learn to respond constructively and effectively to conflict — conflict which is, after all, inevitable in any social environment? And if children don't begin to learn such responses in preschool, when will they learn them? Imagine if at university, or even in the workplace, they had developed no such abilities and were constantly dependent on authorities to ease every instance of social friction. What a mess that would be.

UPDATE: Please see Josh's comment below.

Wednesday, September 9, 2015

don't just quit, quitpiece!

I think Ian Bogost is correct about the essential boringness of quitpieces — essays or articles or posts by former academics explaining why they bailed out — but it’s hard, for me anyway, not to comment on this one by Oliver Lee, because of Lee’s almost charmingly absolute self-regard.

While much of the post is supposed to describe what Lee learned about academe in his time as a professor, his thoughts on the State of Higher Education are disjointed and incoherent. One example will suffice: he goes in one sentence from counseling liberal-arts students to skip college in favor of watching relevant YouTube videos to declaring that “Online education isn't the solution,” and doesn't even notice the disconnect. But his prose becomes vibrant when he’s describing ... well, himself.

It turns out that Oliver Lee was absolutely fantastic as an academic. He “launched several digital humanities initiatives”; he speaks of his “effusive student evaluations”; he “coached [his] university's legal debate team to a national championship bid” — though I was sort of sad to see the vague word “bid” tacked on to the end there, since by this point I was expecting nothing less than, you know, an actual championship. But the self-celebration goes on for quite a while.

But in every Eden there’s a serpent, or several. Lee became the object of “sniping” by his colleagues; he was beset by “politics”; one of his projects was “derided as bewildering and gimmicky.” Even his students let him down: immediately after telling us what an excellent lecturer he was — “By professor standards, which admittedly aren't that high, I could rock the mic” (that apparently humble caveat isn’t really a caveat at all, since the only context of this whole essay is academia, and it just gives him a way to demean professors) — he describes how a friend visiting his class was distracted by a student watching Breaking Bad on a laptop. It seems pretty clear that if someone hadn’t told him, Lee never would have guessed that some students failed to notice his mic-rocking abilities.

By the time I got to the end of Lee’s personal narrative I had developed a very strong suspicion that what he may really have been saying was “You can’t fire me, I quit.”

But in any case, I’m reminded of this feature of the quitpiece genre: it is almost always immensely self-congratulatory. People will describe in detail their levels of commitment and energy and the superb work they elicited from their students, and will imply or say explicitly that they were targeted by colleagues or department chairs precisely because they did their work so well. If they acknowledge that they were criticized, such criticisms are invariably dismissed as motivated either by jealousy or by fantastically misplaced priorities.

Within the moral economy of the quitpiece genre — which is not, I suspect, reliably indicative of why most people who leave academia do so — to walk away from an academic job is to turn your back on an institution that doesn't deserve you, isn’t good or pure or rightly-ordered enough for you. I’m longing for a quitpiece that says “I left my job as a professor because I didn't like it and wasn’t very good at it.”

Tuesday, September 8, 2015

academic publishers and greed

This post by “Anonymous Academic” on what the author thinks of as price-gouging by academic publishers is ... possibly worrisome, but also possibly misleading. Yes, books that will only be bought by academic libraries tend to be outrageously expensive, but:

(a) They cost the publishers money to produce, and in many cases there is simply no possibility of a larger market, so those publishers need to recoup costs;

(b) Once those books are published they are (barring cultural catastrophe) permanently available to professors, students, and other interested parties who need them, so the publishers do provide something of a service for the scholarly community.

Of course, some academic books are certainly overpriced; but without learning more than I currently know about editing, printing, and distribution costs, I’m not sure which ones fit that bill. Anonymous Academic’s certainty that such publishers are “greedy” seems misplaced to me; I’m pretty sure that if I were driven by greed, academic publishing would not be the business I’d go into....

Anyway — speaking purely autobiographically here — I have always wanted to reach the largest audience possible, consistent with being a responsible scholar. And that’s why I think it’s largely been a good thing that a number of the stronger academic publishers — including three that I have written for or am writing for, the university presses of Oxford, Princeton, and Harvard — have in recent years become hybrids: still committed to the scholarly standards embodied by peer review, but also eager to find the largest possible audience for their books. They can even produce the occasional bestseller.

Not every scholarly press can do this, and publishers aren't infallible in deciding which books can be effectively promoted and how to promote them. But the picture of greedy, price-gouging academic presses painted in that post is not, I think, a very accurate one.

Monday, September 7, 2015

Ferrante's tragedy

Once you are well inside the world of Elena Ferrante’s just-completed quartet — what English-language reviewers are calling the Neapolitan novels but what is really a single long novel published in four volumes — you are not likely to escape. The books are utterly compelling and the world they create as real as real can be. Somewhere Iris Murdoch writes of the kind of story to which you simply say, “It is so.” Ferrante has written that kind of story. I had been telling myself for some time that I simply no longer have the tolerance for contemporary realistic fiction. Then I started this story and thought: Oh. I just haven’t come across anything this masterful in a while.

But one thing I find curious: the universal description of these books as being centrally the story of a friendship. I think they are much better described as the story of an overwhelmingly intense, identity-forming, lifelong hatred. Much has been made of the ambiguity of the adjective in the first installment's title: L'amica geniale, The Genius Friend, or, in Ann Goldstein's English translation, My Brilliant Friend — it is what each girl thinks of the other. But not enough attention, I think, has been paid to the deeply misleading, or at best ambivalent, character of the noun Amica.

There is no doubt that Elena, the narrator, is fascinated by, obsessed with, in need of, Lila; and Lila is probably just as obsessed by Elena — though Lila's mind remains to some extent obscure to us, in part because Elena tells this story and no one can ever enter fully into the mind of another, and in part because Elena, I think, does not want us to have full access to Lila's inner world and resists entering that world herself. When Elena gets access to documents that reveal much of Lila's thinking, she describes their contents rather sketchily, and then destroys them, unable to remain any longer in their presence.

Elena's destruction of Lila's documents — though surely foreseen by Lila — is just one example of what may be the novel's chief recurring theme: that neither woman ever misses an opportunity to harm the other, to hurt as badly as she can possibly hurt without ending their relationship forever. (To act nastily enough to cause a separation of several years, that each of them will do.) Even when they help one another, such assistance serves to acquire leverage that is later used for cruelty.

Each woman is to the other what the Ring is to Gollum: “He hated it and loved it, as he hated and loved himself. He could not get rid of it. He had no will left in the matter.” So Elena: “I loved Lila. I wanted her to last. But I wanted it to be I who made her last.” This kind of relationship cannot be described simply, but I don't think there’s any meaningful sense in which it can be called a friendship.

The novel can and should be read as, among other things, an appropriately scalding, scarifying indictment of a society that made it impossible for Lenù and Lila to be genuine friends. They were made to be friends, I would say, deeply complementary personalities, helps and correctives for one another. But the world they are brought up in — with its harsh, rigid codes of masculinity and femininity untempered and uncorrected by a Christian message (despite the presence and apparent authority of the Church), with its relentlessly soul-grinding social and economic injustices that generate either defeatism or wild grasping attempts to escape — deforms their connection almost from the start, perverts it, twists it.

On the first disastrous day of Lila’s disastrous marriage, her brother says to her new husband, “She was born twisted and I’m sorry for you.” The new husband replies, “Twisted things get straightened out.” But the overwhelming message of the novels is that they don't. This is not a story of a friendship. It is the tragedy of what should have been a friendship but never was.

Saturday, September 5, 2015


No surprises here, of course, but when you ask people who teach creative writing in American universities what books they assign, almost all of them assign books written in the past few years. A couple of people reach all the way back to Chinua Achebe, Saul Bellow, and Jean Rhys, and one bold trailblazer — Joel Brouwer, who teaches at my alma mater, the University of Alabama — actually assigns Homer and Virgil. But the rest don't dare look any further back than yesterday, and, moreover, the great majority of the texts they assign are by Americans.

This studied avoidance of the past, of the world — of anything that isn't immediate and local — is bad for the future of fiction and bad for the American mind more generally. The default assumption that our writers can be valid only when they're working in the idioms of their peers is something close to a death sentence for artistic creativity. Looking at reading lists like this, I can't help thinking that they play a significant role in maintaining the dreary sameness that is so characteristic of the fiction and poetry that come out of contemporary MFA programs.

Friday, September 4, 2015

an imaginary student replies to Freddie deBoer

Freddie deBoer imagines a kind of universal trigger warning, perhaps to be issued to students on their arrival at college:

You’re going to be exposed to stuff you don’t like at college. We will try to give you a heads up about the stuff that might upset you, but what is considered potentially offensive is an inherently political, value-laden question, and we aren’t always going to agree with your prior beliefs about that question. We cannot guarantee that everything you might be offended by will come with a warning, and we are under no obligation to attempt to provide one. We will try to work with you with compassion and respect, but ultimately it’s your responsibility to deal with the curriculum that we impose, and not our responsibility to make sure that it doesn’t bother you. If you can’t handle that, you don’t belong in college.

That’s very well said, and I agree with pretty much every word — but I think that a great many students in almost all of our universities will dissent from its premises. They may not be able to articulate their dissent clearly; they may not even consciously formulate it; but I think a dissent is implicit in much of what I read about the various trigger-warnings controversies.

It might go something like this:

You speak of “the curriculum [you] impose,” but I deny that you have the right to impose anything. I am passing through this place, headed for the next stage of my life — possibly graduate education in some form or another, more probably a job — and I am paying you to prepare me for that next stage. In short, we have a business contract in which I am your client, and it is your job to serve what I perceive my needs to be, not what you may happen to think they are. It’s not as though we’re living in that long-ago age when universities were considered repositories of timeless wisdom and professors custodians of that wisdom. You faculty are employees of an ideological state apparatus in a neoliberal regime that constitutes itself by a series of implied or explicit contracts in which goods are exchanged for fees. Please stop acting like this is the University of Paris in the age of Aquinas and we’re all seeking transcendent wisdom. I control my own values and am not even interested in yours, much less willing to be subservient to them. So do the job I am paying you to do and shut up about all that other crap.

Wednesday, September 2, 2015

aging and literary taste

Charles McGrath writes, “Who isn’t a critic? We are born picky and judgmental, and as we get older we only become more opinionated and more sure of ourselves.” Is that true? I think I’m less sure of myself now than I ever have been.

Or maybe that’s not quite right. My tastes are perhaps more limited, even fixed, than they once were — I am more likely to say, of a book or a movie or a record that people are praising, “Maybe it’s as good as they say, but I pass” — and, moreover, to feel comfortable with that decision and untempted to revisit it. But I’m not inclined to think that my tastes have become increasingly precise, ever more sophisticated; rather, I’m simply aware of the passage of time, the shrinking of the years in front of me, and am less prone to devote lots of time and energy to things that (experience teaches me) I am not likely to find rewarding.

Might I miss some cool stuff? Indeed. And not just “might” but “will.” But here’s the thing about being fifty-six: I know I’ve missed lots of cool stuff. And I’m still here, and not obviously worse off for it. That makes it easier to go with my gut — to grab what looks good and to ignore what doesn’t — but not because I’m smarter than I used to be or more discerning. It’s just a matter of reckoning with the brevity of life. All in all, I’d rather read Jane Austen again.

That said, the next book on my list is the first book of Elena Ferrante’s Neapolitan tetralogy. So I’m not only re-reading the faves.

Monday, August 31, 2015

the American university and resource dependence

We've heard a lot in recent years about the decline in American states' support for higher education — which has indeed been happening — with the implicit or explicit corollary worry that this decline is leading to the privatization of the university, the subjugation of academe to the demands of the marketplace, etc. And I don't think those worries are wholly misbegotten. But this post by Beth Popp Berman suggests that there may be something larger to think about: the transfer of universities' resource dependency from state governments to the federal government, thanks to a pretty massive rise in federal student aid — which comes with strings attached, for students and institutions alike. Here's Berman's conclusion: 

If organization theory tells us anything, it’s that resource dependence matters. When, five years down the road, we get a Race to the Top rewarding colleges that meet completion and job placement goals at a given tuition cost, I know where I’ll be looking: at that point in 2002 where higher ed waved goodbye to the states and hello to the feds.

Given the close collaboration of our national government with the world's biggest businesses, it seems unlikely that this development will bring about a rescue from privatization. Rather, the feds are likely to be a very effective instrument for implementing the values and priorities of the market.

Among other things, this means that finding ways to create educational environments that are genuinely intellectually independent, and genuinely countercultural, is just going to get harder. James Poulos has suggested that small, private, online courses may be the future of educational seriousness and genuine innovation. That's an argument worth considering, and I hope eventually to do so here.

Sunday, August 30, 2015

on Aurora

My friend Adam Roberts, whose critical judgment is superb, loved Kim Stanley Robinson’s new novel Aurora; I didn’t. At all. And while such differences in literary experience are inevitable and commonplace — “People who like this sort of thing will find this the sort of thing they like” is the most truthful of all reviews — I’m a little uncomfortable to be so far from Adam in my response.

And that’s because I didn’t like the book. If I had liked it more than Adam did I wouldn’t be bothered; but I’d prefer not to be the sort of reader whose insufficient catholicity of taste, or readerly insensitivity, blocks him from appreciating things that deserve appreciation. But I didn’t care for Aurora, and I think I can say why: I was not moved or convinced by the cultural world it portrays.

Adam writes, “Aurora is a magnificent piece of writing, certainly Robinson’s best novel since his mighty Mars trilogy, perhaps his best ever.” So since he compared it to the Mars trilogy, I will too — even though in one sense that’s unfair, since the Mars books gave Robinson at least three times as many words in which to portray a fictional world. But the stories have a fundamental three-part structure in common:

  1. The decision to send human beings to another world.
  2. How they get there.
  3. What they do when they arrive.

The proportions vary greatly: the Mars trilogy is overwhelmingly about number 3, Aurora more focused on number 2. And you could make an argument that the richer cultural world of the Mars trilogy is a function not only of its greater length but also of its dominant setting. Still, as I read Aurora I kept thinking about the two-dimensionality of the lives of the people living on their ship headed for Tau Ceti. They were all focused on personal relationships, political questions, and the technologies needed to manage life in a strange environment. That’s it. One group of people, living in one biome, had developed a kind of ritual in which they introduce young people to the fact that they are living in a starship … but if any of the other biomic cultures had done something similar, we don’t hear about it. Also, sometimes people play music. But that exhausts the cultural life of the ship — which the people haven’t even named. The ship’s AI suggests that it be called “Ship.” But I cannot imagine that human beings living for generations on a starship wouldn’t name the damned thing.

On Mars, in Robinson’s trilogy, there are poets, and composers, and dramatists — a rich cultural and artistic life. There are serious (and endless, and fascinating) philosophical debates about what they’re doing on Mars and why they’re doing it. Is it too much to expect something of the kind on Aurora’s generational ship? I don’t think so. Czeslaw Milosz writes somewhere — in The Witness of Poetry, I think — about situations of extreme suffering and deprivation in which poetry becomes “as necessary as bread.” And I am persuaded by the governing conceit of Emily St. John Mandel’s Station Eleven (about which I wrote briefly here): that if civilization collapsed people would value all the more the music and drama and poetry that had seemed so frivolous and ancillary in a fully-functioning world.

I’m trying not to spoil Aurora too much here, but I think it’s okay to say that when the ship finally gets to the Tau Ceti system an intense dispute arises about whether the people should stay there or go somewhere else. When some characters are taken aback by the passionate intensity of those who want to stick with the original plan, one person comments, “I do think it helps to think of the stayers as holding a religious position. The Tau Ceti system has been their religion all their lives, they say, and now they are being told that it won’t work here, that the idea was a fantasy. They can’t accept it.”

But I don’t see any evidence in the text that people think or act, or ever thought or acted, in a religious way — about this, or about anything else. Robinson seems to portray them as simply being excited about coming to the end of a long voyage. There doesn’t seem to be much (any) reflection about those thousands of people who were born on the ship and died on the ship — like Israelites who were born in the wilderness and died before reaching the Promised Land. Surely this is something that people would have thought about in the 160 or so years that the ship had been sailing through space, and probably even before they departed Earth. It’s hard for me not to imagine that on such a ship there would be whole philosophical schools — not formal, not professional, but made up of people deeply invested in the key questions. You see something like that in Neal Stephenson’s Seveneves, a book I have also commented on. Yet the people of Aurora seem myopically focused on the immediate and practical; and in that sense they don’t seem fully human to me.

That’s why I didn’t like the book very much.

Friday, August 28, 2015

on difficulty

In this exchange on literary difficulty I think Leslie Jamison gives us something far more useful than Heller.

Here’s Heller:

Recently, when I read Christine Schutt’s short story “You Drive” with a graduate writing class, several of the students complained that they found the story baffling. They couldn’t make out the chronology of the events it described; they weren’t always sure which character was speaking; the story, they concluded, “didn’t work.” The fact that they had trouble following Schutt’s elliptical prose was not in itself a surprise. What did take me aback was their indignation — their certainty that the story’s difficulty was a needless imposition on readerly good will. It was as if any writing that didn’t welcome them in and offer them the literary equivalent of a divan had failed a crucial hospitality test.

The “as if” in that last sentence is doing a lot of work, and rather snide work at that. Why should Heller conclude that her students’ dislike of one story is revelatory of a sense of readerly entitlement, a universal demand that texts “welcome them in and offer them the literary equivalent of a divan”? Maybe she assigned a poor story, and the students would have responded more positively to an equally demanding one that was better-crafted. You can’t tell what people think about “any writing” on the basis of their opinions about a single text. 

It’s easy and natural for teachers to explain every classroom clunker by blaming the inadequacies of their students. It’s also a tendency very much to be resisted.

Jamison, by contrast, approaches the question of difficulty in a much more specific way, and what I like best about her brief narrative is its acknowledgment that a reader might approach a given book with a very different spirit in one set of circumstances — or at one moment of her life — than in another. It’s something I have said and written often: that one need not think that setting a book aside is a permanent and unrevisable verdict on the book — or on oneself. People change; frames of mind and heart come and go; and if a book and a reader happen to find each other, it’s beautiful.

Wednesday, August 26, 2015

Twitter and emotional resilience

It seems to me that one of the most universal and significant differences between young people and their elders is the emotional resilience of the young. Most young people — the damaged always excepted — can plunge into the deepest and wildest waters of their inner lives because they know that they have what it takes to take the buffeting, even be energized by the buffeting, and to recover easily, quickly, completely.

I’ve seen this often with students over the years. I’ve had people come to my office and disintegrate before my eyes, collapse in convulsive weeping — and then, fifteen minutes later, walk out into the world utterly composed and even cheerful. There was a time when I could have done the same. When I was their age and feeling angry, I wanted music that echoed and amplified that anger; when I was deep in melancholy, I would drive the streets at 2 A.M. and listen to Kind of Blue over and over. But looking back on these habits, I think I allowed them because, on some level, I knew I could climb out of the pit when I needed to.

Those days are past. When the world’s rough waters have buffeted you for several decades, you wear down, you lose your resilience. Now if I feel agitated or melancholy, I seek countervailing forces: the more peaceable and orderly music of Bach and Mozart and Handel, the movies of Preston Sturges, the prose of Jane Austen or P. D. James. (Classic mysteries, with their emphasis on finding and purging the sources of social disorder, have become increasingly important to me.) These are coping mechanisms, ways for me to keep my emotional balance.

This morning my Twitter feed was overwhelmed by yet another Twitter tsunami, this one prompted by the murder of two television journalists in Virginia. This one is a little different from the usual, because much of the conversation is centering on people who, with crassly absolute insensitivity, are retweeting footage of the actual murder itself: thanks to the curse of video autoplay, thousands and thousands of people are being confronted by frightening, disturbing scenes that they never wanted to see. But in general it follows the same pattern as all the other tsunamis: hundreds and hundreds of tweets and retweets of the same information, over and over, all day long.

And I think: I don’t need this. I could make some principled, or “principled,” arguments against it — that there's no reason to pay more attention to this murder than any of the several dozen others that will happen in America today, that this is a classic illustration of the "society of the spectacle", that we should follow Augustine's example in denouncing curiositas — but my real problem is that it just makes me very sad and very tired, and I have too much to do to be sad and tired.

And then it occurs to me: maybe Twitter — maybe social media more generally — really is a young person's thing after all. Intrinsically, not just accidentally.

Monday, August 24, 2015

social media triage

We all have to find ways to manage our social-media lives. I have a few rules about engaging with other people, developed over the past several years, which I hold to pretty firmly, though not with absolute consistency.

1) If on Twitter or in blog comments you're not using your real name, I won't reply to you.

2) I never read the comments on any post that appears on a high-traffic online site, even if I have written it.

3) I have Twitter set up so that I typically see replies only from people I follow. Every once in a while I may look through my replies, but honestly, I try not to. So if you're asking me a question on Twitter, I will either never see it or, probably, will see it only some days or weeks after you've asked.

4) If I happen to see that you have tweeted me-wards but I don't know you, I will probably not reply.

Why do I follow these rules? Because my experiences in conversing with strangers online have been about 95% unpleasant. Especially as one reaches what the French call un certain âge, cutting unnecessary losses — conserving intellectual and emotional energy — becomes more important than creating new experiences. At least that's how it's been for me. This is unfortunate for, and perhaps unfair to, people who want to engage constructively; but y'all are greatly outnumbered by the trollish, the snarky, those who reply to things they haven't read, and the pathologically contentious. And in the limited time I have to spend on social media, I prefer to nurture relationships I already have.

I've said some of these things before, but since in the past week I've received three why-didn't-you-answer-my-tweet emails, I thought it might be worthwhile to say them again.

podcasts redux

Perhaps the chief thing I learned from my post on podcasting is that a great many people take “podcast” to mean something like “any non-music audio you can listen to on your smartphone.” Okay, fair enough; the term often is used that way. And I sort of used it that way myself, even though I didn’t really mean to. This made my post less coherent than it ought to have been. 

In more precise usage, a podcast is something like an audio blog post: born digital and distributed to interested parties via web syndication. We commonly distinguish between a magazine article that gets posted online and a blog post, even when the magazine posts the article to its blog and you see it in your RSS reader; similarly, In Our Time and This American Life are radio programs that you can get in podcast form, not podcasts as such. The Mars Hill Audio Journal is an audio periodical and even farther from the podcast model because it isn’t syndicated: you have to purchase and download its episodes — and you should!  (By the way, I couldn’t help smiling at all the people who told me that I should give Mars Hill a try, given this. How did they manage to miss me?) (Also by the way, MHAJ has an occasional podcast: here.)

So clearly I should not have used In Our Time to illustrate a point about podcasts, even if I do typically listen to it in podcast form. My bad.

In Our Time has a great many fans, it seems, and while on one level I understand why, I'm usually frustrated by the show. It typically begins with Melvyn Bragg saying something like, "So Nigel, who was Maimonides?" — to which Nigel, a senior lecturer in Judaic Studies at University College, London, replies, "Maimonides was born...." And then off we go for half an hour of being bludgeoned with basic facts by three academics with poor voices for radio. Only in the last few minutes of the episode might an actual conversation or debate break out. If you don't especially like reading, then I guess this is a reasonably painless way to learn some stuff, but it doesn't do a lot for me.

I also discovered that EconTalk has a great many fans, and indeed, you can learn a good deal on EconTalk about stuff it would be hard to discover elsewhere. But EconTalk is basically people talking on the phone, and the complete lack of production values grates on me.

So, sorting through all these responses, I have come to two conclusions. The first is that for a great many people podcast-listening is primarily a means of downloading information or entertainment to their brains. It's content they want, and the form and quality of presentation don't, for these people, count for a lot.

The second conclusion is that in these matters I have been really, really spoiled by the Mars Hill Audio Journal. Even though it is not a podcast, it is, I now realize, the standard by which I tend to judge podcasts. And they rarely match up. Ken Myers has a really exceptional skill set: he is deeply knowledgeable and intelligent, he is a friendly but incisive interviewer, he is a magnificent editor, and he has the technical skills to produce a top-quality audio presentation. I’ve come to realize, over the past few days of conversing about all this, that what I really want is for all podcasts to be like the MHAJ. And while that may be an understandable desire, it’s an unreasonable expectation.

Tuesday, August 18, 2015


Just a quick follow-up to a comment I made on Twitter. Over the past several years I have listened to dozens and dozens of podcasts, on a very wide range of subjects, with the result that there is now not a single podcast that I listen to regularly.

Podcasts, overall, are

(1) People struggling to articulate for you stuff you could find out by looking it up on Wikipedia (e.g. In Our Time);

(2) People using old-timey radio tricks to fool you into thinking that a boring and inconsequential story is fascinating (e.g. Serial);

(3) People leveraging their celebrity in a given field as permission to ramble incoherently about whatever happens to come to their minds (e.g. The Talk Show); or

(4) People using pointless audio-production tricks to make a pedestrian story seem cutting-edge (e.g. Radiolab).

The world of podcasting desperately needs people to take it seriously and invest real thought and creativity into it. There are a lot of not-so-smart people who invest all they have in podcasts; there are a lot of smart people who do podcasts as an afterthought, giving them a fraction of the attention they give to their "real work." So far it's a medium of exceptional potential almost wholly unrealized.

All that said, The Memory Palace is pretty good.

Monday, August 17, 2015

reification and modernity

Until this morning I was certain that I had posted this some weeks ago ... but I can't find it. So maybe not. Apologies if this is, after all, a rerun.

One of the chief themes of Peter Harrison's recent book The Territories of Science and Religion is the major semantic alteration both terms of his title — science (scientia) and religion (religio) — have undergone over the centuries. For instance,

In an extended treatment of the virtues in the Summa theologiae, Aquinas observes that science (scientia) is a habit of mind or an “intellectual virtue.” The parallel with religio, then, lies in the fact that we are now used to thinking of both religion and science as systems of beliefs and practices, rather than conceiving of them primarily as personal qualities. And for us today the question of their relationship is largely determined by their respective doctrinal content and the methods through which that content is arrived at. For Aquinas, however, both religio and scientia were, in the first place, personal attributes.

The transformation in each term is, then, a form of reification: a "personal attribute," a habit or virtue, gradually becomes externalized — becomes a kind of thing, though not a material thing — becomes something out there in the world.

What's especially interesting about this, to me, is that scientia and religio aren't the only important words this happens to. Harrison mentions also the case of "doctrine":

In antiquity, doctrina meant “teaching” — literally, the activity of a doctor — and “the habit produced by instruction,” in addition to referring to the knowledge imparted by teaching. Doctrina is thus an activity or a process of training and habituation. Both of these understandings are consistent with the general point that Christianity was understood more as a way of life than a body of doctrines. Moreover they will also correlate with the notion of theology as an intellectual habit, as briefly noted in the previous chapter. As for the subject matter of doctrina — its cognitive component, if you will — this was then understood to be scripture itself, rather than “doctrines” in the sense of systematically arranged and logically related theological tenets. To take the most obvious example, Augustine’s De doctrina Christiana (On Christian Teaching) was devoted to the interpretation of scripture, and not to systematic theology.

So from "the activity of a doctor" — what a learned man does — doctrine becomes a body of propositions.

Curiously, the same thing has happened to a word that I am professionally quite familiar with: "literature." We now use it to refer to a category of texts ("That's really more literature than philosophy, don't you think?") or to a body or collection of texts ("Victorian literature"). But in Dr. Johnson's Dictionary literature is defined as "Learning; skill in letters." And this remains the first meaning in the OED:

Familiarity with letters or books; knowledge acquired from reading or studying books, esp. the principal classical texts associated with humane learning (see humane adj. 2); literary culture; learning, scholarship. Also: this as a branch of study. Now hist.

"Now hist." — historical, no longer current. Yet for Johnson it was the only meaning. (It's interesting, though, that the examples of such usage he cites seem to me to fit the modern meaning better than the one he offers — as though the meaning of the term is already changing in ways Johnson fails to see.)

So here we have a series of personal attributes — traits acquired through the exercise of discipline until they become virtues — that become external, more-or-less objective stuff. (Gives a new resonance to Alasdair MacIntyre's famous title After Virtue.) Which makes me wonder: is there a link between the rise of modernity and this reifying tendency in language? And if so, might this link be related to the technological aspect of modernity that I've been asking about lately? If a social order is increasingly defined and understood in terms of what it makes and uses — of things external to the people making and using them — then might that not create a habit of mind that would lead to the reifying of acts, habits, traits, and virtues? What is important about us, in this way of thinking, would not be who we are but what we make, what we surround ourselves with, what we wield.

Tuesday, August 11, 2015

algorithms and responsibility

One of my fairly regular subthemes here is the increasing power of algorithms over our daily lives and what Ted Striphas has called “the black box of algorithmic culture”. So I am naturally interested in this interview with Cynthia Dwork on algorithms and bias — more specifically, on the widespread, erroneous, and quite poisonous notion that if decisions are being made by algorithms they can’t be biased. (See also theses 54 through 56 here.)

I found this exchange especially interesting:

Q: Whose responsibility is it to ensure that algorithms or software are not discriminatory?

A: This is better answered by an ethicist. I’m interested in how theoretical computer science and other disciplines can contribute to an understanding of what might be viable options. The goal of my work is to put fairness on a firm mathematical foundation, but even I have just begun to scratch the surface. This entails finding a mathematically rigorous definition of fairness and developing computational methods — algorithms — that guarantee fairness.

Good for Dwork that she’s concerned about these things, but note her rock-solid foundational assumption that fairness is something that can be “guaranteed” by the right algorithms. And yet when asked a question about right behavior that’s clearly not susceptible to an algorithmic answer — Who is responsible here? — Dwork simply punts: “This is better answered by an ethicist.”
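
For readers wondering what a "mathematically rigorous definition of fairness" might even look like in practice, here is a minimal sketch of my own, not Dwork's formulation, with invented data: the criterion usually called demographic parity, which simply asks whether an algorithm's favorable decisions fall at roughly equal rates across groups.

```python
# A minimal sketch (mine, not Dwork's) of one candidate "rigorous definition of
# fairness": demographic parity, i.e. favorable decisions should occur at
# roughly equal rates across groups. All data below is invented.

def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between any two groups.

    decisions: list of 0/1 outcomes (1 = favorable, e.g. a loan approved)
    groups:    list of group labels, parallel to `decisions`
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        favorable, total = counts.get(group, (0, 0))
        counts[group] = (favorable + decision, total + 1)
    rates = {g: favorable / total for g, (favorable, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions produced by some scoring algorithm:
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is favored 60% of the time, group B 40%, so the gap is 0.2.
print(round(demographic_parity_gap(decisions, groups), 2))
```

Notice how much moral work the choice of criterion itself does, which is precisely the kind of question the code cannot settle; and, if I understand her paper rightly, Dwork herself argues that simple group-level criteria of this sort are insufficient and proposes a more individualized standard.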

One of Cornel West’s early books is called The American Evasion of Philosophy, and — if I may riff on his title more than on the particulars of his argument — this is a classic example of that phenomenon in all of its aspects. First, there is the belief that we don't need to think philosophically because we can solve our problems by technology; then, second, when technology as such fails, the call goes out for expertise, in this case in the form of an “ethicist.” And then, finally, in the paper Dwork co-authored on fairness that prompted this interview, we find the argument that the parameters of fairness “would be externally imposed, for example, by a regulatory body, or externally proposed, by a civil rights organization,” accompanied by a citation of John Rawls.

In the Evasion of Philosophy sweepstakes, that’s pretty much the trifecta: moral reflection and discernment by ordinary people replaced by technological expertise, academic expertise, and political expertise — the model of expertise being technical through and through. ’Cause that’s just how we roll.

Friday, August 7, 2015


Portrait of Virginia Woolf by Roger Fry (Wikimedia)

Suzanne Berne on Virginia Woolf: A Portrait, by Viviane Forrester:

But it’s Leonard who gets dragged in front of the firing squad. Not only did he encourage Bell’s patronizing portrayal of Virginia; according to Forrester, he was also responsible for his wife’s only true psychotic episode, and probably helped usher her toward suicide. These accusations are fierce and emphatic: Leonard projected his own neuroses and his own frigidity onto Woolf (he had a horror of beginner sex and found most women’s bodies “extraordinarily ugly”). He married her strictly to get out of Ceylon, where he was in the British Foreign Service and where he had fallen into a suicidal depression. (He hated both the place and the position, though he pretended later to have thrown over a fabulous career for Virginia.) Without medical corroboration, he decreed that she was too unbalanced to have children, triggering her legendary mental breakdown immediately after their honeymoon. Then he held the threat of institutionalization over her, coercing her into a secluded country lifestyle that suited him but isolated and disheartened her, while using his marriage as entrée to an aristocratic, intellectual world that, as “a penniless Jew” from the professional class — just barely out of a shopkeeper’s apron — he could not have otherwise hoped to join.

This excerpt from Forrester’s book confirms Berne’s description of Forrester's attack on Leonard Woolf.

When, in March of 1941, Woolf decided to take her own life, here is the heart-wrenching letter she left for Leonard:


I feel certain I am going mad again. I feel we can’t go through another of those terrible times. And I shan’t recover this time. I begin to hear voices, and I can’t concentrate. So I am doing what seems the best thing to do. You have given me the greatest possible happiness. You have been in every way all that anyone could be. I don’t think two people could have been happier till this terrible disease came. I can’t fight any longer. I know that I am spoiling your life, that without me you could work. And you will I know. You see I can’t even write this properly. I can’t read. What I want to say is I owe all the happiness of my life to you. You have been entirely patient with me and incredibly good. I want to say that – everybody knows it. If anybody could have saved me it would have been you. Everything has gone from me but the certainty of your goodness. I can’t go on spoiling your life any longer.

I don’t think two people could have been happier than we have been.

Forrester quotes that last line in her book (p. 203) and offers one line of commentary on it: “What was Virginia Woolf denied? Respect.”

What counts as denying someone respect? Offhand, I’d say that if I believed that I understood the emotional life and intimate relationships of a great artist I had never met, and who died before I came of age, better than she understood them herself … that would be denying her respect.

Thursday, August 6, 2015

the humanities and the university

A few years ago, the American Academy of Arts and Sciences commissioned a report on the place of the humanities and social sciences in America in the coming years — here’s a PDF. And here’s how the report, The Heart of the Matter, begins:

Who will lead America into a bright future?

Citizens who are educated in the broadest possible sense, so that they can participate in their own governance and engage with the world. An adaptable and creative workforce. Experts in national security, equipped with the cultural understanding, knowledge of social dynamics, and language proficiency to lead our foreign service and military through complex global conflicts. Elected officials and a broader public who exercise civil political discourse, founded on an appreciation of the ways our differences and commonalities have shaped our rich history. We must prepare the next generation to be these future leaders.

And in this vein the report continues: study the humanities so you can become a leader in your chosen profession.

Which is a great argument, as long as there is reliable evidence that investing tens of thousands of dollars to study the humanities pays off in income and status later on. But what if that isn't true, or ceases to be true? The Heart of the Matter puts all its argumentative eggs in the income-and-status basket; I'm not sure that's such a great idea.

If the general public comes to believe that the humanities don't pay — at least, not in the way The Heart of the Matter suggests — then that won't be the end of the humanities. Friends will still meet to discuss Dante; a few juvenile offenders will still read Dostoevsky.

And the digital realm will play a part also: James Poulos has recently written about SPOCs — not MOOCs, Massive Open Online Courses, but Small Private Online Courses:

In small, private forums, pioneers who want to pursue wisdom can find a radically alternate education — strikingly contemporary, yet deeply rooted in the ancient practice of conversational exegesis.

Everyone wins if that happens. Wisdom-seekers can connect cheaply, effectively, intimately, and quickly, even if they're dispersed over vast distances. Universities can withdraw fully from the wisdom business, and focus on the pedigree business. And the rest of us can get on with our lives.

In a similar vein, Johann Neem has imagined an “academy in exile”:

As the university becomes more vocational and less academic in its orientation, we academics may need to find new ways to live out our calling. The academy is not the university; the university has simply been a home for academics. University education in our country is increasingly not academic: it is vocational; it is commercial; it is becoming anti-intellectual; and, more and more, it is offering standardized products that seek to train and certify rather than to educate people. In turn, an increasing proportion of academics, especially in the humanities, have become adjuncts, marginalized by the university’s growing emphasis on producing technical workers.

The ideas offered above all build on the core commitments of the academy, and the tradition of seeing the academy as a community of independent scholars joined together by their commitment to producing and sharing knowledge. Increasingly, however, universities claim to own the knowledge we produce, as do for-profit vendors who treat knowledge as proprietary. To academics, each teacher is an independent scholar working with her or his students and on her or his research, but also a citizen committed to sharing her or his insights with the world as part of a larger community of inquiry.

I do not agree with Poulos that in this severance of the humanities (in their wisdom-seeking capacity) from the university “everyone wins”: I think it would impoverish both the humanities and the university. Those dedicated to the pursuit of wisdom need the challenge of those who pursue other ends, and vice versa, and the university has been a wonderful place for those challenges to happen.

Moreover, I believe the place of the humanities — the wisdom-seeking humanities — in the contemporary American university is not a lost cause. It can still be defended — but not, I think, in the way that The Heart of the Matter tries to defend it. Some of us are working on an alternative. Stay tuned.

Monday, August 3, 2015

Thrun, fisked

Let’s work through this post by Sebastian Thrun. All of it.

You’re at the wheel, tired. You close your eyes, drift from your lane. This time you are lucky. You awaken, scared. If you are smart, you won’t drive again when you are about to fall asleep.

Well ... people don't always have a choice about this kind of thing. I mean, sometimes people drive when they’re about to fall asleep because they’ve been working a really long time and driving is the only way they can get home. But never mind. Proceed.

Through your mistakes, you learn. But other drivers won’t learn from your mistakes. They have to make the same mistakes by themselves — risking other people’s lives.

This is true. Also, when I learned to walk, to read, to hit a forehand, to drive a manual-transmission car, no one else but me learned from my mistakes. This seems to be how learning works, in general. However, some of the people who taught me these things explained them to me in ways that helped me to avoid mistakes; and often they were drawing on their own experience. People may even have told me to load up on caffeine before driving late at night. This kind of thing happens a lot among humans — the sharing of knowledge and experience.

Not so the self-driving car. When it makes a mistake, all the other cars learn from it, courtesy of the people programming them. The first time a self-driving car encountered a couch on the highway, it didn’t know what to do and the human safety driver had to take over. But just a few days later, the software of all cars was adjusted to handle such a situation. The difference? All self-driving cars learn from this mistake, not just one. Including future, “unborn” cars.

Okay, so the cars learn ... but I guess the people in the cars don't learn anything.
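
To be fair, the mechanism Thrun is describing is real enough in outline: one car's logged failure becomes labeled training data for every car in the fleet. Here is a schematic sketch of that pattern, in which every name and class is invented and nothing resembles anybody's actual pipeline:

```python
# A schematic sketch of "fleet learning": one vehicle's failure case is logged,
# folded into the shared model, and the update is pushed to every car.
# All names, classes, and data here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SharedModel:
    version: int = 1
    known_obstacles: set = field(default_factory=set)

    def recognizes(self, obstacle: str) -> bool:
        return obstacle in self.known_obstacles

@dataclass
class Car:
    model: SharedModel

    def encounter(self, obstacle: str, fleet_log: list) -> str:
        if self.model.recognizes(obstacle):
            return f"avoided {obstacle}"
        fleet_log.append(obstacle)          # safety driver takes over; case is logged
        return f"disengaged: unknown {obstacle}"

def retrain(model: SharedModel, fleet_log: list) -> SharedModel:
    """Fold every logged failure into a new model version for the whole fleet."""
    return SharedModel(version=model.version + 1,
                       known_obstacles=model.known_obstacles | set(fleet_log))

fleet_log = []
model = SharedModel(known_obstacles={"pedestrian", "cyclist"})
cars = [Car(model) for _ in range(3)]

print(cars[0].encounter("couch", fleet_log))   # first car fails on the couch

model = retrain(model, fleet_log)              # "a few days later"
cars = [Car(model) for _ in cars]              # update pushed to every car

print(cars[1].encounter("couch", fleet_log))   # now every car handles it
```

The asymmetry Thrun is trading on is that the fix lives in the shared model rather than in any driver's head; whether that counts as the cars "learning" in any sense comparable to human learning is another question.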

When it comes to artificial intelligence (AI), computers learn faster than people.

I don't understand what “when it comes to” means in this sentence, but “Some computers learn some things faster than some people” would be closer to a true statement. Let’s stick with self-driving cars for a moment: you and I have no trouble discerning and avoiding a pothole, but Google’s cars can’t do that at all. You and I can tell when a policeman on the side of the road is signaling for us to slow down or stop, and can tell whether that’s a big rock in the road or just a piece of cardboard, but Google’s cars are clueless.

The Gutenberg Bible is a beautiful early example of a technology that helped humans distribute information from brain to brain much more efficiently. AI in machines like the self-driving car is the Gutenberg Bible, on steroids.

“On steroids”?

The learning speed of AI is immense, and not just for self-driving cars. Similar revolutions are happening in fields as diverse as medical diagnostics, investing, and online information access.

I wonder what simple, everyday tasks those systems are unable to perform.

Because machines can learn faster than people, it would seem just a matter of time before we will be outranked by them.


Today, about 75 percent of the United States workforce is employed in offices — and most of this work will be taken away by AI systems. A single lawyer or accountant or secretary will soon be 100 times as effective with a good AI system, which means we’ll need fewer lawyers, accountants, and secretaries.

What do you mean by “effective”?

It’s the digital equivalent of the farmers who replaced 100 field hands with a tractor and plow. Those who thrive will be the ones who can make artificial intelligence give them superhuman capabilities.

“Make them”? How?

But if people become so very effective on the job, you need fewer of them, which means many more people will be left behind.

“Left behind” in what way? Left behind to die on the side of the road? Or what?

That places a lot of pressure on us to keep up, to get lifelong training for the skills necessary to play a role.

“Lifelong training”? Perhaps via those MOOCs that have been working so well? And what does “play a role” mean? The role of making artificial intelligence give me superhuman capabilities?

The ironic thing is that with the effectiveness of these coming technologies we could all work one or two hours a day and still retain today’s standard of living.

How? No, seriously, how would that play out? How do I, in my job, get to “one or two hours a day”? How would my doctor do it? How about a plumber? I’m not asking for a detailed roadmap of the future, but just sketch out a path, dude. Otherwise I might think you’re just talking through your artificially intelligent hat. Also, do you know what “ironic” means?

But when there are fewer jobs — in some places the chances are smaller of landing a position at Walmart than gaining admission to Harvard —

That’s called lying with statistics (an acceptance-rate comparison mostly measures how many people apply for each opening, not how scarce the jobs are), but never mind, keep going.

— one way to stay employed is to work even harder. So we see people working more, not less.

If by “people,” you mean “Americans,” then that is probably true — but these things have been highly variable throughout history. And anyway, how does “people working more” fit with your picture of the coming future?

Get ready for unprecedented times.

An evergreen remark, that one is.

We need to prepare for a world in which fewer and fewer people can make meaningful contributions.

Meaningful contributions to what?

Only a small group will command technology and command AI.

What do you mean by “command”? Does a really good plumber “command technology”? If not, why not? How important is AI in comparison to other technologies, like, for instance, farming?

What this will mean for all of us, I don’t know.

Finally, an honest and useful comment. Thrun doesn't know anything about what he was asked to comment on, but that didn't stop him from extruding a good deal of incoherent vapidity, nor did it stop an editor at Pacific Standard from presenting it to the world.

Thursday, July 30, 2015

Disagreement, Modernity, Technology

In the last couple of weeks I have published three posts over at The American Conservative on disagreement and its management.

Since this blog largely deals with technological and academic questions, I tend to move over to AmCon when I have something to say about political and social issues … but there’s a lot of overlap between these broad categories, and I seriously thought about posting the third entry in that series here at Text Patterns.

Instead, I’m just linking to the series, but I want to point out something that seems important to me: that there is a clear and strong connection between (a) the need to think acutely about how social media shape our politics and ethics and (b) the need that I’ve been emphasizing here for a technological history of modernity. The pathologies of our shared socio-political life do not just arise from immediate contexts and recent technologies, but have been generated by disputes and technologies that go back at least half a millennium. The history of modernity’s rise and the critique of new media are in a sense a single enterprise, a point which, for all he may have gotten wrong, Marshall McLuhan understood profoundly.