Text Patterns - by Alan Jacobs

Monday, August 31, 2015

the American university and resource dependence


We've heard a lot in recent years about the decline in American states' support for higher education — which has indeed been happening — with the implicit or explicit corollary worry that this decline is leading to the privatization of the university, the subjugation of academe to the demands of the marketplace, etc. And I don't think those worries are wholly misbegotten. But this post by Beth Popp Berman suggests that there may be something larger to think about: the transfer of universities' resource dependency from state governments to the federal government, thanks to a pretty massive rise in federal student aid — which comes with strings attached, for students and institutions alike. Here's Berman's conclusion: 

If organization theory tells us anything, it’s that resource dependence matters. When, five years down the road, we get a Race to the Top rewarding colleges that meet completion and job placement goals at a given tuition cost, I know where I’ll be looking: at that point in 2002 where higher ed waved goodbye to the states and hello to the feds.

Given the close collaboration of our national government with the world's biggest businesses, it seems unlikely that this development will bring about a rescue from privatization. Rather, the feds are likely to be a very effective instrument for implementing the values and priorities of the market.

Among other things, this means that finding ways to create educational environments that are genuinely intellectually independent, and genuinely countercultural, is just going to get harder. James Poulos has suggested that small, private, online courses may be the future of educational seriousness and genuine innovation. That's an argument worth considering, and I hope eventually to do so here.

Sunday, August 30, 2015

on Aurora

My friend Adam Roberts, whose critical judgment is superb, loved Kim Stanley Robinson’s new novel Aurora; I didn’t. At all. And while such differences in literary experience are inevitable and commonplace — “People who like this sort of thing will find this the sort of thing they like” is the most truthful of all reviews — I’m a little uncomfortable to be so far from Adam in my response.

And that’s because I didn’t like the book. If I had liked it more than Adam did I wouldn’t be bothered; but I’d prefer not to be the sort of reader whose insufficient catholicity of taste, or readerly insensitivity, blocks him from appreciating things that deserve appreciation. But I didn’t care for Aurora, and I think I can say why: I was not moved or convinced by the cultural world it portrays.

Adam writes, “Aurora is a magnificent piece of writing, certainly Robinson’s best novel since his mighty Mars trilogy, perhaps his best ever.” So since he compared it to the Mars trilogy, I will too — even though in one sense that’s unfair, since the Mars books gave Robinson at least three times as many words in which to portray a fictional world. But the stories have a fundamental three-part structure in common:

  1. The decision to send human beings to another world.
  2. How they get there.
  3. What they do when they arrive.

The proportions vary greatly: the Mars trilogy is overwhelmingly about number 3, Aurora more focused on number 2. And you could make an argument that the richer cultural world of the Mars trilogy is a function not only of its greater length but also of its dominant setting. Still, as I read Aurora I kept thinking about the two-dimensionality of the lives of the people living on their ship headed for Tau Ceti. They were all focused on personal relationships, political questions, and the technologies needed to manage life in a strange environment. That’s it. One group of people, living in one biome, had developed a kind of ritual in which they introduce young people to the fact that they are living in a starship … but if any of the other biomic cultures had done something similar, we don’t hear about it. Also, sometimes people play music. But that exhausts the cultural life of the ship — which the people haven’t even named. The ship’s AI suggests that it be called “Ship.” But I cannot imagine that human beings living for generations on a starship wouldn’t name the damned thing.

On Mars, in Robinson’s trilogy, there are poets, and composers, and dramatists — a rich cultural and artistic life. There are serious (and endless, and fascinating) philosophical debates about what they’re doing on Mars and why they’re doing it. Is it too much to expect something of the kind on Aurora’s generational ship? I don’t think so. Czeslaw Milosz writes somewhere — in The Witness of Poetry, I think — about situations of extreme suffering and deprivation in which poetry becomes “as necessary as bread.” And I am persuaded by the governing conceit of Emily St. John Mandel’s Station Eleven (about which I wrote briefly here): that if civilization collapsed people would value all the more the music and drama and poetry that had seemed so frivolous and ancillary in a fully-functioning world.

I’m trying not to spoil Aurora too much here, but I think it’s okay to say that when the ship finally gets to the Tau Ceti system an intense dispute arises about whether the people should stay there or go somewhere else. When some characters are taken aback by the passionate intensity of those who want to stick with the original plan, one person comments, “I do think it helps to think of the stayers as holding a religious position. The Tau Ceti system has been their religion all their lives, they say, and now they are being told that it won’t work here, that the idea was a fantasy. They can’t accept it.”

But I don’t see any evidence in the text that people think or act — or ever thought or acted — in a religious way about this, or about anything else. Robinson seems to portray them as simply being excited about coming to the end of a long voyage. There doesn’t seem to be much (any) reflection about those thousands of people who were born on the ship and died on the ship — like Israelites who were born in the wilderness and died before reaching the Promised Land. Surely this is something that people would have thought about in the 160 or so years that the ship had been sailing through space, and probably even before they departed Earth. It’s hard for me not to imagine that on such a ship there would be whole philosophical schools — not formal, not professional, but made up of people deeply invested in the key questions. You see something like that in Neal Stephenson’s Seveneves, a book I have also commented on. Yet the people of Aurora seem myopically focused on the immediate and practical; and in that sense they don’t seem fully human to me.

That’s why I didn’t like the book very much.

Friday, August 28, 2015

on difficulty

In this exchange on literary difficulty I think Leslie Jamison gives us something far more useful than Heller does.

Here’s Heller:

Recently, when I read Christine Schutt’s short story “You Drive” with a graduate writing class, several of the students complained that they found the story baffling. They couldn’t make out the chronology of the events it described; they weren’t always sure which character was speaking; the story, they concluded, “didn’t work.” The fact that they had trouble following Schutt’s elliptical prose was not in itself a surprise. What did take me aback was their indignation — their certainty that the story’s difficulty was a needless imposition on readerly good will. It was as if any writing that didn’t welcome them in and offer them the literary equivalent of a divan had failed a crucial hospitality test.

The “as if” in that last sentence is doing a lot of work, and rather snide work at that. Why should Heller conclude that her students’ dislike of one story is revelatory of a sense of readerly entitlement, a universal demand that texts “welcome them in and offer them the literary equivalent of a divan”? Maybe she assigned a poor story, and the students would have responded more positively to an equally demanding one that was better-crafted. You can’t tell what people think about “any writing” on the basis of their opinions about a single text. 

It’s easy and natural for teachers to explain every classroom clunker by blaming the inadequacies of their students. It’s also a tendency very much to be resisted.

Jamison, by contrast, approaches the question of difficulty in a much more specific way, and what I like best about her brief narrative is its acknowledgment that a reader might approach a given book with a very different spirit in one set of circumstances — or at one moment of her life — than in another. It’s something I have said and written often: that one need not think that setting a book aside is a permanent and irreversible verdict on the book — or on oneself. People change; frames of mind and heart come and go; and if a book and a reader happen to find each other, it’s beautiful.

Wednesday, August 26, 2015

Twitter and emotional resilience

It seems to me that one of the most universal and significant differences between young people and their elders is the emotional resilience of the young. Most young people — the damaged always excepted — can plunge into the deepest and wildest waters of their inner lives because they know they have what it takes to take the buffeting, even to be energized by it, and to recover easily, quickly, completely.

I’ve seen this often with students over the years. I’ve had people come to my office and disintegrate before my eyes, collapse in convulsive weeping — and then, fifteen minutes later, walk out into the world utterly composed and even cheerful. There was a time when I could have done the same. When I was their age and feeling angry, I wanted music that echoed and amplified that anger; when I was deep in melancholy, I would drive the streets at 2 A.M. and listen to Kind of Blue over and over. But looking back on these habits, I think I allowed them because, on some level, I knew I could climb out of the pit when I needed to.

Those days are past. When the world’s rough waters have buffeted you for several decades, you wear down, you lose your resilience. Now if I feel agitated or melancholy, I seek countervailing forces: the more peaceable and orderly music of Bach and Mozart and Handel, the movies of Preston Sturges, the prose of Jane Austen or P. D. James. (Classic mysteries, with their emphasis on finding and purging the sources of social disorder, have become increasingly important to me.) These are coping mechanisms, ways for me to keep my emotional balance.

This morning my Twitter feed was overwhelmed by yet another Twitter tsunami, this one prompted by the murder of two television journalists in Virginia. This one is a little different from the usual, because much of the conversation is centering on people who, with crass and absolute insensitivity, are retweeting footage of the actual murder itself: thanks to the curse of video autoplay, thousands and thousands of people are being confronted by frightening, disturbing scenes that they never wanted to see. But in general it follows the same pattern as all the other tsunamis: hundreds and hundreds of tweets and retweets of the same information, over and over, all day long.

And I think: I don’t need this. I could make some principled, or “principled,” arguments against it — that there's no reason to pay more attention to this murder than any of the several dozen others that will happen in America today, that this is a classic illustration of the "society of the spectacle", that we should follow Augustine's example in denouncing curiositas — but my real problem is that it just makes me very sad and very tired, and I have too much to do to be sad and tired.

And then it occurs to me: maybe Twitter — maybe social media more generally — really is a young person's thing after all. Intrinsically, not just accidentally.

Monday, August 24, 2015

social media triage

We all have to find ways to manage our social-media lives. I have a few rules about engaging with other people, developed over the past several years, which I hold to pretty firmly, though not with absolute consistency.

1) If on Twitter or in blog comments you're not using your real name, I won't reply to you.

2) I never read the comments on any post that appears on a high-traffic online site, even if I have written it.

3) I have Twitter set up so that I typically see replies only from people I follow. Every once in a while I may look through my replies, but honestly, I try not to. So if you're asking me a question on Twitter, I will either never see it or, probably, will see it only some days or weeks after you've asked.

4) If I happen to see that you have tweeted me-wards but I don't know you, I will probably not reply.

Why do I follow these rules? Because my experiences in conversing with strangers online have been about 95% unpleasant. Especially as one reaches what the French call un certain âge, cutting unnecessary losses — conserving intellectual and emotional energy — becomes more important than creating new experiences. At least that's how it's been for me. This is unfortunate for, and perhaps unfair to, people who want to engage constructively; but y'all are greatly outnumbered by the trollish, the snarky, those who reply to things they haven't read, and the pathologically contentious. And in the limited time I have to spend on social media, I prefer to nurture relationships I already have.

I've said some of these things before, but since in the past week I've received three why-didn't-you-answer-my-tweet emails, I thought it might be worthwhile to say them again.

podcasts redux

Perhaps the chief thing I learned from my post on podcasting is that a great many people take “podcast” to mean something like “any non-music audio you can listen to on your smartphone.” Okay, fair enough; the term often is used that way. And I sort of used it that way myself, even though I didn’t really mean to. This made my post less coherent than it ought to have been. 

In more precise usage, a podcast is something like an audio blog post: born digital and distributed to interested parties via web syndication. We commonly distinguish between a magazine article that gets posted online and a blog post, even when the magazine posts the article to its blog and you see it in your RSS reader; similarly, In Our Time and This American Life are radio programs that you can get in podcast form, not podcasts as such. The Mars Hill Audio Journal is an audio periodical and even farther from the podcast model because it isn’t syndicated: you have to purchase and download its episodes — and you should!  (By the way, I couldn’t help smiling at all the people who told me that I should give Mars Hill a try, given this. How did they manage to miss me?) (Also by the way, MHAJ has an occasional podcast: here.)

So clearly I should not have used In Our Time to illustrate a point about podcasts, even if I do typically listen to it in podcast form. My bad.

In Our Time has a great many fans, it seems, and while on one level I understand why, I'm typically frustrated by the show. It typically begins with Melvyn Bragg saying something like, "So Nigel, who was Maimonides?" — to which Nigel, a senior lecturer in Judaic Studies at University College, London, replies, "Maimonides was born...." And then off we go for half-an-hour of being bludgeoned with basic facts by three academics with poor voices for radio. Only in the last few minutes of the episode might an actual conversation or debate break out. If you don't especially like reading, then I guess this is a reasonably painless way to learn some stuff, but it doesn't do a lot for me.

I also discovered that EconTalk has a great many fans, and indeed, you can learn a good deal on EconTalk about stuff it would be hard to discover elsewhere. But EconTalk is basically people talking on the phone, and the complete lack of production values grates on me.

So, sorting through all these responses, I have come to two conclusions. The first is that for a great many people podcast-listening is primarily a means of downloading information or entertainment to their brains. It's content they want, and the form and quality of presentation don't, for these people, count for a lot.

The second conclusion is that in these matters I have been really, really spoiled by the Mars Hill Audio Journal. Even though it is not a podcast, it is, I now realize, the standard by which I tend to judge podcasts. And they rarely match up. Ken Myers has a really exceptional skill set: he is deeply knowledgeable and intelligent, he is a friendly but incisive interviewer, he is a magnificent editor, and he has the technical skills to produce a top-quality audio presentation. I’ve come to realize, over the past few days of conversing about all this, that what I really want is for all podcasts to be like the MHAJ. And while that may be an understandable desire, it’s an unreasonable expectation.

Tuesday, August 18, 2015

podcasts

Just a quick follow-up to a comment I made on Twitter. Over the past several years I have listened to dozens and dozens of podcasts, on a very wide range of subjects, with the result that there is now not a single podcast that I listen to regularly.

Podcasts, overall, are

(1) People struggling to articulate for you stuff you could find out by looking it up on Wikipedia (e.g. In Our Time);

(2) People using old-timey radio tricks to fool you into thinking that a boring and inconsequential story is fascinating (e.g. Serial);

(3) People leveraging their celebrity in a given field as permission to ramble incoherently about whatever happens to come to their minds (e.g. The Talk Show); or

(4) People using pointless audio-production tricks to make a pedestrian story seem cutting-edge (e.g. Radiolab).

The world of podcasting desperately needs people to take it seriously and invest real thought and creativity into it. There are a lot of not-so-smart people who invest all they have in podcasts; there are a lot of smart people who do podcasts as an afterthought, giving them a fraction of the attention they give to their "real work." So far it's a medium of exceptional potential almost wholly unrealized.

All that said, The Memory Palace is pretty good.

Monday, August 17, 2015

reification and modernity

Until this morning I was certain that I had posted this some weeks ago ... but I can't find it. So maybe not. Apologies if this is, after all, a rerun.



One of the chief themes of Peter Harrison's recent book The Territories of Science and Religion is the major semantic alteration both terms of his title — science (scientia) and religion (religio) — have undergone over the centuries. For instance,

In an extended treatment of the virtues in the Summa theologiae, Aquinas observes that science (scientia) is a habit of mind or an “intellectual virtue.” The parallel with religio, then, lies in the fact that we are now used to thinking of both religion and science as systems of beliefs and practices, rather than conceiving of them primarily as personal qualities. And for us today the question of their relationship is largely determined by their respective doctrinal content and the methods through which that content is arrived at. For Aquinas, however, both religio and scientia were, in the first place, personal attributes.

The transformation in each term is, then, a form of reification: a "personal attribute," a habit or virtue, gradually becomes externalized — becomes a kind of thing, though not a material thing — becomes something out there in the world.

What's especially interesting about this, to me, is that scientia and religio aren't the only important words this happens to. Harrison mentions also the case of "doctrine":

In antiquity, doctrina meant “teaching” — literally, the activity of a doctor — and “the habit produced by instruction,” in addition to referring to the knowledge imparted by teaching. Doctrina is thus an activity or a process of training and habituation. Both of these understandings are consistent with the general point that Christianity was understood more as a way of life than a body of doctrines. Moreover they will also correlate with the notion of theology as an intellectual habit, as briefly noted in the previous chapter. As for the subject matter of doctrina — its cognitive component, if you will — this was then understood to be scripture itself, rather than “doctrines” in the sense of systematically arranged and logically related theological tenets. To take the most obvious example, Augustine’s De doctrina Christiana (On Christian Teaching) was devoted to the interpretation of scripture, and not to systematic theology.

So from "the activity of a doctor" — what a learned man does — doctrine becomes a body of propositions.

Curiously, the same thing has happened to a word that I am professionally quite familiar with: "literature." We now use it to refer to a category of texts ("That's really more literature than philosophy, don't you think?") or to a body or collection of texts ("Victorian literature"). But in Dr. Johnson's Dictionary literature is defined as "Learning; skill in letters." And this remains the first meaning in the OED:

Familiarity with letters or books; knowledge acquired from reading or studying books, esp. the principal classical texts associated with humane learning (see humane adj. 2); literary culture; learning, scholarship. Also: this as a branch of study. Now hist.

"Now hist." — historical, no longer current. Yet for Johnson it was the only meaning. (It's interesting, though, that the examples of such usage he cites seem to me to fit the modern meaning better than the one he offers — as though the meaning of the term is already changing in ways Johnson fails to see.)

So here we have a series of personal attributes — traits acquired through the exercise of discipline until they become virtues — that become external, more-or-less objective stuff. (Gives a new resonance to Alasdair MacIntyre's famous title After Virtue.) Which makes me wonder: is there a link between the rise of modernity and this reifying tendency in language? And if so, might this link be related to the technological aspect of modernity that I've been asking about lately? If a social order is increasingly defined and understood in terms of what it makes and uses — of things external to the people making and using them — then might that not create a habit of mind that would lead to the reifying of acts, habits, traits, and virtues? What is important about us, in this way of thinking, would not be who we are but what we make, what we surround ourselves with, what we wield.

Tuesday, August 11, 2015

algorithms and responsibility

One of my fairly regular subthemes here is the increasing power of algorithms over our daily lives and what Ted Striphas has called “the black box of algorithmic culture”. So I am naturally interested in this interview with Cynthia Dwork on algorithms and bias — more specifically, on the widespread, erroneous, and quite poisonous notion that if decisions are being made by algorithms they can’t be biased. (See also theses 54 through 56 here.)

I found this exchange especially interesting:

Q: Whose responsibility is it to ensure that algorithms or software are not discriminatory?

A: This is better answered by an ethicist. I’m interested in how theoretical computer science and other disciplines can contribute to an understanding of what might be viable options. The goal of my work is to put fairness on a firm mathematical foundation, but even I have just begun to scratch the surface. This entails finding a mathematically rigorous definition of fairness and developing computational methods — algorithms — that guarantee fairness.

Good for Dwork that she’s concerned about these things, but note her rock-solid foundational assumption that fairness is something that can be “guaranteed” by the right algorithms. And yet when asked a question about right behavior that’s clearly not susceptible to an algorithmic answer — Who is responsible here? — Dwork simply punts: “This is better answered by an ethicist.”
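
(As an aside, and a purely illustrative one: for readers wondering what a “mathematically rigorous definition of fairness” might look like in practice, here is a minimal sketch of one such definition — demographic parity, the requirement that an algorithm's positive decisions be handed out at equal rates across groups. This is not Dwork's own framework, and the function name and the data below are hypothetical.)

```python
# A minimal, illustrative sketch (not Dwork's framework): "demographic parity,"
# one candidate for a mathematically rigorous definition of fairness.
# The function and data are hypothetical.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups (0.0 = parity)."""
    counts = {}  # group -> (positives, total)
    for decision, group in zip(decisions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + decision, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Example: group "a" is approved 2 times out of 3, group "b" once out of 3,
# so the gap is about 0.33 -- far from the parity an auditor might demand.
print(demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"]))
```

A metric like this can tell you how far an algorithm departs from one stipulated standard; what it cannot tell you is who answers for the choice of standard — which is the question Dwork hands off.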

One of Cornel West’s early books is called The American Evasion of Philosophy, and — if I may riff on his title more than on the particulars of his argument — this is a classic example of that phenomenon in all of its aspects. First, there is the belief that we don't need to think philosophically because we can solve our problems by technology; then, second, when technology as such fails, there is the recourse to expertise, in this case in the form of an “ethicist.” And then, finally, in the paper Dwork co-authored on fairness that prompted this interview, we find the argument that the parameters of fairness “would be externally imposed, for example, by a regulatory body, or externally proposed, by a civil rights organization,” accompanied by a citation of John Rawls.

In the Evasion of Philosophy sweepstakes, that’s pretty much the trifecta: moral reflection and discernment by ordinary people replaced by technological expertise, academic expertise, and political expertise — the model of expertise being technical through and through. ’Cause that’s just how we roll.

Friday, August 7, 2015

respect

Portrait of Virginia Woolf by Roger Fry (Wikimedia)

Suzanne Berne on Virginia Woolf: A Portrait, by Viviane Forrester:

But it’s Leonard who gets dragged in front of the firing squad. Not only did he encourage Bell’s patronizing portrayal of Virginia; according to Forrester, he was also responsible for his wife’s only true psychotic episode, and probably helped usher her toward suicide. These accusations are fierce and emphatic: Leonard projected his own neuroses and his own frigidity onto Woolf (he had a horror of beginner sex and found most women’s bodies “extraordinarily ugly”). He married her strictly to get out of Ceylon, where he was in the British Foreign Service and where he had fallen into a suicidal depression. (He hated both the place and the position, though he pretended later to have thrown over a fabulous career for Virginia.) Without medical corroboration, he decreed that she was too unbalanced to have children, triggering her legendary mental breakdown immediately after their honeymoon. Then he held the threat of institutionalization over her, coercing her into a secluded country lifestyle that suited him but isolated and disheartened her, while using his marriage as entrée to an aristocratic, intellectual world that, as “a penniless Jew” from the professional class — just barely out of a shopkeeper’s apron — he could not have otherwise hoped to join.

This excerpt from Forrester’s book confirms Berne’s description of Forrester's attack on Leonard Woolf.

When Woolf decided to take her own life in March of 1941, she left this heart-wrenching letter for Leonard:

Dearest,

I feel certain I am going mad again. I feel we can’t go through another of those terrible times. And I shan’t recover this time. I begin to hear voices, and I can’t concentrate. So I am doing what seems the best thing to do. You have given me the greatest possible happiness. You have been in every way all that anyone could be. I don’t think two people could have been happier till this terrible disease came. I can’t fight any longer. I know that I am spoiling your life, that without me you could work. And you will I know. You see I can’t even write this properly. I can’t read. What I want to say is I owe all the happiness of my life to you. You have been entirely patient with me and incredibly good. I want to say that – everybody knows it. If anybody could have saved me it would have been you. Everything has gone from me but the certainty of your goodness. I can’t go on spoiling your life any longer.

I don’t think two people could have been happier than we have been.

Forrester quotes that last line in her book (p. 203) and offers one line of commentary on it: “What was Virginia Woolf denied? Respect.”

What counts as denying someone respect? Offhand, I’d say that if I believed that I understood the emotional life and intimate relationships of a great artist I had never met, and who died before I came of age, better than she understood them herself … that would be denying her respect.

Thursday, August 6, 2015

the humanities and the university

A few years ago, the American Academy of Arts and Sciences commissioned a report on the place of the humanities and social sciences in America in the coming years — here’s a PDF. And here’s how the report, The Heart of the Matter, begins:

Who will lead America into a bright future?

Citizens who are educated in the broadest possible sense, so that they can participate in their own governance and engage with the world. An adaptable and creative workforce. Experts in national security, equipped with the cultural understanding, knowledge of social dynamics, and language proficiency to lead our foreign service and military through complex global conflicts. Elected officials and a broader public who exercise civil political discourse, founded on an appreciation of the ways our differences and commonalities have shaped our rich history. We must prepare the next generation to be these future leaders.

And in this vein the report continues: study the humanities so you can become a leader in your chosen profession.

Which is a great argument, as long as there is reliable evidence that investing tens of thousands of dollars to study the humanities pays off in income and status later on. But what if that isn't true, or ceases to be true? The Heart of the Matter puts all its argumentative eggs in the income-and-status basket; I'm not sure that's such a great idea.

If the general public comes to believe that the humanities don't pay — at least, not in the way The Heart of the Matter suggests — then that won't be the end of the humanities. Friends will still meet to discuss Dante; a few juvenile offenders will still read Dostoevsky.

And the digital realm will play a part also: James Poulos has recently written about SPOCs — not MOOCs, Massive Open Online Courses, but Small Private Online Courses:

In small, private forums, pioneers who want to pursue wisdom can find a radically alternate education — strikingly contemporary, yet deeply rooted in the ancient practice of conversational exegesis.

Everyone wins if that happens. Wisdom-seekers can connect cheaply, effectively, intimately, and quickly, even if they're dispersed over vast distances. Universities can withdraw fully from the wisdom business, and focus on the pedigree business. And the rest of us can get on with our lives.

In a similar vein, Johann Neem has imagined an “academy in exile”:

As the university becomes more vocational and less academic in its orientation, we academics may need to find new ways to live out our calling. The academy is not the university; the university has simply been a home for academics. University education in our country is increasingly not academic: it is vocational; it is commercial; it is becoming anti-intellectual; and, more and more, it is offering standardized products that seek to train and certify rather than to educate people. In turn, an increasing proportion of academics, especially in the humanities, have become adjuncts, marginalized by the university’s growing emphasis on producing technical workers.

The ideas offered above all build on the core commitments of the academy, and the tradition of seeing the academy as a community of independent scholars joined together by their commitment to producing and sharing knowledge. Increasingly, however, universities claim to own the knowledge we produce, as do for-profit vendors who treat knowledge as proprietary. To academics, each teacher is an independent scholar working with her or his students and on her or his research, but also a citizen committed to sharing her or his insights with the world as part of a larger community of inquiry.

I do not agree with Poulos that in this severance of the humanities (in their wisdom-seeking capacity) from the university “everybody wins”: I think it would impoverish both the humanities and the university. Those dedicated to the pursuit of wisdom need the challenge of those who pursue other ends, and vice versa, and the university has been a wonderful place for those challenges to happen.

Moreover, I believe the place of the humanities — the wisdom-seeking humanities — in the contemporary American university is not a lost cause. It can still be defended — but not, I think, in the way that The Heart of the Matter tries to defend it. Some of us are working on an alternative. Stay tuned.

Monday, August 3, 2015

Thrun, fisked

Let’s work through this post by Sebastian Thrun. All of it.

You’re at the wheel, tired. You close your eyes, drift from your lane. This time you are lucky. You awaken, scared. If you are smart, you won’t drive again when you are about to fall asleep.

Well ... people don't always have a choice about this kind of thing. I mean, sometimes people drive when they’re about to fall asleep because they’ve been working a really long time and driving is the only way they can get home. But never mind. Proceed.

Through your mistakes, you learn. But other drivers won’t learn from your mistakes. They have to make the same mistakes by themselves — risking other people’s lives.

This is true. Also, when I learned to walk, to read, to hit a forehand, to drive a manual-transmission car, no one else but me learned from my mistakes. This seems to be how learning works, in general. However, some of the people who taught me these things explained them to me in ways that helped me to avoid mistakes; and often they were drawing on their own experience. People may even have told me to load up on caffeine before driving late at night. This kind of thing happens a lot among humans — the sharing of knowledge and experience.

Not so the self-driving car. When it makes a mistake, all the other cars learn from it, courtesy of the people programming them. The first time a self-driving car encountered a couch on the highway, it didn’t know what to do and the human safety driver had to take over. But just a few days later, the software of all cars was adjusted to handle such a situation. The difference? All self-driving cars learn from this mistake, not just one. Including future, “unborn” cars.

Okay, so the cars learn ... but I guess the people in the cars don't learn anything.

When it comes to artificial intelligence (AI), computers learn faster than people.

I don't understand what “when it comes to” means in this sentence, but “Some computers learn some things faster than some people” would be closer to a true statement. Let’s stick with self-driving cars for a moment: you and I have no trouble discerning and avoiding a pothole, but Google’s cars can’t do that at all. You and I can tell when a policeman on the side of the road is signaling for us to slow down or stop, and can tell whether that’s a big rock in the road or just a piece of cardboard, but Google’s cars are clueless.

The Gutenberg Bible is a beautiful early example of a technology that helped humans distribute information from brain to brain much more efficiently. AI in machines like the self-driving car is the Gutenberg Bible, on steroids.

“On steroids”?

The learning speed of AI is immense, and not just for self-driving cars. Similar revolutions are happening in fields as diverse as medical diagnostics, investing, and online information access.

I wonder what simple, everyday tasks those systems are unable to perform.

Because machines can learn faster than people, it would seem just a matter of time before we will be outranked by them.

“Outranked”?

Today, about 75 percent of the United States workforce is employed in offices — and most of this work will be taken away by AI systems. A single lawyer or accountant or secretary will soon be 100 times as effective with a good AI system, which means we’ll need fewer lawyers, accountants, and secretaries.

What do you mean by “effective”?

It’s the digital equivalent of the farmers who replaced 100 field hands with a tractor and plow. Those who thrive will be the ones who can make artificial intelligence give them superhuman capabilities.

“Make them”? How?

But if people become so very effective on the job, you need fewer of them, which means many more people will be left behind.

“Left behind” in what way? Left behind to die on the side of the road? Or what?

That places a lot of pressure on us to keep up, to get lifelong training for the skills necessary to play a role.

“Lifelong training”? Perhaps via those MOOCs that have been working so well? And what does “play a role” mean? The role of making artificial intelligence give me superhuman capabilities?

The ironic thing is that with the effectiveness of these coming technologies we could all work one or two hours a day and still retain today’s standard of living.

How? No, seriously, how would that play out? How do I, in my job, get to “one or two hours a day”? How would my doctor do it? How about a plumber? I’m not asking for a detailed roadmap of the future, but just sketch out a path, dude. Otherwise I might think you’re just talking through your artificially intelligent hat. Also, do you know what “ironic” means?

But when there are fewer jobs — in some places the chances are smaller of landing a position at Walmart than gaining admission to Harvard —

That’s called lying with statistics, but never mind, keep going.

— one way to stay employed is to work even harder. So we see people working more, not less.

If by “people,” you mean “Americans,” then that is probably true — but these things have been highly variable throughout history. And anyway, how does “people working more” fit with your picture of the coming future?

Get ready for unprecedented times.

An evergreen remark, that one is.

We need to prepare for a world in which fewer and fewer people can make meaningful contributions.

Meaningful contributions to what?

Only a small group will command technology and command AI.

What do you mean by “command”? Does a really good plumber “command technology”? If not, why not? How important is AI in comparison to other technologies, like, for instance, farming?

What this will mean for all of us, I don’t know.

Finally, an honest and useful comment. Thrun doesn't know anything about what he was asked to comment on, but that didn't stop him from extruding a good deal of incoherent vapidity, nor did it stop an editor at Pacific Standard from presenting it to the world.