Text Patterns - by Alan Jacobs

Monday, December 5, 2016

on re-reading Le Guin

I’ve recently re-read Ursula Le Guin’s most famous novels, The Left Hand of Darkness (1969) and The Dispossessed (1974) — the former for the first time in, yeeesh, I don’t want to think about how long. The latter, which has always been my favorite among her novels, revealed some structural flaws this time around: I really don’t think she brings Shevek’s story to as successful a conclusion as it deserves. The Dispossessed would have been better as a longer and more sweeping book, something more Tolstoyan in scope, perhaps with more of the history of the Odonian movement — but then, Le Guin really doesn’t do Tolstoyan sweep. A shame, in a way, given that so many of her themes invite it. (I wonder if Virginia Woolf’s famous comment in A Room of One’s Own that women’s books are likely to be shorter than those of men is relevant here?) By contrast, on this reading The Left Hand of Darkness struck me as a genuine masterpiece, perfectly calibrated and balanced, and even more moving than I had remembered.

In both books, Le Guin is great on sexual politics, in several senses of that phrase: she shows the ways that the political order is shaped by sexual experience, and sexual experience by the political order. (The former is primary in The Left Hand of Darkness, the latter in The Dispossessed.) I’m reminded that both books were written in the era of “The personal is the political”, and it shows — in important and useful ways.

Le Guin’s interest in showing how dimensions or facets of our experience that we like to keep separate, or at least to conceptualize separately, ceaselessly impinge on one another is a testimony to her moral realism, her unsentimental acknowledgment of what we Christians would call fallen human nature. There’s an important passage in The Dispossessed where Shevek’s friend Bedap argues that the very inequities of power that the Odonians fled when they colonized Anarres have subtly and quietly found their way back into the society. He illustrates this by referring to Sabul, an intellectually limited physicist who has been clever enough to build up his own little sphere of power, and is constantly thwarting Shevek’s work.

“You can’t crush ideas by suppressing them. You can only crush them by ignoring them. By refusing to think, refusing to change. And that’s precisely what our society is doing! Sabul uses you where he can, and where he can’t, he prevents you from publishing, from teaching, even from working. Right? In other words, he has power over you. Where does he get it from? Not from vested authority, there isn’t any. Not from intellectual excellence, he hasn’t any. He gets it from the innate cowardice of the average human mind. Public opinion! That’s the power structure he’s part of, and knows how to use. The unadmitted, inadmissible government that rules the Odonian society by stifling the individual mind.”

When I read this passage I think of “The Day Before the Revolution,” the companion story to The Dispossessed, in which Odo reflects on her own life’s work:

She had never feared or despised the city. It was her country. There would not be slums like this, if the Revolution prevailed. But there would be misery. There would always be misery, waste, cruelty. She had never pretended to be changing the human condition, to be Mama taking tragedy away from the children so they won’t hurt themselves. Anything but. So long as people were free to choose, if they chose to drink flybane and live in sewers, it was their business. Just so long as it wasn’t the business of Business, the source of profit and the means of power for other people.

Human nature is such that “misery, waste, cruelty” can never be eliminated. Thus The Dispossessed is not a utopia, even an “ambiguous utopia,” in the phrase that has gradually become the book’s more-or-less-official subtitle. For Le Guin, the question is whether we accept a social order that is effectively designed to exacerbate misery, waste, and cruelty, or whether we will choose one that makes domination more difficult for the Sabuls of the world. Either way there will be costs, and Le Guin isn’t shy about showing what they are. That’s why, for all its flaws, The Dispossessed is an essential book for our times.

Saturday, December 3, 2016

open letter to Adam Roberts on the Protocols of the Elders of the Internet

This started as a reply to a comment Adam made on my previous post. But then it underwent gigantism.

Adam, I see these matters a little differently than you do — let’s see if I can find out why. I’ll start with cars. I’d say that the main thing that makes it possible for there to be an enormous variety of automobiles is the road. A road is an immensely powerful platform — in this case literally a platform — because it is so simple. Anything that walks, runs, or rolls can use it, which causes problems sometimes, as when a cow wanders onto the street; but for the most part that openness to multiple uses makes it an indispensable technology. Even if your British automobile has its steering wheel on the wrong side, you can still drive it in France or Germany.

By contrast, railway lines are rather less useful because of the problem called break of gauge, which has in the past forced people to get off one train at a national boundary and get on another one that fits the gauge of the tracks in that country. (And of course for a period there was no standardization of gauge even in England.)

Notice that the lack of standardization only becomes a problem when you get beyond your locality — but that’s precisely what mechanical transportation is for: to take us away from our homes. The technology’s power creates its problems, which new technologies must often be devised to fix.

All of these difficulties are dramatically magnified when we get to the internet, which used to be called, in a significant nickname, the “information superhighway.” Here it seems to me that we have an ongoing struggle between the differentiation that arises from economic competition and the standardization that “platforms” are always looking for. PC makers want you to buy PCs, while Apple wants you to buy Macs, so they differentiate themselves from one another; but Microsoft just wants you to use Word and Excel and PowerPoint and so makes that software for both platforms. Though they’d prefer a world in which every computer ran Windows, they have enough of an interest in standardization to make cross-platform applications. And now they make web versions of their apps that you can use even on Linux.

They do that by employing protocols that were designed back in the era when there were few computers in the world but the ones that existed were made using a variety of architectures and a wide array of parts. When you email the Word file you created on your PC to a Mac user like me, you do so using those same protocols, and updated versions of the communications lines that were first laid out more than a century ago. So again: a tension, in purely technical terms, between variability and standardization.
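
To make the shared-protocol point concrete: here is a minimal sketch, in Python, of sending that Word file by email. The addresses, relay, and filename are hypothetical, but the principle is the one described above. SMTP carries the message and MIME packages the attachment, and neither protocol knows or cares what operating system sits at either end of the line.

```python
# Minimal sketch: the addresses, relay, and filename below are hypothetical.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "pc-user@example.com"        # hypothetical sender on a PC
msg["To"] = "mac-user@example.com"         # hypothetical recipient on a Mac
msg["Subject"] = "Draft chapter"
msg.set_content("Word file attached.")

with open("draft.docx", "rb") as f:        # the platform-specific artifact...
    msg.add_attachment(
        f.read(),
        maintype="application",
        subtype="vnd.openxmlformats-officedocument.wordprocessingml.document",
        filename="draft.docx",
    )                                      # ...wrapped in a platform-agnostic MIME envelope

with smtplib.SMTP("smtp.example.com", 587) as server:  # hypothetical relay
    server.starttls()                      # standard negotiation, the same on any OS
    server.send_message(msg)               # SMTP never asks what machine sent it
```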

This tension may be seen in other digital technologies as well. Board games all employ the same basic and highly flexible technological platform: paper and ink (with a bit of plastic or metal, perhaps, though those flourishes are unnecessary). But when it comes to video games, contra Roberts, everyone does not have an Xbox. My house is a PlayStation house. Which means that there are some games that we can’t play, and that’s the way the console makers like it: Microsoft wants us to choose Xbox and stick with it; Sony wants us to choose PlayStation and stick with it. However, the makers of video games don’t like seeing their markets artificially reduced in this way, and so, if they have the resources to do so, will make their games available on multiple platforms, and then will use those internet protocols mentioned above to enable players to play with and against each other.

But the most popular games will always be online ones, because almost anyone who has a computer and an internet connection can play them and interact with other players: like the makers of board games, their makers look for the broadest possible platform — but they also encourage us to look beyond the local, indeed to ignore locality when playing with others (players typically have no idea where their opponents and teammates are).

So digital technologies, like the mechanical transportation technologies mentioned earlier, are meant to transcend locality, to remove emplacement as a limitation on sociability. (You can typically only play board games with people who are in the same room with you — though it should be noted that there was a longstanding if now almost abandoned tradition of playing chess by mail.) But this can only be accomplished with the development of either (a) standardized platforms or (b) shared protocols designed to bridge the gaps created by platform variability. Thus Google wants to solve that problem you have sharing photos with your wife: you use the Android version of the Google Photos app, she uses the iOS version, and presto! Solution achieved.

So — still working this through, please bear with me — let’s look at Twitter in light of this analysis. Twitter may be understood as a platform-agnostic MMORPG which, like most other MMORPGs, relies on the standard set of internet protocols, and therefore exchanges data with everything else that uses those protocols. This means that while Roberts is once again wrong when he says that everyone is on Twitter — it has maybe 20% as many users as Facebook and 80% as many as Instagram — those larger platforms, and of course the great vastness of the open web, can be used to magnify the influence of anything tweeted. In that sense there are a great many people in the world who, despite not being on Twitter, are on Twitter. So while some people blame Trump’s skillful Twitter provocations for both his political success and the debasing of our political culture, and others place the blame on the fake-news-wholesaling of Facebook, in fact the two work together, along with Google’s algorithms. In these matters I’m a conspiracy theorist, and I blame the Protocols of the Elders of the Internet.

So, Adam, in your response you referred to “homogenization,” whereas I’ve been referring to “standardization.” At this point we have the standardization of practices without the homogenization of ideology — and that’s the source of all of our conflicts. The platforms that allow us to connect with like-minded people are equally open to people whose ideas we despise, and we have no reliable means of shutting them out; but the encoded, baked-in tendencies of Twitter as a platform are universally distributed, which means that whether you’re an SJW or an alt-righty, you’re probably going to respond to people you disagree with by instantaneous minimalist sneering. (The tendencies of early print culture were rather different, but produced a similar degree of hostility, which I've discussed in this post.)

I think, though, that this conflict between standardization and homogenization could be a temporary state of affairs, at least for people who rely on the Protocols, and that means most of us. That is, while standardization does not inevitably produce homogeneity, it certainly nudges everyone strongly in that direction. There’s no way that public opinion in the U.S. about same-sex marriage could have changed so quickly without social media. TV certainly had a significant influence, but social media are collectively a powerful force-and-speed multiplier for opinion alteration. And if you feel good about that, then you might consider how social media have also nudged tens of millions of Americans towards profound fear of immigrants.

So in this environment, majority opinions and opinions that are held very strongly by sizable minorities are going to be the chief beneficiaries. And that could lead ultimately to significantly increased homogeneity of opinion, a homogeneity that you only stand a chance of avoiding if you minimize your exposure to the Protocols. And here, Adam, it seems to me that your novel New Model Army is disturbingly relevant.

For much of the novel, the soldiers who fight for Pantegral are independent, free agents. When they fight, they fight according to, yes, protocols established and enforced by the software they all use — but they can stop fighting when they want to, they come and go. Indeed, this is one of the chief appeals to them of the New Model Army: it doesn’t own them. Or doesn’t at first; or doesn’t seem to. In the end the protocols prove to be more coercively powerful (or should I say more intensely desirable?) than they had ever expected. And then we have homogeneity indeed — on a truly gigantic scale.

Caveat lector, is what I’m saying.

Wednesday, November 30, 2016

the giant in the library

The technological history of modernity, as I conceive of it, is a story to be told in light of a theological anthropology. As what we now call modernity was emerging, in the sixteenth century, this connection was widely understood. Consider for instance the great letter that Rabelais’ giant Gargantua writes to his son Pantagruel when the latter is studying at the University of Paris. Gargantua first wants to impress upon his son how quickly and dramatically the human world, especially the world of learning, has changed:

And even though Grandgousier, my late father of grateful memory, devoted all his zeal towards having me progress towards every perfection and polite learning, and even though my toil and study did correspond very closely to his desire – indeed surpassed them – nevertheless, as you can well understand, those times were neither so opportune nor convenient for learning as they now are, and I never had an abundance of such tutors as you have. The times were still dark, redolent of the disaster and calamity of the Goths, who had brought all sound learning to destruction; but, by the goodness of God, light and dignity have been restored to literature during my lifetime: and I can see such an improvement that I would hardly be classed nowadays among the first form of little grammar-schoolboys, I who (not wrongly) was reputed the most learned of my century as a young man.

(I’m using the Penguin translation by M. A. Screech, not the old one I linked to above.) And this change is the product, in large part, of technology:

Now all disciplines have been brought back; languages have been restored: Greek – without which it is a disgrace that any man should call himself a scholar – Hebrew, Chaldaean, Latin; elegant and accurate books are now in use, printing having been invented in my lifetime through divine inspiration just as artillery, on the contrary, was invented through the prompting of the devil. The whole world is now full of erudite persons, full of very learned teachers and of the most ample libraries, such indeed that I hold that it was not as easy to study in the days of Plato, Cicero nor Papinian as it is now.

Note that technologies come to human beings as gifts (from God) and curses (from the Devil); it requires considerable discernment to tell the one from the other. The result is that human beings have had their powers augmented and extended in unprecedented ways, which is why, I think, Rabelais makes his characters giants: enormously powerful beings who lack full control over their powers and therefore stumble and trample through the world, with comical but also sometimes worrisome consequences.

But note how Gargantua draws his letter to a conclusion:

But since, according to Solomon, ‘Wisdom will not enter a soul which [deviseth] evil,’ and since ‘Science without conscience is but the ruination of the soul,’ you should serve, love and fear God, fixing all your thoughts and hopes in Him, and, by faith informed with charity, live conjoined to Him in such a way as never to be cut off from Him by sin. Beware of this world’s deceits. Give not your mind unto vanity, for this is a transitory life, but the word of God endureth for ever. Be of service to your neighbours and love them as yourself. Venerate your teachers. Flee the company of those whom you do not wish to resemble; and the gifts of grace which God has bestowed upon you receive you not in vain. Then once you know that you have acquired all there is to learn over there, come back to me so that I may see you and give you my blessing before I die.

The “science without conscience” line is probably a Latin adage playing on scientia and conscientia: as Peter Harrison explains, in the late medieval world Rabelais was educated in, scientia is primarily an intellectual virtue, the disciplined pursuit of systematic knowledge. The point of the adage, then, is that even that intellectual virtue can serve vice and “ruin the soul” if it is not governed by the greater virtues of faith, hope, and love. (Note also how the story of Prospero in The Tempest fits this template. The whole complex Renaissance discourse, and practice, of magic is all about these very matters.)

So I want to note three intersecting notions here: first, the dramatic augmentation, in the early-modern period, of human power by technology; second, the necessity of understanding the full potential of those new technologies both for good and for evil within the framework of a sound theological anthropology, an anthropology that parses the various interactions of intellect and will; and third, the unique ability of narrative art to embody and illustrate the coming together of technology and theological anthropology. These are the three key elements of the technological history of modernity, as I conceive it and hope (eventually) to narrate it.

The ways that narrative art pursues the interrelation of technology and the human is a pretty major theme of mine: see, for instance, here and here and here. (Note how that last piece connects to Rabelais.) It will be an even bigger theme in the future. Stay tuned for further developments — though probably not right away. I have books to finish….

Tuesday, November 29, 2016

is text our friend?

I'm not so sure about this argument by Hossein Derakhshan:

Before I went to prison, I blogged frequently on what I now call the open Web: it was decentralized, text-centered, and abundant with hyperlinks to source material and rich background. It nurtured varying opinions. It was related to the world of books.
Then for six years I got disconnected; when I left prison and came back online, I was confronted by a brave new world. Facebook and Twitter had replaced blogging and had made the Internet like TV: centralized and image-centered, with content embedded in pictures, without links.

Like TV it now increasingly entertains us, and even more so than television it amplifies our existing beliefs and habits. It makes us feel more than think, and it comforts more than challenges. The result is a deeply fragmented society, driven by emotions, and radicalized by lack of contact and challenge from outside.

Therefore, he concludes, "we need more text than videos in order to remain rational animals. Typography, as Postman describes, is in essence much more capable of communicating complex messages that provoke thinking. This means we should write and read more, link more often, and watch less television and fewer videos—and spend less time on Facebook, Instagram, and YouTube."

I don't think this is right. Much of the damage done to truth and charity in this past election was done with text. (It's worth noting that Donald Trump rarely uses images in his tweets.) And of all the major social media, the platform with the lowest levels of abuse, cruelty, and misinformation is clearly Instagram.

No: it's not the predominance of image over text that's hurting us. It's the use of platforms whose code architecture promotes novelty, instantaneous response, and the quick dissemination of lies.

Monday, November 28, 2016

modernity as temporal self-exile

In The Theological Origins of Modernity, Michael Allen Gillespie writes,

What then does it mean to be modern? As the term is used in everyday discourse, being modern means being fashionable, up to date, contemporary. This common usage actually captures a great deal of the truth of the matter, even if the deeper meaning and significance of this definition are seldom understood. In fact, it is one of the salient characteristics of modernity to focus on what is right in front of us and thus to overlook the deeper significance of our origins. What the common understanding points to, however, is the uncommon fact that, at its core, to think of oneself as modern is to define one’s being in terms of time. This is remarkable. In previous ages and other places, people have defined themselves in terms of their land or place, their race or ethnic group, their traditions or their gods, but not explicitly in terms of time. Of course, any self-understanding assumes some notion of time, but in all other cases the temporal moment has remained implicit. Ancient peoples located themselves in terms of a seminal event, the creation of the world, an exodus from bondage, a memorable victory, or the first Olympiad, to take only a few examples, but locating oneself temporally in any of these ways is different than defining oneself in terms of time. To be modern means to be “new,” to be an unprecedented event in the flow of time, a first beginning, something different than anything that has come before, a novel way of being in the world, ultimately not even a form of being but a form of becoming.

The notion that there is some indissoluble and definitive link between my identity and my moment accounts for some of the most characteristic rhetorical flourishes in our political debates: When people say that history is on their side, or ask how someone can hold Position X in the twenty-first century, or explain that they care about the things they do because of the generation they belong to, or insist that someone they don’t like acts the way he does because of the generation he belongs to, they’re assuming that link. But if time is so definitive, time is also a prison: we are bound to our moment and cannot think or live outside it.

And yet people who are so bound congratulate themselves on being emancipated from “their land or place, their race or ethnic group, their traditions or their gods.” They believe they are free, but in fact they have exchanged defining structures that can (and often do) offer security and meaning for a defining abstraction that can offer neither — a home for a prison. This helps to explain why people who believe they are emancipated nevertheless tend to seek, with an intensity born of unacknowledged nostalgia, compensatory stories set in fantastic realms where the longed-for structures are firmly in place. To be imprisoned-by-emancipation is the fate of those who define their being in terms of time. Modernity is thus temporal self-exile — though it may be other things as well.

Thursday, November 24, 2016

on not learning to code

For the past decade or more I’ve fiddled around with learning to code: when I first began I tried to learn some Perl, then later Python, then Ruby, then back to Python again. But I’ve never been able to stick with it for any significant period of time, and I think the chief reason is this: I still have no idea what I would ever do with any of those languages. I can’t imagine a use-case. By contrast, I’ve learned various markup languages — LaTeX, HTML, CSS — because, for someone who writes and presents what he writes to others in various venues, the uses of such tools are obvious. But most people don’t think of that kind of thing as real coding.

The article that most people quote when humanists ask whether they should learn to code is this one by Matt Kirschenbaum. Its subtitle is “Why humanities students should learn to program,” but I don’t think Kirschenbaum addresses that question directly, or with any degree of specificity.

He does describe some particular cases in which code literacy matters: “the student of contemporary literature interested in, say, electronic poetry or the art of the novel in the information age”; “the student interested in computer-assisted text analysis, who may need to create specialized programs that don’t yet exist”; “Procedural literacy, which starts with exercises like making a snowball, will be essential if humanities students are to understand virtual worlds as rhetorical and ideological spaces.” Fair enough. But what about those of us who aren’t studying virtual worlds or electronic poetry?

Typically, we see the value of a particular skill — driving a car; cooking; playing a musical instrument — and because we perceive the value we go about acquiring that skill. But sometimes the arrow may point in the other direction: only when you acquire the skill do you perceive its uses. I have a suspicion that if I got really good at writing Python I would find uses for it. But because I can’t imagine what those uses could be, I have trouble sustaining the discipline needed to learn it.
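
For concreteness, here is a guess at what such a use might eventually look like: a minimal sketch, with a hypothetical filename, that counts a draft’s most frequent words so a writer can catch his own verbal tics. Nothing here is a claim about what I actually do; it is just the kind of small writerly task a scripting language is suited to.

```python
# Minimal sketch of a writerly use for Python. "draft.txt" is a hypothetical
# filename standing in for any manuscript.
import re
from collections import Counter

with open("draft.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())    # crude tokenization

for word, count in Counter(words).most_common(20):       # the twenty most frequent words
    print(f"{word:<15}{count}")
```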

Tuesday, November 22, 2016

Bethany

In yesterday's post on Kim Stanley Robinson's Mars trilogy I quoted Adam Roberts commenting on the “niceness” of KSR's characters, and it might be worth noting that in his own fiction Adam rarely gives us nice characters. Sometimes they're decent enough people, though in exceptionally challenging circumstances, the kinds of circumstances that make decent people do some less-than-decent things. In other cases Adam's characters are rather nasty, or seriously messed-up in one way or another — perverted, one might say, sometimes in the commonplace sense but always in the etymological sense: torqued from the true, twisted towards eccentric paths.

You know what else is kinda perverted? A writer dedicating a work to a friend who had read it in draft and had some reservations about it. A few months ago I read a draft of a novella Adam had written and gave him some feedback, and now I see he has published it as an e-book. I bought it, eager to see what he had done with it, and … well, more on what he has done with it in a moment, but I got to the end and saw this:

Bethany is dedicated to my friend Alan Jacobs, who read and disliked an earlier draft of it. I have taken out some of the things to which he, rightly, I think, objected; but much of what he disliked, I fear, remains.

Now isn't that kind of … perverted? I ask you.

Speaking of perversion, the protagonist of Bethany is a deeply disfigured person, and one of the things the story encourages us to do is to think about why that is — how people get that way — and that of course is a question that leads to many others. Bethany is a theological fable: like Voltaire's Candide with the jokes removed and with no answers given, at the end or anywhere else, to its questions. (Adam is often a really funny writer but this isn't a funny book.)

Adam says that his story is a kind of dialogue with Nabokov's Invitation to a Beheading, a book I haven't read, so I'm probably missing a good deal. But my own thoughts about Bethany start with one of its epigraphs, taken from a writer Nabokov quotes in that novel, except that Nabokov or one of his characters made up the writer, so I guess the epigraph was written by Nabokov. Anyway, here it is:

Human philosophers ponder what they call theodicy, which is to say, this question: “if God is love how can He be so cruel to us”? But this is quite the wrong way about—quite the wrong way to think of the matter. What we need, and most pressingly, is an anthrodicy. After all, it was men who tortured God to death on the cross, not the reverse. God joys in the life of men. It was a man who gloated God is dead. How may we descend into the chasm of the why of all this?

One might begin by saying that Jesus Christ is himself anthrodicy: he justifies the ways of Man to God, he who “is the propitiation for our sins, and not for ours only, but also for the sins of the whole world”. But then, since “it is he who has made us, and not we ourselves”, every anthrodicy circles back to a theodicy, does it not?

Suppose someone were to think along these lines:

The idea took root in Todd’s mind: to hunt turkeys was less of an achievement than hunting boar, which was less than hunting bears, which was less than hunting lions, which was less than hunting cunning and armed men. The logic had a kind of inescapability to it. The greatest hunt would be to hunt the greatest creature, the most dangerous prey, to face the biggest risk and survive it. To hunt animals was one thing; and to hunt human beings another; but to hunt and kill God was the grandest destiny of any individual. Todd took another swig of beer, and the idea set, as crystals sometimes solidify out of solution in one magnificent and swift transition. That was why God had established the universe the way He had—he had looked at himself and been displeased with his invulnerability, and so he had incarnated himself as a creature that could be killed. No, more than that: as a creature that had to be killed in order for the world to be saved. Why had he done this? Because God understood the deep nature of man, that man is defined by his nature as a hunter. And so God had set the hunt.

Is a person thinking along these lines distinctively depraved? Or is he, by contrast, connecting in some meaningful way to the notorious obscurity of God's purposes and the means by which those purposes are realized? What happens when you try to join, as Todd does, paleoneurology and the doctrine of God? “What a death were it then to see God die?” asked John Donne — but what kind of death (or life) would it be to kill God, the God who made us hunters and then ranged himself among our prey?

You'd have to be some kind of pervert to ask questions like that.

Monday, November 21, 2016

the asymptote of utopia

I just re-read Kim Stanley Robinson’s magnificent Mars trilogy — about which I hope to teach a class someday — and every time I go back to those books I find myself responding differently, and to different elements of the story. Which is a sign of how good they are, I think.

Some have described the Mars trilogy as a kind of utopia, but I don't think that’s right. Even at the end Mars remains a world with problems, though it must be said that most of them come from Earth. Mars itself has become a pretty stable social order, and even the book’s strongest opponent of how it got that way thinks, in the last paragraph of the final volume, that “Nowhere on this world were people killing each other, nowhere were they desperate for shelter or food, nowhere were they scared for their kids. There was that to be said.” There’s no guarantee that the social order will remain so beneficent, but I think KSR wants us to believe that as time goes by stable harmony becomes more and more strongly established, more difficult to displace. Thus one of his minor characters, Charlotte Dorsa Brevia, is a “metahistorian” who argues for a

broad general movement in history which commentators called her Big Seesaw, a movement from the deep residuals of the dominance hierarchies of our primate ancestors on the savanna, toward the very slow, uncertain, difficult, unpredetermined, free emergence of a pure harmony and equality which would then characterize the very truest democracy. Both of these long-term clashing elements had always existed, Charlotte maintained, creating the big seesaw, with the balance between them slowly and irregularly shifting, over all human history: dominance hierarchies had underlain every system ever realized so far, but at the same time democratic values had been always a hope and a goal, expressed in every primate’s sense of self, and resentment of hierarchies that after all had to be imposed, by force. And so as the seesaw of this meta-metahistory had shifted balance over the centuries, the noticeably imperfect attempts to institute democracy had slowly gained power.

This increasingly stable harmony happens, I think it’s clear, primarily because the First Hundred who colonized Mars are almost all scientists, and, as scientists, take a rational, empirical approach to solving political problems. That is, the initial conditions of human habitation on Mars are rooted in the practices of science — which is one of the things that leads, much later on, to the first President of Mars being an engineer, which is to say, a pragmatic problem-solver. The politics of solutionism is the best politics, it appears.

However: it’s noteworthy that the people who do the most to shape the ultimate formation of Mars — political, social, and physical — are three characters who are almost invisible in the story, interacting very little with the story’s protagonists (who happen to be the most famous, not just on Mars but also on Earth). Vlad Taneev, Ursula Kohl, and Marina Tokareva work together on a variety of projects: Vlad and Ursula develop the longevity treatments that enable humans to dramatically increase their lifespans; Vlad and Marina work on “areobotany,” that is, adapting plants to the Martian environment; and the three of them together develop an “eco-economics,” that is, a political economy keyed to ecological health — a kind of systematically worked-out version of what KSR refers to in other contexts as the flourishing-of-the-land ethos of Aldo Leopold.

We hear almost nothing directly from this triumvirate during the course of the story, because they basically stay in their lab and work all the time. This is sometimes frustrating for the story’s protagonists, who are always directly involved in political events, risking life and limb, giving up their scientific projects in order to serve the common good (or, in the case of Sax Russell, applying technological solutions directly, and sometimes recklessly, to political and social problems). But while KSR makes it clear to us that the protagonists’ work is supremely valuable, he makes it equally clear that they could achieve far less without the isolated, ascetic, constant labor of Vlad, Ursula, and Marina.

So: scientists in the lab + scientists in the public arena = if not quite Utopia, something asymptotically approaching it. A Big Seesaw, yes, but the amplitude of its oscillations grows ever smaller, almost to the point, as the story comes to an end, that they’re impossible to discern. In short, an epistocracy. It’s not a simplistic model, like Neil deGrasse Tyson’s proposed Rationalia: KSR understands the massive complexities of human interaction, and one of the best elements of the book is his portrayal of how the paradigmatically socially inept lab-rat Sax Russell comes to understand them as well. But the story really does display a great deal of confidence that if we put the scientists in charge things are going to get much better.

In his superb history of science fiction — now in a fancy new second edition — Adam Roberts writes,

With a few exceptions all [KSR’s] characters are decent human beings, with functional quantities of empathy and a general desire to make things work for the whole. Robinson’s position seems to be that, statistical outliers aside, we all basically want to get along, to not hurt other people, to live in balance.... That niceness — the capacity for collective work towards a common goal, the tendency not to oppress or exploit — is common to almost all the characters Robinson has written. His creations almost always lack inner cruelty, or mere unmotivated spitefulness, which may be a good thing. I’m not saying he’s wrong about human nature, either — although it is more my wish than my belief. What it does mean is that Robinson writes novels that tend to the asymptote of utopia, without actually attempting to represent that impossible goal.

(When I decided to insert that passage in this post, I didn't remember that Adam had used the language of the asymptote, which I also employ above. Great minds do think alike, after all. I shall acknowledge the probably unconscious influence of Adam’s thinking on my own in my title.)

So I have two questions:

1) Are natural scientists the true epistoi?

2) How might the case for epistocracy of any kind be altered if we take the position that human beings are not nearly as nice as KSR thinks they are?

Koya Bound

On Kickstarter, I supported Craig Mod’s Koya Bound project, which preserves in book form a record of an eight-day walk along the Kumano Kodo pilgrimage trail in Japan. The book has arrived and it’s absolutely gorgeous.

Friday, November 18, 2016

vicious circles: identity and anxiety

Last year I published an essay here at The New Atlantis called “Miss Marple and the Problem of Modern Identity.” Would you be so kind as to click that link and read the first few paragraphs? Feel free to stop at this sentence: “All you know about them is what they say of themselves — this is, in a nutshell, one of the core problems of modernity.”

In that essay my emphasis was on how we think of the people around us: how, if all we know about people is what they say of themselves, if there are not communal bonds that help us to situate those among whom we live, we can struggle to perceive them as neighbors and instead are tempted to consign them to the category of other. But let’s turn around and look at this another way: in the absence of those strong communal bonds, how do I know who I am? In such a context, identity becomes performative in multiple ways. I perform my self-understanding before others through a variety of display behaviors: how I dress, how I speak, the jobs I take, etc. And much of this performative work today is done through social media: what music or literature or television or movies I talk about and link to, what political causes I support.

In the aftermath of the Presidential election, a handful of explanatory matrices have come to dominate, and one of the primary ones involves decrying the influence of identity politics. This is basically what I’ve said, once, twice, who knows how many times. I’ve seen two further articulations of the same stance just today, both in the New York Times, one from David Brooks and one from Mark Lilla. Most versions of this case, including my own, tend to emphasize the uncomprehending hostility that people in one ideological camp have for people in any other, and the hostility is certainly there, but I wonder if I haven’t missed something — something that underlies the hostility: anxiety.

Think about those people in your Twitter or Facebook feed who post and repost the same beliefs, the same talking points, over and over and over again. Why? What’s the point in this seemingly mindless repetition of the same damned ideas? I am coming to suspect that I have not taken seriously enough the felt limitations of social media as venues for display behavior.

Sorry for the ugliness of the phrase, but that’s as concisely and clearly as I can put the point. Think about it this way: if you spend your days fully embodied in the life of a community, people know who you are. They may like you, they may dislike you, but they know. Embodied community has a lot of bandwidth; communication is enabled through multiple information streams. Consider, in comparison, how thin and weak the interpersonal bandwidth of social media is. You can’t even shout to be heard; every tweet is as loud as every other tweet, and as for Facebook posts, you don't even know whether they’re showing up in your friends’ timelines. And as Information Theory 101 teaches us, when there’s some doubt about whether a message is getting through, we employ redundancy.
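
To make that Information Theory 101 point concrete, here is a minimal sketch; the message and the bit-flip probability are invented for illustration, not a claim about any real medium. When the channel is noisy, you repeat yourself and let the receiver take a majority vote.

```python
# Minimal sketch of redundancy over an unreliable channel (illustrative numbers only).
import random

def noisy_channel(bits, flip_prob=0.2):
    """Flip each bit with probability flip_prob, as an unreliable medium would."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def send_with_redundancy(bits, copies=5):
    """Transmit several noisy copies and majority-vote each position."""
    received = [noisy_channel(bits) for _ in range(copies)]
    return [int(sum(column) > copies / 2) for column in zip(*received)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
print("sent:     ", message)
print("recovered:", send_with_redundancy(message))
```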

So I’m looking at a causal chain like this:

(a) weak social embodiment leads to

(b) a compensatory investment in social media as a way of establishing and maintaining identity — proving to people that you are indeed who you say you are — but

(c) the intrinsic bandwidth problems of social media lead to anxiety that those media aren’t doing their identity-proclaiming work,

(d) which in turn leads to ceaseless and repetitious performances of identity-marking,

(e) which reads to others as mindless, head-butting-against-the-wall hostility towards any deviation from The Approved Positions, and

(f) which can over time, thanks to the mind-coarsening effects of such repetition even on people who are originally repeating themselves not out of anger but out of anxiety, produce the very hostility it was perceived to be, leading finally to

(g) a fractured republic.

So read Yuval Levin’s book, friends, and face the simple but daunting fact that if we don't work hard to repair the mediating institutions that give people a sense of security and belonging, people will turn instead to social media to do work that those technologies just aren’t equipped to do.

two kinds of world-building

The builders of fictional worlds, in science fiction and fantasy, come in two chief types, the meticulous and the speculative. The meticulous world-builder delights us by thoroughness of invention, the speculative by surprisingness. For the former, and for readers of the former, much of the pleasure of a fictional world arises from the working out of details; for the latter, and for the latter’s readers, what especially delights is the quirkiness or oddity of the invention, and the “what-if-the-world-were-like-this” questions so aroused.

The master of meticulous world-building in fantasy is of course Tolkien; in science fiction it’s Kim Stanley Robinson. Speculative world-building is more common, because it doesn’t require so much detail: there is no genuinely meticulous world-building at less than 750 pages or so, I’d think. But that doesn’t mean that the speculative type is easier to do well: it requires an instinct for the telling detail, the most distinctive and provocative ways in which a given world differs from our own, and an equally shrewd instinct for what doesn’t change. Keith Roberts’ Pavane, Ursula K. Le Guin’s The Left Hand of Darkness, Hope Mirrlees’ Lud-in-the-Mist, all strike me as especially wonderful examples of speculative world-building.

There are bad ways to read both kinds of world-building. You cannot reasonably expect a meticulous work to be lively all the time; you cannot reasonably expect a speculative work to be perfectly consistent in all its details. But some readers have unreasonable expectations. The fair-minded reader of the meticulous text will deal graciously with longueurs; the fair-minded reader of the speculative text will smile forgivingly at inconsistencies. The masters of the meticulous are also skilled at limiting dreariness; the masters of the speculative avoid capriciousness. (Aside: no one has ever handled the need to fill in the details of an imagined world more brilliantly and charmingly than Susanna Clarke, in the footnotes of Jonathan Strange and Mr Norrell.)

Both kinds of storytelling are adaptable to different media, but the requirements of meticulousness incline such makers to books rather than movies — though the rise of the long-form television series is pretty well-suited to meticulousness. The speculative is perhaps more dependent on style than the meticulous, but that style need not be linguistic: it can also be visual.

All these thoughts are prompted by Fantastic Beasts and Where to Find Them, which I saw last night and absolutely loved. I could say a good deal about various elements of the film, but in this post I just want to focus on the world-building, which was, I think, superb. I really do believe that if I hadn’t known that J. K. Rowling wrote the screenplay I would have guessed, because it bears the marks of her particular gift, which is the speculative.

Because there are seven Harry Potter books, and now a series of ancillary media — the little books of Fantastic Beasts and Quidditch Through the Ages, the Pottermore website, a play, this new movie and the four (!) sequels to come — it’s natural to think of Rowling as one of the meticulous world-builders, spooling out stories from a secure repository of well-worked-out details. In fact the Potterverse isn't very meticulous at all, and has ten thousand holes in its fabric, as readers have pointed out from the very beginning.

No, it’s the curious, provocative, stimulating detail that Rowling specializes in, and here that gift is manifested best in Newt Scamander’s suitcase, which wonderfully extends an idea Potter readers learned about first, I think, with the Ford Anglia that Mr. Weasley enchanted in Harry Potter and the Chamber of Secrets: in the magical world, objects can be bigger on the inside than they are on the outside. In Newt’s suitcase the creatures that others fear but that he loves and wants to protect (from “the most vicious creatures on the planet: humans”) find safety and affection. And, in Eddie Redmayne’s wonderful portrayal of Newt, we see an awkward, mumbling, socially uncomfortable man transformed, when he enters the little Ark he has made, into an expansive and comfortable figure, a skinny ginger Noah. Watching all this just made me happy.

I’m not convinced that the four movies to come will work as well as this first one did, and I’m especially nervous about the Grindelwald story that will clearly be a prime plot driver. But Newt Scamander proves, to my surprise, to be an utterly captivating protagonist, and I will eagerly await his further adventures — without expecting the strict consistency and precision of world-making that the meticulous craftsmen offer.

Wednesday, November 16, 2016

lessons learned

Maybe I should have been writing about Facebook instead of Twitter, but never mind, because my friend Brian Phillips has done it for me. But along the way Brian writes,

What had really happened was that the left had become sensitized to the ways in which conventional moral language tended to shore up existing privilege and power, and had embarked on a critique of this tendency that the right interpreted, with some justification, as an attack on the very concept of meaning. But what would we have without meaning? Isolation and chaos, conditions in which it would presumably be easy to raise the capital gains tax. So if the left found itself in the strange position of supporting science on the one hand while insisting that truth was a cultural construct on the other, the right found itself in the even stranger position of investing in meaning even as it dissociated itself from fact. Evolution was a myth and climate change was a hoax, but philosophers still had access to objective truth, provided they had worn curly wigs and died enough centuries ago.

I don't know when it happened. Maybe with intelligent design? Maybe Colin Powell's WMD testimony? Maybe it was already under way, with Fox News and Rush Limbaugh? But at some point, the American right — starting with the non-alt version, the one before the one we just elected — took another look at the postmodern critique of the linguistic basis of virtue and tumbled absolutely spinning into love with it. It turned out that postmodernism also contained the seeds of a system that would shore up existing privilege and power. All you had to do was take the insights of subversion and repurpose them for the needs of authority.

As you might imagine, I don't agree with all of this, but I agree with a lot of it. The academic left interrogated the discourses of “truth” and “reason,” revealed the aporias thereof, exposed the inner workings of the power-knowledge regime, all in the name of social justice. I remember vividly Andrew Ross’s insistence, twenty-five years ago, that it was actually perfectly appropriate and consistent for a would-be revolutionary like him to have a tenured position at Princeton: “I teach in the Ivy League in order to have direct access to the minds of the children of the ruling classes.” It turns out that the children of the ruling classes learned their lessons well, so when they inherited positions in their fathers’ law firms they had some extra, and very useful, weapons in their rhetorical armory.

In precisely the same way, when, somewhat later, academic leftists preached that race and gender were the determinative categories of social analysis, members of the future alt-right were slouching in the back rows of their classrooms, baseball caps pulled down over their eyes, making no external motions but in their dark little hearts twitching with fervent agreement.

Back when people thought that Andrew Ross mattered, I participated in many conversations at Wheaton College about postmodernism, and had to hear many colleagues chortle that things were going to be better for Christians now because “we have a level playing field.” No longer did we have to fear being brought before the bar of Rational Evidence, that hanging judge of the Enlightenment who had sent so many believers to the gallows! You have your constructs and we have our constructs, and who’s to say which are better, right? O brave new world that hath such a sociology of knowledge in it!

To which my reply was always: “Now when they reject you and your work they don't have to defend their decision with an argument.” I knew because I was shopping a book around then, and heard from one peer reviewer that it was well-researched and well-written but was also characterized by “underlying evangelical theological propositions.” Rejected without further explanation. As Brian rightly says in his post, "An America where we are all entitled to our own facts is a country where the only difference between cruelty and justice is branding."

Sauce for the goose, sauce for the gander. It seems that we’ve all now learned the lessons that the academic left taught, and how’s that working out for us? The alt-right/Trumpistas are Caliban to the academic left’s Prospero: “You taught me language, and my profit on’t is, I know how to curse.”

Monday, November 14, 2016

a dissertation on monocausal parataxis in social media

We don't usually do politics here at Text Patterns, but I sort of sent us down that road in my last post. So: I think Twitter’s atomizing, paratactic tendency, its constant pressure to squeeze thoughts into tiny boxes, exacerbates and intensifies a common intellectual vice: monocausalism.

So far there have been, according to my highly scientific estimate, fifty bazillion tweets beginning “Trump won because” or “Hillary lost because” — and providing the answer within 140 characters. But the more you read about the election the more you will realize that many, many factors produced this result. Let’s just look at one of several ways we could think about what just happened: in 2012 almost 66 million people voted for Barack Obama, while in 2016 Hillary Clinton received only around 61 million votes. Why the decline? Wikileaks? James Comey’s on-and-off investigations based on Wikileaks? Long-standing hatred of the Clintons? Misogyny? Actual (as opposed to perceived) corruption on the part of Hillary personally and the Clinton Foundation institutionally? A sense on the part of minority voters that Hillary does not share their concerns? A sense on the part of working-class voters that Hillary is contemptuous of them? Extremely poor strategy by the Clinton campaign, focusing their money and energy in the wrong places? Third-party candidates that siphoned away votes? Hillary’s unattractive personality, especially in comparison to Barack Obama? An Electoral College system weighted in favor of the places where Hillary was weakest? Intimidation of voters by Trump supporters?

The answer is: All of the above, and more. Every factor listed played a role in the outcome of this election. And we haven't even brought Donald Trump into our deliberations. The outcome of this election is a classic case of causal overdetermination. But Twitter doesn't do overdetermination well. Twitter lends itself to monocausal parataxis: you pick your preferred explanation, articulate it in the punchiest way you can, and then retweet everyone who sees it your way. And ... and ... and....

People used to complain about politicians and their sound bites. Twitter is the sound bite in the age of infinite digital amplification. Combine that radical oversimplification of every event and every idea with the constant inflammation of emotion and you have a real mess. The other social media — especially Facebook — have their own problems, but Twitter, while it isn't the worst thing that has happened to American politics, may be the worst thing that has happened to American political culture.

Sunday, November 13, 2016

against tweetstorms

A few weeks ago I took to Twitter to unleash a tweetstorm against tweetstorms. (I was in an ironic mood. Also, if you’re wondering what a tweetstorm is, you can see a few by Marc Andreessen, thought by some to be the originator if not the master of the form, here.) Now I want to make that argument more properly. Hang on tight, we’re getting into the Wayback Machine for one of my geekiest posts ever!

One of the most distinctive characteristics of biblical Hebrew is parataxis, which connects clauses almost wholly by coordinating conjunctions — “and” and its cognates. Without getting too technical here, I want to acknowledge that there is disagreement among Hebrew scholars today about whether the Hebrew word waw should always be translated as “and”: some believe that it has different shades of meaning in different contexts, and that translators should strive to bring those shades out. But in the King James translation, waw is always rendered as “and,” which gives to biblical storytelling a very distinctive rhythm, and also contributes to what Erich Auerbach famously called its “reticence.”

A classic example is the Akedah, the story of the binding of Isaac:

And Abraham took the wood of the burnt offering, and laid it upon Isaac his son; and he took the fire in his hand, and a knife; and they went both of them together. And Isaac spake unto Abraham his father, and said, My father: and he said, Here am I, my son. And he said, Behold the fire and the wood: but where is the lamb for a burnt offering? And Abraham said, My son, God will provide himself a lamb for a burnt offering: so they went both of them together. And they came to the place which God had told him of; and Abraham built an altar there, and laid the wood in order, and bound Isaac his son, and laid him on the altar upon the wood. And Abraham stretched forth his hand, and took the knife to slay his son. And the angel of the Lord called unto him out of heaven, and said, Abraham, Abraham: and he said, Here am I. And he said, Lay not thine hand upon the lad, neither do thou any thing unto him: for now I know that thou fearest God, seeing thou hast not withheld thy son, thine only son from me. And Abraham lifted up his eyes, and looked, and behold behind him a ram caught in a thicket by his horns: and Abraham went and took the ram, and offered him up for a burnt offering in the stead of his son. And Abraham called the name of that place Jehovahjireh: as it is said to this day, In the mount of the Lord it shall be seen.

As Kierkegaard famously showed in Fear and Trembling, the story fairly cries out for elucidation: What was Abraham thinking? What did he feel? But all we get is this unembellished, uninflected set of steps: And ... And ... And....

Parataxis is perfectly suited to the chief genres of the Hebrew Bible — narrative, law, poetry, prophecy — or, maybe better, the genres of the Hebrew Bible are what they are because of the paratactic tendencies of the Hebrew language? Hard to say. In any case, in the New Testament, as long as the genres are carried over from the Hebrew Bible, the parataxis is there also, even though now in Greek rather than Hebrew:

When he was come down from the mountain, great multitudes followed him. And, behold, there came a leper and worshipped him, saying, Lord, if thou wilt, thou canst make me clean. And Jesus put forth his hand, and touched him, saying, I will; be thou clean. And immediately his leprosy was cleansed. And Jesus saith unto him, See thou tell no man; but go thy way, shew thyself to the priest, and offer the gift that Moses commanded, for a testimony unto them. And when Jesus was entered into Capernaum, there came unto him a centurion, beseeching him, And saying, Lord, my servant lieth at home sick of the palsy, grievously tormented. And Jesus saith unto him, I will come and heal him.

It’s when we get to the letters of Paul that we begin to suspect that God knew what he was doing in bringing the Christian Gospel to the world at a moment and in a place where the lingua franca was Greek. For Greek lends itself to complexities of conjunction and disjunction, all manner of relations between clause and clause, idea and idea. (Sometimes Paul gets himself tangled in those complexities: try reading Ephesians 1, for instance, in any translation, and see if you can diagram those sentences.) If instead of narrating or legislating or poetizing or prophesying you need to be engaged in dialectical exposition and argumentation, Greek is the language you want. Greek gives you parataxis if you need it, but syntaxis also. And the more complex your argument is, the more you need that syntaxis.

Hey, wasn’t this supposed to be a post about Twitter and tweetstorms? Yes. My point is: Twitter enforces parataxis. I don’t mean that it’s impossible to make an argument on Twitter, only that everything about the platform militates against it, and very few people have the commitment or the resourcefulness to push back. So a typical tweetstorm, even when it’s trying to make a case for something, even when it needs to be an argument and its author wants it to be an argument, isn’t an argument: it’s a series of disconnected assertions, effectively no more than And ... And ... And.... I think this is enforced not primarily by the 140-character limit itself but by the tweeter’s awareness that each tweet will be read individually, and retweeted individually, losing any context. So the tweeter tries to make each tweet as self-contained as possible, forgoing syntactic relations and complications.

Moreover, even a lengthy tweetstorm, by tweetstorm standards, isn’t long enough to develop an argument properly. (You’d need to use seven or eight tweets just for my previous paragraph, depending on your strategy for connecting the tweets. This whole post? Maybe 50 tweets. Who does 50-tweet storms?)
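
To make that arithmetic concrete, here is a minimal sketch of my own in Python of what chopping prose into 140-character tweets looks like; it is not any actual tweetstorm tool, and the "n/ " numbering convention and the four characters reserved for it are simply assumptions for the sake of illustration.

    # A minimal sketch of tweetstorm atomization: split a paragraph into
    # numbered chunks that fit the old 140-character limit. Not a real tool;
    # the "n/ " numbering and the four-character allowance for it are assumptions.
    def tweetstorm(text, limit=140):
        chunks, current = [], ""
        for word in text.split():
            candidate = (current + " " + word).strip()
            if current and len(candidate) + 4 > limit:  # leave room for "n/ "
                chunks.append(current)
                current = word
            else:
                current = candidate
        if current:
            chunks.append(current)
        return [f"{i}/ {chunk}" for i, chunk in enumerate(chunks, start=1)]

    paragraph = ("Twitter enforces parataxis. Everything about the platform "
                 "militates against sustained argument, and each tweet has to "
                 "stand on its own, stripped of its syntactic relations.")
    for tweet in tweetstorm(paragraph):
        print(tweet)

Each chunk necessarily comes out as a freestanding assertion, which is exactly the paratactic problem.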

So what does this atomization of thought remind me of? Biblical proof-texting, that’s what. The founders of Twitter are to our discursive culture what Robert Estienne — the guy who divided the Bible up into verses — is to biblical interpretation. Is it possible, when faced with Paul’s letter to the Ephesians divided into verses, to keep clearly in mind the larger dialectical structure of his exposition? Sure. But it’s very hard, as generations of Christians who think that they can settle an argument by quoting a verse, a verse that might not even be a complete sentence, have demonstrated to us all. Becoming habituated to tweet-sized chunks of thought is damaging to one's grasp of theology and social issues alike.

All this is why I think people who have interesting and even slightly complicated things to say should get off Twitter and get onto a blog, or Medium, or something — any venue that allows extended prose sequences and therefore full-blown syntaxis. Of course, in other contexts, Twitter — with its enforcement of linguistic and argumentative simplicity, its encouragement of unsequenced and disconnected thoughts — might be just the thing you need. If you want to be President of the United States, for example.

Stay tuned for a follow-up to this post.

Friday, November 11, 2016

revenge of the Morlocks

Well, now, this is interesting:

And herein lies the seeds of speciation: a difference in a trait that genes influence – intelligence – affecting reproduction patterns. Coupled with policies of exclusion – building a wall, breaking up families to deport undocumented immigrants, targeting specific religious groups unified by their ancestry – the population sorting that may begin over the next four years could, with time and if sustained, alter the segregation of gene variants in a way that sets us on a path toward an unstoppable divergence.

I hope that as a nation we can accept, heal, reach out, and “be on the same team” as President Obama eeked out yesterday. But right now, the gash seems too deep to mend anytime soon, especially if the aforementioned actions of exclusion and discrimination actually emerge from the current state of shock and division. Here is the best description I’ve read of the sad basis of the Trump campaign and soon-to-be administration.

So much things to say, to quote Bob Marley, but I’ll try to restrain myself. And before I go further I will note, just for the record, that Trump really is scientifically illiterate and I fear that his advisors will be as well, and that that will surely lead to problems, though perhaps not the worst ones we will face from a Trump administration. Onward:

Ricki Lewis thinks that we may be headed towards “unstoppable genetic divergence,” that Donald Trump will be the chief progenitor of this separation, and that the primary criterion by which this separation will be effected is intelligence. All the smart people will be on one side of the Great Wall of Trump, and all the stupid people on the other side.

And yet all this will be accomplished by the will of the Stupids. Which suggests, on Lewis’s own account of things, that beyond a certain point — let’s call it the Trump Threshold — intelligence may not be adaptive. Indeed it may be maladaptive, as suggested in Idiocracy. So if you want to belong to the species that will succeed in the long run, you should probably start trying to get dumber now.

Lewis takes the title of her post, “Donald Trump and the New Morlock Nation,” from H. G. Wells’s The Time Machine, a book she hasn’t read. (But she saw the movie. And looked up the book’s Wikipedia page.) I, however, have read the book, and when I read Lewis’s post, one passage from it came to mind:

The Upper-world people might once have been the favoured aristocracy, and the Morlocks their mechanical servants: but that had long since passed away. The two species that had resulted from the evolution of man were sliding down towards, or had already arrived at, an altogether new relationship. The Eloi, like the Carolingian kings, had decayed to a mere beautiful futility. They still possessed the earth on sufferance: since the Morlocks, subterranean for innumerable generations, had come at last to find the daylit surface intolerable. And the Morlocks made their garments, I inferred, and maintained them in their habitual needs, perhaps through the survival of an old habit of service. They did it as a standing horse paws with his foot, or as a man enjoys killing animals in sport: because ancient and departed necessities had impressed it on the organism. But, clearly, the old order was already in part reversed. The Nemesis of the delicate ones was creeping on apace. Ages ago, thousands of generations ago, man had thrust his brother man out of the ease and the sunshine. And now that brother was coming back changed! Already the Eloi had begun to learn one old lesson anew. They were becoming reacquainted with Fear.

Thursday, November 10, 2016

the problem with experts

Alastair Roberts writes,

Trump’s argument against vaccines works because people no longer trust the authorities — the governments, the scientists, the medical professionals, etc. — who tell them that they are safe. The biased mainstream media, the liberal elite, lying politicians, activist judges, crony capitalists, politically correct academics, the conspiring government, scientists bought off by big business, hypocritical religious leaders: all are radically corrupt, motivated by self-interest, and radically untrustworthy. In such a situation, people’s realm of trust can become more tribal in character, focusing upon people of their own class, background, friendship groups, family, locality, ethnicity, nationality, religion, etc. and deeply suspicious of and antagonistic towards people who do not belong to those groups. This collapse of trust hasn’t occurred because the general public has suddenly become expert in the science behind vaccinations and discovered the authorities’ claims concerning vaccines to be scientifically inaccurate. The trust that has been lost was never directed primarily at such scientific claims. Rather, it was a trust in the persons and agencies that presented us with them.

I think this is all quite right, but I think there’s another important element to the story: the creation, largely through radio and television, of a distinctive class. When a complex or otherwise disputed issue arises, the media look for informants, and those informants they call “experts.” The problem is that the term conflates several varieties of expertise and non-expertise. An “expert in infectious diseases” from the Centers for Disease Control is followed by an “expert in the paleo diet,” who is then succeeded by an “expert in political polling,” and then the hour is wrapped up by a visit from a “relationships expert.”

Some of these people don't know anything about anything. Others have deep learning in rigorously maintained fields of knowledge. But they have all been folded into the class of “expert.” And so when some of them are proved to be empty heads in empty suits, the reputation of the whole class is compromised. And that’s how you get a situation in which all experts are distrusted — in which their very designation as possessing expertise is just a big red flag.

Tuesday, November 8, 2016

Chinese typing

a Chinese typewriter

There's a good deal of enthusiasm in this Atlantic post — written by Sarah Zhang, but the enthusiasm is largely that of Tom Mullaney of Stanford — for non-alphabetic modes of text entry. Mullaney is a passionate critic of what he thinks of as Western alphabetic triumphalism, and is an advocate for other methods of getting text onto screens. Zhang writes, 

The telegraph was developed with the alphabet in mind. So was the typewriter. And the computer. And internet protocols. And yes, Chinese speakers spent a century conforming their language to those technologies until computing power transcended them, resulting in a relationship with technology richer and more complicated than in the alphabetic world.

However, Victor Mair, a Sinologist who writes at Language Log, is having none of it: "the vast majority of Chinese are busily inputting characters via the alphabet.... As several astute observers (e.g., William C. Hannas, David Moser) have noted, it is the alphabet — in combination with electronic text processing — that is rescuing Chinese characters from the oblivion to which they would have been assigned if they had had to rely on the mechanical Chinese typewriter for their preservation and dissemination in the modern world." 

The whole conversation is fascinating, if rather confusing (for this uninformed observer, anyway).
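
For fellow uninformed observers, the alphabetic route Mair describes works roughly like this: you type a romanization (pinyin) and the input method offers candidate characters to choose from. Here is a toy sketch in Python, with a two-syllable dictionary invented purely for illustration; real input-method engines use enormous dictionaries plus frequency and context models.

    # Toy sketch of alphabet-based Chinese input: type a pinyin syllable,
    # pick the intended character from a candidate list. The dictionary here
    # is invented for illustration only.
    CANDIDATES = {
        "ni": ["你", "尼", "泥"],
        "hao": ["好", "号", "毫"],
    }

    def suggest(pinyin):
        return CANDIDATES.get(pinyin, [])

    for syllable in ("ni", "hao"):
        print(syllable, "->", " ".join(suggest(syllable)))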

Tangentially: at one point Zhang writes, "alternative, faster typing methods in English, like ShapeWriter or Swype that let you swipe through the letters of the word in one motion, have struggled to catch on outside of early adopters. Plain old QWERTY is good enough." This makes no sense to me: those swipe systems rely on the QWERTY layout just as much as ordinary typing on a keyboard does.

I had read — can't remember where now — that the best of these alternative keyboards for iOS is Microsoft's Word Flow keyboard, so I downloaded it and tried it, but, while I liked it when it worked, it only worked sometimes: it occasionally became unresponsive, and other times didn't appear at all, leaving a blank space at the bottom of the screen where the keyboard was supposed to be. But I think eventually this is how I'll type on the phone. Has anyone else had better success with these alternative keyboards? 

Sunday, November 6, 2016

Apple's new strategy (and old users)

As John Gruber recently commented, Apple hasn't upgraded the Mac Pro in more than a thousand days. The company’s indifference to its professional users is puzzling to Marco Arment also:

Only the Mac Pro has the space, budget, heat capacity, and PCIe bandwidth to offer high-performance desktop- and professional-grade GPUs. If gamers, game makers, visual effects workers, and OpenCL aren’t enough, the rapidly-emerging VR and AR markets should be — they’re the next wave of high-end pro buyers who need the fastest hardware money can buy, and Apple has nothing to offer them.

Think about that: Apple has nothing to offer them. Apple clearly thinks it doesn't need such users any more — even though the faithfulness of professional programmers, designers, and artists is what kept Apple alive for many years when the company was marginally profitable at best.

In those days the goal of Apple was to design and build products that were “insanely great,” while the mission of Microsoft was to get “a computer on every desk and in every home.” Apple wanted to make the best and coolest things it could make, while Microsoft just wanted complete penetration of the market. I suspect that, since Apple became a phone company that also makes a few computers, its corporate attitude has come to mimic that of Microsoft. It’s all about market share, baby: an iPhone in every pocket.

This new attitude has led Apple’s leadership not just to ignore their most loyal customers, but also to be oblivious to a significant decline in the quality of their products, especially their software. Recently Phil Schiller said, in response to widespread frustration with the recent Mac announcements, “We know we made good decisions about what to build into the new MacBook Pro and that the result is the best notebook ever made, but it might not be right for everyone on day one.” We know.

Similarly, a few months ago, when John Gruber asked Craig Federighi to respond to those who had been complaining about a decline in software quality, Federighi said, “We’re frustrated of course to hear it overall characterized as this, quality is dropping overall, because we know that’s not true.” We know.

We know we’re doing the right things. We know our products aren’t getting worse. We just know. So if you’re hoping for Apple to reconsider its recent strategic decisions, it’s time to stop hoping.

So those of us who need professional-level computing power will need to turn elsewhere. Those of us who need consistently reliable software will need to turn elsewhere. And Apple is betting that those groups won't be large enough or influential enough to keep them from getting an iPhone in every pocket. Time will tell if they’re right.

Friday, November 4, 2016

comp

I don't think I'm going to support this Kickstarter project — I am deeply committed to my Leuchtturm notebooks — but I am tempted to do so just to thank Aron Fay for the illustrated history of composition notebooks he has provided on the page.



There's also something kind of fascinating about how Fay wants his product to be extremely high-quality but to look cheap and ordinary.

Thursday, November 3, 2016

why you should read Audrey Watters

For anyone who wants to understand the complex and ever-shifting relations between technology and education, especially higher education, in America, the one truly indispensable figure is Audrey Watters, who writes at Hack Education. Her most recent post exemplifies why her work is so necessary — and why far, far more people should pay attention to it.

Here’s what makes Watters unique: She writes about technocrats who hope and promise to transform education, which of course a great many people do, but she writes about these matters with the eye of a folklorist — which is her academic training. She writes,

I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take. What can we learn about the future by looking at the history of our imagining about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

See, the technological-prophecy complex is a kind of culture, and like all cultures it tells stories. It accumulates lore. Watters is brilliant at noticing what that lore says and teaches — and, perhaps even more important, what it doesn't say, what it declines to look at, what it strategically forgets.

The people using technology to “hack education” are always trying to distract us from the presence of the little man behind the curtain. Audrey Watters is stubbornly insistent on pulling back that curtain. You should read her stuff, starting with this new post.

no, Microsoft Word really is that bad

My family will tell you that I’ve been difficult to be around for the past few days — grumpy, impatient. And there’s a straightforward reason for that: in order to work on revisions for a forthcoming book, I’ve been using Microsoft Word.

It’s become so commonplace for people to hate Word that a counterintuitive Slate post praising it was long overdue, but even by Slate standards Heather Schwedel has done a poor job. For one thing, she shows just how informed she is about these matters by referring to “the unfamiliar, bizarro-world file format RTF” — a format created by Microsoft. But when she says that her devotion to Word is a function of her being “a copy editor and thus prone to fussy opinions about fonts and formatting and all such things” — it is to laugh. Because if you care about “fonts and formatting and all such things” Word is the worst possible application to deal with.

As Louis Menand wrote some years ago, with the proper emphasis, “Microsoft Word is a terrible program.”

To begin with, the designers of Word apparently believe that the conventional method of endnote numbering is with lowercase Roman numerals—i, ii, iii, etc. When was the last time you read anything that adhered to this style? ... To make this into something recognizably human, you need to click your way into the relevant menu (View? Insert? Format?) and change the i, ii, iii, etc., to 1, 2, 3, etc. Even if you wanted to use lowercase Roman numerals somewhere, whenever you typed “i” Word would helpfully turn it into “I” as soon as you pressed the space bar. Similarly, if, God forbid, you ever begin a note or a bibliography entry with the letter “A.,” when you hit Enter, Word automatically types “B.” on the next line. Never, btw (which, unlike “poststructuralism,” is a word in Word spellcheck), ask that androgynous paper clip anything. S/he is just a stooge for management, leading you down more rabbit holes of options for things called Wizards, Macros, Templates, and Cascading Style Sheets. Finally, there is the moment when you realize that your notes are starting to appear in 12-pt. Courier New. Word, it seems, has, at some arbitrary point in the proceedings, decided that although you have been typing happily away in Times New Roman, you really want to be in the default font of the original document. You are confident that you can lick this thing: you painstakingly position your cursor in the Endnotes window (not the text!, where irreparable damage may occur) and click Edit, then the powerful Select All; you drag the arrow to Normal (praying that your finger doesn’t lose contact with the mouse, in which case the window will disappear, and trying not to wonder what the difference between Normal and Clear Formatting might be) and then, in the little window to the right, to Times New Roman. You triumphantly click, and find that you are indeed back in Times New Roman but that all your italics have been removed.

This kind of disaster — and worse — still happens. In the document I’ve been working on recently, I was conversing with my editors in the comments pane about the advisability (or lack thereof) of certain changes, and then at a certain point, without warning, every time I tried to type a comment Word would paste in a paragraph I had recently deleted from another page. I wasn’t choosing to paste — I wasn’t even using any special keys (Command, Control, Option). I was just typing letters of the alphabet. And Word insisted on inserting an entire paragraph every time my fingers hit the keys. I ended up having to write all my comments in my text editor and then paste them into the comment box. I was grateful that Word allowed me to do that.

If you really care about “fonts and formatting and all such things,” Word is a nightmare, because in such matters its consistent practice is to do what it thinks you probably want to do, or what it thinks you should do. Contrast that with a program that genuinely cares about formatting, LaTeX, which always does precisely what you tell it to do. Now, this mode of doing business can generate problems of its own, as every user of LaTeX knows, since from time to time you will manage to tell it to do something that you don't really want it to do. But those problems are always fixable, and over time you learn to avoid them, whereas in Word anything can happen at any time and you will often be completely unable either to figure out what happened or to set it right.

In every book that I work on, the worst moment of the entire endeavor occurs when I have to convert my plain-text draft into Word format for my editors. I don't have to open Word to do that, thanks to pandoc, whose use I explain here; but I know then that I have only a short time before they send me back an edited text which I will have to open in Word. And from that point on there can be no joy in the labor, only misery. Microsoft Word is not just a terrible program. It is a terrible, horrible, no good, very bad program. It is the program than which no worse can be conceived. We hates it, preciousss. We hates it.
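
For the curious, that conversion step is a single pandoc call. Here is a minimal sketch in Python; the file names are placeholders, and this is the generic invocation rather than the specific recipe I link to above.

    # A minimal sketch of the plain-text-to-Word step: hand the draft to pandoc
    # and let it produce the .docx, so Word itself never has to be opened.
    # File names are placeholders; pandoc infers the formats from the extensions.
    import subprocess

    subprocess.run(["pandoc", "draft.md", "-o", "draft.docx"], check=True)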

Tuesday, November 1, 2016

children of Twitter

It’s a commonplace that Europeans, and people from several other parts of the world, see Americans as — if they’re inclined to be neutral — “childlike” or — if they’re inclined to be censorious — “children.” In his memoir Paris to the Moon Adam Gopnik quotes approvingly a French friend who comments that you can always spot the American tourists in France because they’re all dressed like six-year-olds. And indeed Americans have often embraced this description, though giving it a positive spin: for instance, in his seminal book No Place of Grace: Antimodernism and the Transformation of American Culture, 1880-1920, Jackson Lears explains how many intellectuals and artists embraced “antimodernism” in the form of medievalism precisely because they saw the Middle Ages as “childlike” in all the best senses of the term.

And then there’s A. O. Scott, writing in 2014 on “The Death of Adulthood in American Culture”:

A crisis of authority is not for the faint of heart. It can be scary and weird and ambiguous. But it can be a lot of fun, too. The best and most authentic cultural products of our time manage to be all of those things. They imagine a world where no one is in charge and no one necessarily knows what’s going on, where identities are in perpetual flux. Mothers and fathers act like teenagers; little children are wise beyond their years. Girls light out for the territory and boys cloister themselves in secret gardens. We have more stories, pictures and arguments than we know what to do with, and each one of them presses on our attention with a claim of uniqueness, a demand to be recognized as special. The world is our playground, without a dad or a mom in sight.

Yesterday’s post on social media, politics, and emotion is a variation on this theme. I don't think there’s any question that social media prompt us to respond to the world in childlike/childish ways, leading always with our strongest emotions and then coming up with comically inadequate post facto justifications of them, or assuming that the only just world is one which conforms itself to my felt needs and within which the only real violations are of my feelings.

Nous sommes tous Américains: we are all Americans now. In the current moment, most adults are emotionally six years old, most college students four, and the Republican Presidential nominee two. (Seriously: look at any professional description of the “terrible twos” and try to tell me that it doesn't precisely describe Donald Trump, whose supporters act as indulgent parents and elder siblings.)

I can't say that this is a good thing, but it’s the situation we’re in and it’s not going to change any time soon. So when we’re thinking about social media and political discourse, what if we stopped cursing the emotional darkness and instead lit a candle? And the first step in doing that is to accept that most of the people we interact with on social media really are children.

In The Abolition of Man, C. S. Lewis wrote,

St Augustine defines virtue as ordo amoris, the ordinate condition of the affections in which every object is accorded that kind and degree of love which is appropriate to it. Aristotle says that the aim of education is to make the pupil like and dislike what he ought. When the age for reflective thought comes, the pupil who has been thus trained in ‘ordinate affections’ or ‘just sentiments’ will easily find the first principles in Ethics; but to the corrupt man they will never be visible at all and he can make no progress in that science. Plato before him had said the same. The little human animal will not at first have the right responses. It must be trained to feel pleasure, liking, disgust, and hatred at those things which really are pleasant, likeable, disgusting and hateful. In the Republic, the well-nurtured youth is one ‘who would see most clearly whatever was amiss in ill-made works of man or ill-grown works of nature, and with a just distaste would blame and hate the ugly even from his earliest years and would give delighted praise to beauty, receiving it into his soul and being nourished by it, so that he becomes a man of gentle heart. All this before he is of an age to reason; so that when Reason at length comes to him, then, bred as he has been, he will hold out his hands in welcome and recognize her because of the affinity he bears to her.’

This, I think, should be the task of those who want to use social media wisely and well: not to try to reason with people — the code architecture of all social media, and especially Twitter, with its encouragement of instantaneous response and crude measures of approval or disapproval, militates against rational reflection — but to promote ordinate affection, and especially the love of the good wherever it may be found, even in people you have been taught to think of as your political opponents.

I say that because I believe hatred is the least selective of emotions, the most scattershot, the one that can most easily find its way into every human encounter if it is not restrained by strongly positive responses to the true, the good, and the beautiful. (The truth of this statement is confirmed on Twitter every hour.)

If we are going to begin to heal the wounds of our political culture that have been either created or exacerbated by social media, then we will need to train ourselves — and only then, we hope, others — in the practices of loving what is truly desirable. Rather than trying to wrench Twitter into a vehicle for rational debate, which it can never be, we need to turn its promotion of emotional intensity to good account. (We need jujitsu, not Mortal Kombat.) And then, perhaps, when at least some people have become habituated to more ordinate affections, and Reason at length comes to them, then, bred as they have been, they will hold out their hands in welcome and recognize her because of the affinity they bear to her.