Text Patterns - by Alan Jacobs

Saturday, May 30, 2015

Tav's Mistake


Neal Stephenson's Seveneves is a typical Neal Stephenson novel: expansive and nearly constantly geeking out over something. If a character in one of Stephenson's SF novels is about to get into a spacesuit, you know that'll take five pages because Stephenson will want to tell you about every single element of the suit's construction. If a spacecraft needs to rendezvous with a comet, and must get from one orbital plane to another, Stephenson will need to explain every decision and the math underlying it, even if that takes fifty pages — or more. If you like that kind of thing, Seveneves will be the kind of thing you like.

I don't want to write a review of the novel here, beyond what I've just said; instead, I want to call attention to one passage. Setting some of the context for it is going to take a moment, though, so bear with me. (If you want more details, here's a good review.)

The novel begins with this sentence: "The moon blew up without warning and for no apparent reason." After the moon breaks into fragments, and the fragments start bumping into each other and breaking into ever smaller fragments, scientists on earth figure out that at a certain point those fragments will become a vast cloud (the White Sky) and then, a day or two later, will fall in flames to earth — so many, and with such devastating force, that the whole earth will become uninhabitable: all living things will die. This event gets named the Hard Rain, and it will continue for millennia. Humanity has only two years to prepare for this event: this involves sending a few people from all the world's nations up to the International Space Station, which is frantically being expanded to house them. Also sent up is a kind of library of genetic material, in the hope that the diversity of the human race can be replicated at some point in the distant future.

The residents of the ISS become the reality-TV stars for those on earth doomed to die: every Facebook post and tweet scrutinized, every conversation (even the most private) recorded and played back endlessly. Only a handful of these people survive, and as the Hard Rain continues on a devastated earth, their descendants very slowly rebuild civilization — focusing all of their intellectual resources on the vast problems of engineering with which they're faced as a consequence of the deeply unnatural condition of living in space. This means that, thousands of years after the Hard Rain begins, as they are living in an environment of astonishing technological complexity, they don't have much in the way of social media.

In the decades before Zero [the day the moon broke apart], the Old Earthers had focused their intelligence on the small and the soft, not the big and the hard, and built a civilization that was puny and crumbling where physical infrastructure was concerned, but astonishingly sophisticated when it came to networked communications and software. The density with which they’d been able to pack transistors onto chips still had not been matched by any fabrication plant now in existence. Their devices could hold more data than anything you could buy today. Their ability to communicate through all sorts of wireless schemes was only now being matched — and that only in densely populated, affluent places like the Great Chain.

But in the intervening centuries, those early textual and visual and aural records of the survivors had been recovered and turned into The Epic — the space-dwelling humans’ equivalent of the Mahabharata, a kind of constant background to the culture, something known to everyone. And when the expanding human culture divided into two distinct groups, the Red and the Blue, the second of those groups became especially attentive to one of those pioneers, a journalist named Tavistock Prowse. “Blue, for its part, had made a conscious decision not to repeat what was known as Tav’s Mistake.”

Fair or not, Tavistock Prowse would forever be saddled with blame for having allowed his use of high-frequency social media tools to get the better of his higher faculties. The actions that he had taken at the beginning of the White Sky, when he had fired off a scathing blog post about the loss of the Human Genetic Archive, and his highly critical and alarmist coverage of the Ymir expedition, had been analyzed to death by subsequent historians. Tav had not realized, or perhaps hadn’t considered the implications of the fact, that while writing those blog posts he was being watched and recorded from three different camera angles. This had later made it possible for historians to graph his blink rate, track the wanderings of his eyes around the screen of his laptop, look over his shoulder at the windows that had been open on his screen while he was blogging, and draw up pie charts showing how he had divided his time between playing games, texting friends, browsing Spacebook, watching pornography, eating, drinking, and actually writing his blog. The statistics tended not to paint a very flattering picture. The fact that the blog posts in question had (according to further such analyses) played a seminal role in the Break, and the departure of the Swarm, only focused more obloquy upon the poor man.

But — and this is key to Stephenson’s shrewd point — Tav is a pretty average guy, in the context of the social-media world all of us inhabit:

Anyone who bothered to learn the history of the developed world in the years just before Zero understood perfectly well that Tavistock Prowse had been squarely in the middle of the normal range, as far as his social media habits and attention span had been concerned. But nevertheless, Blues called it Tav’s Mistake. They didn’t want to make it again. Any efforts made by modern consumer-goods manufacturers to produce the kinds of devices and apps that had disordered the brain of Tav were met with the same instinctive pushback as Victorian clergy might have directed against the inventor of a masturbation machine.

So the priorities of the space-dwelling humanity are established first by sheer necessity: when you’re trying to create and maintain the technologies necessary to keep people alive in space there’s no time for working on social apps. But it’s in light of that experience that the Spacers grow incredulous at a society that lets its infrastructure deteriorate and its medical research go underfunded in order to devote its resources of energy, attention, technological innovation, and money to Snapchat, YikYak, and Tinder.

Stephenson has been talking about this for a while now. He calls it “Innovation Starvation”:

My life span encompasses the era when the United States of America was capable of launching human beings into space. Some of my earliest memories are of sitting on a braided rug before a hulking black-and-white television, watching the early Gemini missions. In the summer of 2011, at the age of fifty-one — not even old — I watched on a flatscreen as the last space shuttle lifted off the pad. I have followed the dwindling of the space program with sadness, even bitterness. Where's my donut-shaped space station? Where's my ticket to Mars? Until recently, though, I have kept my feelings to myself. Space exploration has always had its detractors. To complain about its demise is to expose oneself to attack from those who have no sympathy that an affluent, middle-aged white American has not lived to see his boyhood fantasies fulfilled.

Still, I worry that our inability to match the achievements of the 1960s space program might be symptomatic of a general failure of our society to get big things done. My parents and grandparents witnessed the creation of the automobile, the airplane, nuclear energy, and the computer, to name only a few. Scientists and engineers who came of age during the first half of the twentieth century could look forward to building things that would solve age-old problems, transform the landscape, build the economy, and provide jobs for the burgeoning middle class that was the basis for our stable democracy.

Now? Not so much.

I think Stephenson is talking about something very, very important here. And I want to suggest that the decision to focus on “the small and the soft” instead of “the big and the hard” creates a self-reinforcing momentum. So I’ll end here by quoting something I wrote about this a few months ago:

Self-soothing by Device. I suspect that few will think that addiction to distractive devices could even possibly be related to a cultural lack of ambition, but I genuinely think it’s significant. Truly difficult scientific and technological challenges are almost always surmounted by obsessive people — people who are grabbed by a question that won’t let them go. Such an experience is not comfortable, not pleasant; but it is essential to the perseverance without which no Big Question is ever answered. To judge by the autobiographical accounts of scientific and technological geniuses, there is a real sense in which those Questions force themselves on the people who stand a chance of answering them. But if it is always trivially easy to set the question aside — thanks to a device that you carry with you everywhere you go — can the Question make itself sufficiently present to you that answering it becomes something essential to your well-being? I doubt it.

Tuesday, May 19, 2015

Pynchon and the "Californian Ideology"

In a recent post I wrote,

The hidden relation between these two worlds — Sixties counterculture and today’s Silicon Valley business world — is, I believe, one of the major themes of Thomas Pynchon’s fiction and the chief theme of his late diptych, Inherent Vice and Bleeding Edge. If you want to understand the moral world we’re living in, you could do a lot worse than to read and reflect on those two novels.

Then yesterday I read this great post by Audrey Watters on what she calls the “Silicon Valley narrative” — a phrase she’s becoming ambivalent about, and wonders whether it might profitably be replaced by “Californian ideology.” That phrase, it turns out, comes from a 1995 essay by Richard Barbrook and Andy Cameron. I knew about this essay, have known about it for years, but had completely forgotten about it until reminded by Watters. Here’s the meat of the introduction:

At the end of the twentieth century, the long predicted convergence of the media, computing and telecommunications into hypermedia is finally happening. Once again, capitalism’s relentless drive to diversify and intensify the creative powers of human labour is on the verge of qualitatively transforming the way in which we work, play and live together. By integrating different technologies around common protocols, something is being created which is more than the sum of its parts. When the ability to produce and receive unlimited amounts of information in any form is combined with the reach of the global telephone networks, existing forms of work and leisure can be fundamentally transformed. New industries will be born and current stock market favourites will be swept away. At such moments of profound social change, anyone who can offer a simple explanation of what is happening will be listened to with great interest. At this crucial juncture, a loose alliance of writers, hackers, capitalists and artists from the West Coast of the USA have succeeded in defining a heterogeneous orthodoxy for the coming information age: the Californian Ideology.

This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley. Promoted in magazines, books, TV programmes, websites, newsgroups and Net conferences, the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. Not surprisingly, this optimistic vision of the future has been enthusiastically embraced by computer nerds, slacker students, innovative capitalists, social activists, trendy academics, futurist bureaucrats and opportunistic politicians across the USA. As usual, Europeans have not been slow in copying the latest fad from America. While a recent EU Commission report recommends following the Californian free market model for building the information superhighway, cutting-edge artists and academics eagerly imitate the post human philosophers of the West Coast’s Extropian cult. With no obvious rivals, the triumph of the Californian Ideology appears to be complete.

Putting this together with Watters’s post and with my essay on the late Pynchon… wow, does all this give me ideas. Perhaps Pynchon is the premier interpreter of the Californian ideology — especially when you take into account some of his earlier books as well, especially Vineland — someone who understands both its immense appeal and its difficulty in promoting genuine human flourishing. Much to think about and, I hope, to report on here, later.

Monday, May 18, 2015

ideas and their consequences

I want to spend some time here expanding on a point I made in my previous post, because I think it’s relevant to many, many disputes about historical causation. In that post I argued that people don't get an impulse to alter their/our biological conformation from reading Richard Rorty or Judith Butler or any other theorist within the general orbit of the humanities, whatever a model of Theory prominent among literary scholars, in Continental philosophy, and in some interpretations of ancient Greek theoria might suggest. Rather, technological capability is its own ideology with its own momentum, and people who practice that ideology may sometimes be inclined to use Theory to provide ex post facto justifications for what they would have done even if Theory didn’t exist at all.

I think there is a great tendency among academics to think that cutting-edge theoretical reflection is ... well, is cutting some edges somewhere. But it seems to me that Theory is typically a belated thing. I’ve argued before that some of the greatest achievements of 20th-century literary criticism are in fact rather late entries in the Modernist movement: “We academics, who love to think of ourselves as being on the cutting-edge of thought, are typically running about half-a-century behind the novelists and poets.” And we run even further behind the scientists and technologists, who alter our material world in ways that generate the Lebenswelt within which humanistic Theory arises.

This failure of understanding — this systematic undervaluing of the materiality of culture and overvaluing of what thinkers do in their studies — is what produces vast cathedrals of error like what I have called the neo-Thomist interpretation of history. When Brad Gregory and Thomas Pfau, following Etienne Gilson and Jacques Maritain and Richard Weaver, argue that most of the modern world (especially the parts they don't like) emerges from disputes among a tiny handful of philosophers and theologians in the University of Paris in the fourteenth century, they are making an argument that ought to be self-evidently absurd. W. H. Auden used to say that the social and political history of Europe would be exactly the same if Dante, Shakespeare, and Mozart had never lived, and that seems to me not only to be true in those particular cases but also to provide a general rule for evaluating the influence of writers, artists, and philosophers. I see absolutely no reason to think that the so-called nominalists — actually a varied crew — had any impact whatsoever on the culture that emerged after their deaths. When you ask proponents of this model of history to explain how the causal chain works, how we got from a set of arcane, recondite philosophical and theological disputes to the political and economic restructuring of Western society, it’s impossible to get an answer. They seem to think that nominalism works like an airborne virus, gradually and invisibly but fatally infecting a populace.

It seems to me that Martin Luther’s ability to get a local printer to make an edition of Paul’s letter to the Romans stripped of commentary and set in wide margins for student annotation was infinitely more important for the rise of modernity than anything that William of Ockham and Duns Scotus ever wrote. If nominalist philosophy has played any role in this history at all — and I doubt even that — it has been to provide (see above) ex post facto justification for behavior generated not by philosophical change but by technological developments and economic practices.

Whenever I say this kind of thing people reply But ideas have consequences! And indeed they do. But not all ideas are equally consequential; nor do all ideas have the same kinds of consequences. Dante and Shakespeare and Mozart and Ockham and Scotus have indeed made a difference; but not the difference that those who advocate the neo-Thomist interpretation of history think they made. Moreover, and still more important, scientific ideas are ideas too; as are technological ideas; as are economic ideas. (It’s for good reason that Robert Heilbroner called his famous history of the great economists The Worldly Philosophers.)

If I’m right about all this — and here, as in the posts of mine I’ve linked to here, I have only been able to sketch out ideas that need much fuller development and much better support — then those of us who are seriously seeking alternatives to the typical modes of living in late modernity need a much, much better philosophy and theology of technology. Which is sort of why this blog exists ... but at some point, in relation to all the vital topics I’ve been exploring here, I’m going to have to go big or go home.

prosthetics, child-rearing, and social construction

There’s much to think and talk about in this report by Rose Eveleth on prosthetics, which makes me think about all the cool work my friend Sara Hendren is doing. But I’m going to set most of that fascinating material aside for now, and zero in on one small passage from Eveleth’s article:

More and more amputees, engineers, and prospective cyborgs are rejecting the idea that the “average” human body is a necessary blueprint for their devices. “We have this strong picture of us as human beings with two legs, two hands, and one head in the middle,” says Stefan Greiner, the founder of Cyborgs eV, a Berlin-based group of body hackers. “But there’s actually no reason that the human body has to look like as it has looked like for thousands of years.”

Well, that depends on what you mean by “reason,” I think. We should probably keep in mind that having “two legs, two hands [or arms], and one head in the middle” is not something unique to human beings, nor something that has been around for merely “thousands” of years. Bilateral symmetry — indeed, morphological symmetry in all its forms — is something pretty widely distributed throughout the evolutionary record. And there are very good adaptive “reasons” for that.

I’m not saying anything here about whether people should or should not pursue prosthetic reconstructions of their bodies. That’s not my subject. I just want to note the implication of Greiner’s statement — an implication that, if spelled out as a proposition, he might reject, but is there to be inferred: that bilateral symmetry in human bodies is a kind of cultural choice, something that we happen to have been doing “for thousands of years,” rather than something deeply ingrained in a vast evolutionary record.

You see a similar but more explicit logic in the way the philosopher Adam Swift talks about child-rearing practices: “It’s true that in the societies in which we live, biological origins do tend to form an important part of people’s identities, but that is largely a social and cultural construction. So you could imagine societies in which the parent-child relationship could go really well even without there being this biological link.” A person could say that the phenomenon of offspring being raised by their parents “is largely a social and cultural construction” only if he is grossly, astonishingly ignorant of biology — or, more likely, has somehow managed to forget everything he knows about biology because he has grown accustomed to thinking in the language of an exceptionally simplistic and naïve form of social constructionism.

N.B.: I am not arguing for or against changing child-rearing practices. I am exploring how and why people simply forget that human beings are animals, are biological organisms on a planet with a multitude of other biological organisms with which they share many structural and behavioral features because they also share a long common history. (I might also say that they share a creaturely status by virtue of a common Maker, but that’s not a necessary hypothesis at the moment.) In my judgment, such forgetting does not happen because people have been steeped in social constructionist arguments; those are, rather, just tools ready to hand. There is a deeper and more powerful and (I think) more pernicious ideology at work, which has two components.

Component one: that we are living in an administrative regime built on technocratic rationality whose Prime Directive is, unlike the one in the Star Trek universe, one of empowerment rather than restraint. I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you've had your technical success. That is the way it was with the atomic bomb.” Social constructionism does not generate this Prime Directive, but it can occasionally be used — in, as I have said, a naïve and simplistic form — to provide ex post facto justifications of following that principle. We change bodies and restructure child-rearing practices not because all such phenomena are socially constructed but because we can — because it’s “technically sweet.”

My use of the word “we” in that last sentence leads to component two of the ideology under scrutiny here: Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have ... or fondly imagine they have.

Saturday, May 16, 2015

Station Eleven and global cooling



I recently read Emily St. John Mandel’s Station Eleven, which didn’t quite overwhelm me the way it has overwhelmed many others — though I liked it. It’s good, but it could have been great. The post-apocalyptic world is beautifully and convincingly rendered: I kept thinking, Yes: this is indeed what we would value, should all be lost. But the force of the book is compromised, I think, by its chief structural conceit, which is that all the major characters in the novel’s present tense of civilizational ruin are linked in some way to an actor named Arthur Leander who died just before the Georgia Flu wiped out 99.9% of the human race. This conceit leads Mandel to flash back repeatedly to our own world and moment, and every time that happened I thought Dammit. I just didn’t care about Arthur Leander; I didn't want to read fairly conventional realistic-novel stuff. I wanted to rush through all that to get back to the future Mandel imagines so powerfully.

All that said, I have one small thought, totally irrelevant to my feelings about the book as a whole, that keeps returning to my mind. In one of the book’s first scenes, a troupe of musicians and actors (the Traveling Symphony) is walking along an old road somewhere in Michigan, and it’s very very hot, over a hundred degrees. This is twenty years after civilization died, which makes me wonder: Would the world by then be any cooler? If all of our culture’s heat sources ceased functioning today — no more air conditioners emitting hot air, no more internal combustion engines, no more factories blowing out smoke — how long would it take before there was a measurable cooling of the world’s climate?

Monday, May 11, 2015

rewiring the reading organ

Here's Gary Shteyngart on Saul Bellow:

The first time I tackled Ravelstein, back in 2000, this American mind was as open to long-form fiction as any other and I wolfed the novel down in one Saturday between helpings of oxygen and water and little else. Today I find that Bellow’s comment, ‘It is never an easy task to take the mental measure of your readers,’ is more apt than ever. As I try to read the first pages of Ravelstein, my iPhone pings and squawks with increasing distress. The delicate intellectual thread gets lost. Macaulay. Ping! Antony and Cleopatra. Zing! Keynes. Marimba! And I’m on just pages 5 and 6 of the novel. How is a contemporary person supposed to read 201 pages? It requires nothing less than performing brain surgery on oneself. Rewiring the organ so that the neurons revisit the haunts they once knew, hanging out with Macaulay and Keynes, much as they did in 2000, before encounters with both were reduced to brief digital run-ins on some highbrow content-provider’s blog, back when knowledge was actually something to be enjoyed instead of simply being ingested in small career-sustaining bursts.

Shteyngart is sort of channeling Nick Carr here. Several years ago Carr wrote:

Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

Of course, some people have always been this way. In my book The Pleasures of Reading in an Age of Distraction, I claim that John Self, the protagonist of Martin Amis’s early novel Money, is our patron saint. Self tries reading Animal Farm in order to please a woman who bought it for him:

Reading takes a long time, though, don’t you find? It takes such a long time to get from, say, page twenty-one to page thirty. I mean, first you’ve got page twenty-three, then page twenty-five, then page twenty-seven, then page twenty-nine, not to mention the even numbers. Then page thirty. Then you’ve got page thirty-one and page thirty-three — there’s no end to it. Luckily Animal Farm isn’t that long a novel. But novels . . . they’re all long, aren’t they. I mean, they’re all so long. After a while I thought of ringing down and having Felix bring me up some beers. I resisted the temptation, but that took a long time too. Then I rang down and had Felix bring me up some beers. I went on reading.

Nothing against the Shteyngart piece, but it’s not really telling us anything new. People keep reminding themselves that this is The Way We Live Now but they just keep on living that way. Eventually they’ll either live some other way or start telling different stories, I guess.

Wednesday, May 6, 2015

near the end of my (Apple) rope

I bought my first Apple product — the original Macintosh — almost exactly thirty years ago. I have never been as frustrated with Apple products as I am now. Not even close.

A great many of these issues involve communications among machines: on the Mac, Yosemite brought a host of wifi problems; on iOS, Bluetooth has been borked for millions of people since iOS 8 was introduced, especially if you have an iPhone 6 or 6 Plus. Apple wants us to replace iPhoto with its new Photos app, but I can’t get Photos to sync all the pictures that iPhoto handles … well, fairly well, anyway — which is all just part of the larger story, which is that iCloud is a complete disaster.

Marco Arment:

Your computer can’t see my computer on the network or vice versa? The only solution that works is to reboot everything, just like using Windows fifteen years ago. Before Yosemite, I never had these issues on Macs.

Yosemite is now 6 months old, these bugs still aren’t fixed, and it feels like they probably won’t be fixed anytime soon. Yosemite is probably in minimal-maintenance mode as primary resources have likely moved on to headlining features for 10.11. This is what’s so frustrating about today’s Apple: if a bug persists past the early beta stages of its introduction, it rarely ever gets fixed. They’re too busy working on the new to fix the old.

But of course the new will have bugs too. So the bugs keep piling up — not to mention the missing features: e.g., margins in TextEdit documents can’t be changed without some sketchy hacks, items in Reminders can be ordered by date only manually, Safari lacks favicons to help you distinguish tabs by sight, and so on. Then there are inexplicably frustrating software design decisions, like regular backwards-incompatible changes in file formats for some apps (Pages, Keynote). Moreover, interoperability between OS X and iOS seems to be getting worse with time, not better, despite features like Handoff which are supposed to address that very problem but may not work.

And the arrival of the Apple Watch is surely going to exacerbate all these problems, not only with Apple software but with third-party software that companies are trying to make work on three platforms now: OS X, iOS, and the special version of iOS that Watch runs. Mo’ devices, mo’ platforms, mo’ problems.

In light of all this, here’s my current plan:

First, I will stop even trying to get either Bluetooth or iCloud to work. Pretend they don’t exist, because effectively they don’t. Assume that the only backups I have are to an external hard drive. If I want to play music in my car, I’ll either listen to the radio or burn CDs like I did back in the day. (Remember when that was the coolest thing?) Well, if burning CDs still works in iTunes….

Second, move whenever possible to non-Apple software, especially, though not only, when Apple's stuff relies in some way on iCloud. (Hardest thing to replace, for a guy who doesn't want to use Google or Microsoft products either: Keynote. Keynote is a great, if somewhat bloated, app, but its ever-changing file format makes it a long-term loser.)

Third, if things aren’t any better by 2017 — when my iPhone 6 Plus will be old enough to trade in — switch to Linux, for my phone as well as my computer, assuming that the Ubuntu Phone is available in this country by then. Yes, Linux has plenty of problems; but I won’t be paying a premium price for a system that promises to “just work” but just doesn’t work. If I’m going to have to be in permanent fiddling/hacking mode, let me do it from within an operating system meant for fiddling and hacking.

Tuesday, May 5, 2015

a few words on Age of Ultron

A few random thoughts about Avengers: Age of Ultron:

  • It’s fun.
  • It needed two fewer massive battle set-pieces.
  • James Spader’s Ultron voice is wonderfully creepy and sleazy. (By the way, don't we live in the Golden Age of voice acting? I think Pixar is largely responsible for this.)
  • Joss Whedon knows that his job as director is primarily to give us those massive battle set-pieces, and he does that, but I have a feeling that his heart really isn’t in it — in part because, as writer, he knows that those simply ruin narrative coherence. So he always has strategies for threading the story together.
  • One way he does this is through creating themes that the characters respond to in their varying ways. Perhaps the biggest such theme in this film is: marriage and children. It’s really a wonderful stroke on Whedon’s part to create a (surprisingly and to me gratifyingly long) breathing-space in the movie set in Clint Barton’s ramshackle house in the country, with his wife and children. That sets all the major characters — except Thor, who, you know, is Thor — thinking about what value they place on such a life. It’s because of this theme that Hawkeye — the one Avenger who has no superpowers, genetic modifications, or mind-and-body-altering training — becomes possibly the most important single character in this movie. (I just wish Jeremy Renner were a better actor, because I don't think he quite brings it off.)
  • The other way Whedon builds continuity is through geeky jokes that recur throughout the movie. There are, as always with Whedon, several such here — one that starts when Captain America tells Tony Stark to watch his language, another based on characters trading the line “What, you didn't see that coming?” — but the best one is about Mjölnir, Thor’s hammer. At one point Whedon actually turns the superheroes themselves into fanboys speculating about just how the unliftability of Mjölnir works: “So if you put it in an elevator,” says Cap, only to have Tony cut in: “Elevator’s not worthy.” I just love this stuff, which nobody does better than Whedon.

Anyway, as I say, it’s good wholesome overstuffed bloated fun. Thumbs up.

Monday, May 4, 2015

notification

Matt Gemmell:

The problem with notifications is that they occupy the junction of several unhealthy human characteristics: social pressure of timely response, a need for diversion, and our constant thirst for novelty. Mobile devices exacerbate that issue by letting us succumb to all of those at any moment. That’s not a good thing. I’m constantly horrified that much of Microsoft’s advertising seems to presuppose that working twenty-four hours per day is mankind’s long-sought nirvana.

With the Watch, we’ll be waiting for a long time.

For one thing, notifications are mostly read-only. Most iPhone apps don’t have corresponding Watch apps yet, so you’re simply seeing a notification without the means to respond. Even those notifications that can be handled on the device are inherently constrained by the available screen space, and input methods. For example, responses to messages are limited to assorted emoticons, dictated text, or an audio clip.

The Watch’s size, and the need to raise your wrist, discourages prolonged reading, which automatically makes you filter what you deal with. On the iPhone, or any of its ancestors further up the tree, the default mode of response is now. On the Watch, it’s later.

Okay.... but then, why not just wait until “later” to check your iPhone? Why not just keep the iPhone in the other room, or in your pocket with the notifications turned off? (The latter is what I do: the only notifications I get on my phone are for calls and texts from my loved ones.)

insiders and outsiders

One of the stories often told by fans of the Inklings — C. S. Lewis and J. R. R. Tolkien and their friends — is that their great success is a kind of “revenge of the outsiders” story: writers whose ideas were rejected by the cultural elite end in triumph. The story’s origins lie with the Inklings themselves: so they conceived of themselves, a ragged group of oddballs tending the flame of old tales and old ways while the cultural elite went its corrupt modernist way. Lewis returns to this theme often in his letters.

But were Lewis and Tolkien really outside the mainstream? Consider:

  • Each of them was a fellow of an ancient and prestigious college in one of England’s two elite universities
  • They were the two leading authors of the English curriculum at that university (a curriculum that lasted longer than they did)
  • Each of them published books for that university’s prestigious press
  • One of them (Tolkien) shared a publisher with Bertrand Russell
  • One of them (Lewis) gave immensely popular radio talks for the BBC

Even Owen Barfield, in some ways the most culturally marginal of the major Inklings, early in his career wrote articles for the New Statesman and had a book (Poetic Diction) published by Faber. (After that he was largely self-exiled from the mainstream by his commitment to Anthroposophy.)

To be sure, there were important ways that both Lewis and Tolkien were, in the eyes of some, not quite the right thing at Oxford: neither of them attended an elite public school; Lewis was Irish; Tolkien was Catholic; each of them stood for ideas about literature that were palpably old-fashioned; and Lewis was (in addition to being generally assertive, sometimes to the point of bullying) vocal about being a Christian in ways that struck many of his colleagues as being ill-bred at best. But considering such impediments to insider status, they did amazingly well at finding their way into the midst of things, and they did so before either of them had written anything for which they’re now famous.

Saturday, May 2, 2015

Paul Goodman and Humane Technology

This is a kind of thematic follow-up to my previous post.

A few weeks ago Nick Carr posted a quotation from this 1969 article by Paul Goodman: “Can Technology Be Humane?” I had never heard of it, but it’s quite fascinating. Here’s an interesting excerpt:

For three hundred years, science and scientific technology had an unblemished and justified reputation as a wonderful adventure, pouring out practical benefits, and liberating the spirit from the errors of superstition and traditional faith. During this century they have finally been the only generally credited system of explanation and problem-solving. Yet in our generation they have come to seem to many, and to very many of the best of the young, as essentially inhuman, abstract, regimenting, hand-in-glove with Power, and even diabolical. Young people say that science is anti-life, it is a Calvinist obsession, it has been a weapon of white Europe to subjugate colored races, and manifestly—in view of recent scientific technology—people who think that way become insane. With science, the other professions are discredited; and the academic “disciplines” are discredited.

The immediate reasons for this shattering reversal of values are fairly obvious. Hitler’s ovens and his other experiments in eugenics, the first atom bombs and their frenzied subsequent developments, the deterioration of the physical environment and the destruction of the biosphere, the catastrophes impending over the cities because of technological failures and psychological stress, the prospect of a brainwashed and drugged 1984. Innovations yield diminishing returns in enhancing life. And instead of rejoicing, there is now widespread conviction that beautiful advances in genetics, surgery, computers, rocketry, or atomic energy will surely only increase human woe.

Goodman’s proposal for remedying this new mistrust and hatred of technology begins thus: “Whether or not it draws on new scientific research, technology is a branch of moral philosophy, not of science,” and requires the virtue of prudence. Since “in spite of the fantasies of hippies, we are certainly going to continue to live in a technological world,” this redefinition of technology — or recollection of it to its proper place — is a social necessity. Goodman spells out some details:

  • “Prudence is foresight, caution, utility. Thus it is up to the technologists, not to regulatory agencies of the government, to provide for safety and to think about remote effects.”
  • “The recent history of technology has consisted largely of a desperate effort to remedy situations caused by previous over-application of technology.”
  • “Currently, perhaps the chief moral criterion of a philosophic technology is modesty, having a sense of the whole and not obtruding more than a particular function warrants.”
  • “Since we are technologically overcommitted, a good general maxim in advanced countries at present is to innovate in order to simplify the technical system, but otherwise to innovate as sparingly as possible.”
  • “A complicated system works most efficiently if its parts readjust themselves decentrally, with a minimum of central intervention or control, except in case of breakdown.”
  • “But with organisms too, this has long been the bias of psychosomatic medicine, the Wisdom of the Body, as Cannon called it. To cite a classical experiment of Ralph Hefferline of Columbia: a subject is wired to suffer an annoying regular buzz, which can be delayed and finally eliminated if he makes a precise but unlikely gesture, say by twisting his ankle in a certain way; then it is found that he adjusts quicker if he is not told the method and it is left to his spontaneous twitching than if he is told and tries deliberately to help himself. He adjusts better without conscious control, his own or the experimenter’s.”
  • “My bias is also pluralistic. Instead of the few national goals of a few decision-makers, I propose that there are many goods of many activities of life, and many professions and other interest groups each with its own criteria and goals that must be taken into account. A society that distributes power widely is superficially conflictful but fundamentally stable.”
  • “The interlocking of technologies and all other institutions makes it almost impossible to reform policy in any part; yet this very interlocking that renders people powerless, including the decision-makers, creates a remarkable resonance and chain-reaction if any determined group, or even determined individual, exerts force. In the face of overwhelmingly collective operations like the space exploration, the average man must feel that local or grassroots efforts are worthless, there is no science but Big Science, and no administration but the State. And yet there is a powerful surge of localism, populism, and community action, as if people were determined to be free even if it makes no sense. A mighty empire is stood off by a band of peasants, and neither can win — this is even more remarkable than if David beats Goliath; it means that neither principle is historically adequate. In my opinion, these dilemmas and impasses show that we are on the eve of a transformation of conscience.”

If only that last sentence had come true. I hope to reflect further on this article in later posts.

American TechGnosis

Erik Davis has written a new afterword to his 1998 book TechGnosis, and it’s very much worth a read. It’s a reminder of that wing of contemporary tech culture that grows quite directly out of Sixties counterculture, with Stewart Brand’s Whole Earth Catalog as one of the chief midwives of the transition.

I think TechGnosis continues to speak despite its sometime anachronism because it taps the enigmatic currents of fantasy, hope, and fear that continue to charge our tools, and that speak even more deeply to the profound and peculiar ways those tools shape us in return. These mythic currents are as real as desire, as real as dream; nor do they simply dissipate when we recognize their sway. Nonetheless, technoscience continues to propagate the Enlightenment myth of a rational and calculated life without myths, and to promote values like efficiency, productivity, entrepreneurial self-interest, and the absolute adherence to reductionist explanations for all phenomena. All these day-lit values undergird the global secularism that forms the unspoken framework for public and professional discourse, for the “worldview” of our faltering West. At the same time, however, media and technology unleash a phantasmagoric nightscape of identity crises, alternate realities, memetic infection, dread, lust, and the specter of invisible (if not diabolical) agents of surveillance and control. That these two worlds of day and night are actually one matrix remains our central mystery: a rational world of paradoxically deep weirdness where, as in some dying earth genre scenario, technology and mystery lie side-by-side, not so much as explanations of the world but as experiences of the world.

The hidden relation between these two worlds — Sixties counterculture and today’s Silicon Valley business world — is, I believe, one of the major themes of Thomas Pynchon’s fiction and the chief theme of his late diptych, Inherent Vice and Bleeding Edge. If you want to understand the moral world we’re living in, you could do a lot worse than to read and reflect on those two novels.

I recently read George Marsden’s brief but deeply insightful Twilight of the American Enlightenment, and the most fascinating element of that book is the way Marsden traces the lines of thought and influence that start with America’s great victory in World War II, lead to a sense of spiritual crisis in the 1950s — Is America morally worthy of its leading place in the world? And have we achieved it at the cost of creating a lonely crowd made up of organization men? — and go on from there by a kind of inevitable logic to the social and sexual revolutions of the 1960s.

Those developments were made inevitable, it seems to me, by a single conflict in the American mind. Marsden:

At all these levels of mainstream American life, from the highest intellectual forums to the most practical everyday advice columns, two ... authorities were almost universally celebrated: the authority of the scientific method and the authority of the autonomous individual. If you were in a public setting in the 1950s, two of the things that you might say on which you would likely get the widest possible assent were, one, that one ought to be scientific, and two, that one ought to be true to oneself.

Not much has changed — except that today’s leading technology companies claim to have united these two authorities. They give us, they say, the very science that we need in order to be true to ourselves. Erik Davis’s TechGnosis is one way of believing in that promise, but by no means the only way.

Friday, May 1, 2015

choose your own (reading) adventure

About e-reading: a kind of Standard Model has emerged among book-lovers. For example:

One of the most imperishable notions ever set down about a personal library can be found inside Sven Birkerts’s essay “Notes from a Confession.” Birkerts speaks of “that kind of reading which is just looking at books,” of the “expectant tranquility” of sitting before his library: “Just to see my books, to note their presence, their proximity to other books, fills me with a sense of futurity.” Expectant tranquility and sense of futurity — those are what the noncollector and what the downloader of e-books does not experience, because only an enveloping presence permits them.

I’m sorry but your Nook has no presence.

That’s William Giraldi, who, despite what he says, is definitely not sorry. And here’s Dustin Illingworth:

As an unabashed sensualist, the most obvious deficiency of the digital book, to me, is the scarcity of its satisfactions: its lack of spine and alarming weightlessness, its abstract and odorless pages, the tactile sterility of the entire enterprise. It seems to me that a book’s physicality is part and parcel of its ability to convey an intimate and lasting experience. Books are meant to be handled and smelled, fingers run along worn cloth, words underlined in good black ink, dog-eared corners folded and refolded. Indeed, the materiality of books — pages, fonts, marginalia, previous owners, stains — channel, for me, a kind of literary magic, an aura of lived memory that the eBook cannot aspire to. The drops of blood in my copy of Dune (nosebleed, age 14), the wilted spots in Jude the Obscure, the profound and funny notes in Confederacy of Dunces written by a mystery reader I’ll never meet — this is where the physical book and the vitality of the reader come together, thickening with every encounter. Yes, the ideas within books, their collections of consciousness, are the important things; however, a physical book makes the conveyance itself an essential part of the endless enrichment: a monument to our relationship with the living, growing text.

I don't disagree with any of this ... well, actually, I ... Okay, let me put it this way: You have two choices.

One: You have as many beautiful books as you want. You get the worn cloth, the underlined words, the dog-eared corners — if you so desire. You can have them pristine, if that’s your preference. Even the coffee stains and, um, drops of blood that connect the materiality of your existence with the materiality of the book’s existence. All this can be yours. But you have no control over what books you get: titles, authors, text — all random. Maybe you get Tolstoy, maybe you get Danielle Steel, maybe you get Dr. Oz.

Two: You can only read on e-readers, with all the features (changeable font size, backups of annotations, etc.) such devices typically have. No codexes for you. But you get to choose the books you read.

Which way do you go?