Text Patterns - by Alan Jacobs

Wednesday, August 10, 2016

post latency warning

Folks, posts will be few and far between here, for a while. I'm working hard on a book, and life in general is sufficiently complicated that I don't have many unused brain cells. I'm finding it healthier and saner to devote my online time to my tumblr, where I mainly post images that I enjoy contemplating. And you know, I've found some very cool stuff lately, so please check it out.

Ciao!

Monday, August 8, 2016

secrets of Apple (not) revealed

John Gruber and others are praising this Fast Company feature on Apple, but I don’t see why. It’s all like this:

The iPhone will continue to morph, in ways designed to ensure its place as the primary way we interact with and manage our technological experience for the foreseeable future. Apple will sell more devices, but its evolution will also enable it to explore new revenue opportunities. This is how Apple adapts. It expands its portfolio by building on the foundation laid by earlier products. That steady growth has made it broader and more powerful than any other consumer technology company.

Contentless abstraction. The iPhone will somehow “morph.” Apple will explore unnamed “revenue opportunities.” Also, Tim Cook thinks health care is really important, and Apple products need to work with networks Apple doesn’t own. Revelatory! Elsewhere in the article we learn that far more people are working on the Maps app than when it launched — that’s about as concrete as the article gets.

Everybody who writes about Apple ends up doing this: madly whipping the egg whites into big fluffy peaks. Because Apple never tells anyone anything.

Saturday, August 6, 2016

Self on digital

Will Self's meditation on digital imagery in the Guardian is a peculiar one — I'm not sure he quite knows what he wants to say. If I had to sum it up, I'd call it an essay suspended between two fears: first, that digital imagery in the end won't prove to be a perfectly seamless simulacrum of experience; second, that it will.

Joseph Brodsky once wrote, “should the truth about the world exist, it’s bound to be non-human”. Now we have the temerity to believe we can somehow perceive that non-human reality, although to do so would be a contradiction in terms. Over the next few years a new generation of television receivers will be rolled out. (We might call them “visual display units” since the formal distinction between computers and televisions is on the point of dissolving.) These machines are capable of displaying imagery at ultra-high definition; so-called “8K UHDTV” composes pictures employing 16 times the number of pixels of current high definition TV, which presents us – if we could only see it – with the bizarre spectacle of an image that exists in a higher resolution than our own eyes are capable of perceiving. Will this natural limitation on our capacity to technologically reproduce the world’s appearance lead our scientists and technologists to desist? I doubt it: the philosopher John Gray observes that: “In evolutionary prehistory, consciousness emerged as a side-effect of language. Today it is a byproduct of the media.”

Whatever the digital is and does, Self seems to be saying, it makes us. Thus his fascination with the distorted and decomposed images of Wiktor Forss:


We might compare these images to others that also decompose the digital, give it the qualities of the analog, but in a different way. See Robin Sloan on video style transfer — for instance, Raiders of the Lost Ark in the style of Gustave Doré: give it a watch. What for Self is a source of fear and anxiety could also be a source of playfulness and delight. I'm not sure we need to be quite so angst-ridden about the whole thing.

But in any case, how does Self take the argument, or rather the experience, beyond the sources he cites: Walter Benjamin, Jorge Luis Borges, Marshall McLuhan? (Especially Benjamin.) It must be hard for a writer to accept that other and earlier writers have already told his story better than he can tell it.

Friday, August 5, 2016

work in progress

Folks, as some of you know, I've been working for some time on a book about Christian intellectuals in the second world war. But I've set that aside for a while to work on a different project, one prompted by what I guess I'll call the exigencies of the current moment. It'll be called How to Think: A Guide for the Perplexed, and you can get more details about it here.

If you have any questions about it I'd be happy to answer them in the comments below.

Wednesday, August 3, 2016

word games

Ian Bogost reports on what some people think of as a big moment in the history of international capitalism:

At the close of trading this Monday, the top five global companies by market capitalization were all U.S. tech companies: Apple, Alphabet (formerly Google), Microsoft, Amazon, and Facebook.

Bloomberg, which reported on the apparent milestone, insisted that this “tech sweep” is unprecedented, even during the dot-com boom. Back in 2011, for example, Exxon and Shell held two of the top spots, and Apple was the only tech company in the top five. In 2006, Microsoft held the only slot—the others were in energy, banking, and manufacture. But things have changed. “Your new tech overlords,” Bloomberg christened the five.

And then Bogost zeroes in on what’s peculiar about this report:

But what makes a company a technology company, anyway? In their discussion of overlords, Bloomberg’s Shira Ovide and Rani Molla explain that “Non-tech titans like Exxon and GE have slipped a bit” in top valuations. Think about that claim for a minute, and reflect on its absurdity: Exxon uses enormous machinery to extract the remains of living creatures from geological antiquity from deep beneath the earth. Then it uses other enormous machinery to refine and distribute that material globally. For its part, GE makes almost everything — from light bulbs to medical imaging devices to wind turbines to locomotives to jet engines.

Isn’t it strange to call Facebook, a company that makes websites and mobile apps, a “technology” company, but to deny that moniker to firms that make diesel trains, oil-drilling platforms, and airplane engines?

I’m reminded here of a comment the great mathematician G. H. Hardy once made to C. P. Snow: “Have you noticed how the word ‘intellectual’ is used nowadays? There seems to be a new definition that doesn’t include Rutherford or Eddington or Dirac or Adrian or me. It does seem rather odd.”

As Bogost points out, the financial world uses “technology” to mean “computer technology.” But, he also argues, this is not only nonsensical, it’s misleading. Try depriving yourself of the word “technology” to describe those companies and things start looking a little different. “Almost all of Google’s and Facebook’s revenue, for example, comes from advertising; by that measure, there’s an argument that those firms are really Media industry companies, with a focus on Broadcasting and Entertainment.” Amazon is a retailer. Among those Big Five only Apple and Microsoft are computing companies, and they are so in rather different ways, since Microsoft makes most of its money from software, Apple from hardware.

Here’s a useful habit to cultivate: Notice whenever people are leaning hard on a particular word or phrase, making it do a lot of work. Then try to formulate what they’re saying without using that terminology. The results can be illuminating.

I/O


I’m still thinking about the myths and metaphors we live by, especially the myths and metaphors that have made modernity, and the world keeps giving me food for thought.

So speaking of food, recently I was listening to a BBC Radio show about food — I think it was this one — and one of the people interviewed was Ken Albala, a food historian at the University of the Pacific. Albala made the fascinating comment that in the twentieth century, much of our thinking about proper eating was shaped (bent, one might better say) by thinking of the human body as a kind of internal combustion engine. Just as in the 21st century we think of our brains as computers, in the 20th we thought of our bodies as automobiles.

But perhaps, given the dominance of digital computing in our world, including its imminent takeover of the world of automobiling, we might be seeing a shift in how we conceive of our bodies, from analog metaphors to digital ones. Isn’t that what Soylent is all about, and the fascination with smoothies? — Making nutrition digital! An amalgamated slurry of ingredients goes in one end; an amalgamated slurry of ingredients comes out the other end. Input/Output, baby. Simple as that.


UPDATE: My friend James Schirmer tells me about Huel — human fuel! Or, as pretty much everyone will think of it, "gruel but with an H."


"Please, sir, may I have some more"?

Saturday, July 30, 2016

my boilerplate letter to social media services

Hello,

Someone has signed up for your service using my email address. (And, interestingly, using this name.) Please delete my email address from your database.

The email I got welcoming me to your service came from a no-reply address, so I had to go to your website and dig around until I found a contact form. I see that you require me to give you my name as well as my email address, so you're demanding that I tell you things about myself I’d rather you not know because you aren't smart enough, or don't care enough, to include one simple step in your sign-up process: Confirm that this is your email address.

This neglect is both discourteous and stupid. It’s discourteous because it effectively allows anyone who wants to spam someone else to use your service as a quick-and-easy tool for doing so. It’s stupid because then anyone so victimized will tag anything that comes from you as spam, which will eventually lead to your whole company being identified as a spammer. You’ll all be sitting around in the office saying, between chugs of Soylent, “We keep ending up in Gmail's spam filters, what’s up with that? Those idiots.”

So, again, please delete my email address from your database. And please stop being a rude dumbass, like all the other rude dumbasses to whom I have to send this message, more frequently than most people would believe.

Most sincerely yours,

Alan Jacobs

Wednesday, July 27, 2016

on expertise

One of the most common refrains in the aftermath of the Brexit vote was that the British electorate had acted irrationally in rejecting the advice and ignoring the predictions of economic experts. But economic experts have a truly remarkable history of getting things wrong. And it turns out, as Daniel Kahneman explains in Thinking, Fast and Slow, that there is a close causal relationship between being an expert and getting things wrong:

People who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options. Even in the region they knew best, experts were not significantly better than nonspecialists. Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” [Philip] Tetlock writes. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of The New York Times in ‘reading’ emerging situations.” The more famous the forecaster, Tetlock discovered, the more flamboyant the forecasts. “Experts in demand,” he writes, “were more overconfident than their colleagues who eked out existences far from the limelight.”

So in what sense would it be rational to trust the predictions of experts? We all need to think more about what conditions produce better predictions — and what skills and virtues produce better predictors. Tetlock and Gardner have certainly made a start on that:

The humility required for good judgment is not self-doubt – the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes. This is true for fools and geniuses alike. So it’s quite possible to think highly of yourself and be intellectually humble. In fact, this combination can be wonderfully fruitful. Intellectual humility compels the careful reflection necessary for good judgment; confidence in one’s abilities inspires determined action....

What's especially interesting here is the emphasis not on knowledge but on character — what's needed is a certain kind of person, and especially the kind of person who is humble.

Now ask yourself this: Where does our society teach, or even promote, humility?

Monday, July 25, 2016

some thoughts on the humanities

I can't say too much about this right now, but I have been working with some very smart people on a kind of State of the Humanities document — and yes, I know there are hundreds of those, but ours differs from the others by being really good.

In the process of drafting a document, I wrote a section that ... well, it got cut. I'm not bitter about that, I am not at all bitter about that. But I'm going to post it here. (It is, I should emphasize, just a draft and I may want to revise and expand it later.)



Nearly fifty years ago, George Steiner wrote of the peculiar character of intellectual life “in a post-condition” — the perceived sense of living in the vague aftermath of structures and beliefs that can never be restored. Such a condition is often proclaimed as liberating, but at least equally often it is experienced as (in Matthew Arnold's words) a suspension between two worlds, “one dead, / The other powerless to be born.” In the decades since Steiner wrote, humanistic study has been more and more completely understood as something we do from within such a post-condition.

But the humanities cannot be pursued and practiced with any integrity if these feelings of belatedness are merely accepted, without critical reflection and interrogation. In part this is because, whatever else humanistic study is, it is necessarily critical and inquiring in whatever subject it takes up; but also because humanistic study has always been and must always be willing to let the past speak to the present, as well as the present to the past. The work, the life, of the humanities may be summed up in an image from Kenneth Burke’s The Philosophy of Literary Form (1941):

Imagine that you enter a parlor. You come late. When you arrive, others have long preceded you, and they are engaged in a heated discussion, a discussion too heated for them to pause and tell you exactly what it is about. In fact, the discussion had already begun long before any of them got there, so that no one present is qualified to retrace for you all the steps that had gone before. You listen for a while, until you decide that you have caught the tenor of the argument; then you put in your oar. Someone answers; you answer him; another comes to your defense; another aligns himself against you, to either the embarrassment or gratification of your opponent, depending upon the quality of your ally’s assistance. However, the discussion is interminable. The hour grows late, you must depart. And you do depart, with the discussion still vigorously in progress.

It is from this ‘unending conversation’ that the materials of your drama arise.

It is in this spirit that scholars of the humanities need to take up the claims that our movement is characterized by what it has left behind — the conceptual schemes, or ideologies, or épistèmes, to which it is thought to be “post.” In order to grasp the challenges and opportunities of the present moment, three facets of our post-condition need to be addressed: the postmodern, the posthuman, and the postsecular.

Among these terms, postmodern was the first-coined, and was so overused for decades that it now seems hoary with age. But it is the concept that lays the foundation for the others. To be postmodern, according to the most widely shared account, is to live in the aftermath of the collapse of a great narrative, one that began in the period that used to be linked with the Renaissance and Reformation but is now typically called the “early modern.” The early modern — we are told, with varying stresses and tones, by a host of books and thinkers from Foucault’s Les Mots et les choses (1966) to Stephen Greenblatt’s The Swerve (2011) — marks the first emergence of Man, the free-standing, liberated, sovereign subject, on a path of self-emancipation (from the bondage of superstition and myth) and self-enlightenment (out of the darkness that precedes the reign of Reason). Among the instruments that assisted this emancipation, none were more vital than the studia humanitatis — the humanities. The humanities simply are, in this account of modernity, the discourses and disciplines of Man. And therefore if that narrative has unraveled, if the age of Man is over — as Rimbaud wrote, “Car l’Homme a fini! l’Homme a joué tous les rôles!” — what becomes of the humanities?

This logic is still more explicit and forceful with regard to the posthuman. The idea of the posthuman assumes the collapse of the narrative of Man and adds to it an emphasis on the possibility of remaking human beings through digital and biological technologies leading ultimately to a transhuman mode of being. From within the logic of this technocratic regime the humanities will seem irrelevant, a quaint relic of an archaic world.

The postsecular is a variant on or extension of the postmodern in that it associates the narrative of man with a “Whig interpretation of history,” an account of the past 500 years as a story of inevitable progressive emancipation from ancient, confining social structures, especially those associated with religion. But if the age of Man is over, can the story of inevitable secularization survive it? The suspicion that it cannot generates the rhetoric of the postsecular.

(In some respects the idea of the postsecular stands in manifest tension with the posthuman — but not in all. The idea that the posthuman experience can be in some sense a religious one thrives in science fiction and in discursive books such as Erik Davis’s TechGnosis [1998] and Ray Kurzweil’s The Age of Spiritual Machines [1999] — the “spiritual” for Kurzweil being “a feeling of transcending one’s everyday physical and mortal bounds to sense a deeper reality.”)

What must be noted about all of these master concepts is that they were articulated, developed, and promulgated primarily by scholars in the humanities, employing the traditional methods of humanistic learning. (Even Kurzweil, with his pronounced scientistic bent, borrows the language of his aspirations — especially the language of “transcendence” — from humanistic study.) The notion that any of these developments renders humanistic study obsolete is therefore odd if not absurd — as though the humanities exist only to erase themselves, like a purely intellectual version of Claude Shannon’s Ultimate Machine, whose only function is, once it's turned on, to turn itself off.

But there is another and better way to tell this story.

It is noteworthy that, according to the standard narrative of the emergence of modernity, the idea of Man was made possible by the employment of a sophisticated set of philological tools in a passionate quest to understand the alien and recover the lost. The early humanists read the classical writers not as people exactly like them — indeed, what made the classical writers different was precisely what made them appealing as guides and models — but nevertheless as people, people from whom we can learn because there is a common human lifeworld and a set of shared experiences. The tools and methods of the humanities, and more important the very spirit of the humanities, collaborate to reveal Burke’s “unending conversation”: the materials of my own drama arise only through my dialogical encounter with others, those from the past whose voices I can discover and those from the future whose voices I imagine. Discovery and imagination are, then, the twin engines of humanistic learning, humanistic aspiration. It was in just this spirit that, near the end of his long life, the Russian polymath Mikhail Bakhtin wrote in a notebook,

There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future).... At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue’s subsequent development along the way they are recalled and invigorated in new form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival.

The idea that underlies Bakhtin’s hopefulness, that makes discovery and imagination essential to the work of the humanities, is, in brief, Terence’s famous statement, clichéd though it may have become: Homo sum, humani nihil a me alienum puto. To say that nothing human is alien to me is not to say that everything human is fully accessible to me, fully comprehensible; it is not to erase or even to minimize cultural, racial, or sexual difference; but it is to say that nothing human stands wholly outside my ability to comprehend — if I am willing to work, in a disciplined and informed way, at the comprehending. Terence’s sentence is best taken not as a claim of achievement but as an essential aspiration; and it is the distinctive gift of the humanities to make that aspiration possible.

It is in this spirit that those claims that, as we have noted, emerged from humanistic learning, must be evaluated: that our age is postmodern, posthuman, postsecular. All the resources and practices of the humanities — reflective and critical, inquiring and skeptical, methodologically patient and inexplicably intuitive — should be brought to bear on these claims, and not with ironic detachment, but with the earnest conviction that our answers matter: they are, like those master concepts themselves, both diagnostic and prescriptive: they matter equally for our understanding of the past and our anticipating of the future.

Tuesday, July 19, 2016

The World Beyond Kant's Head

For a project I’m working on, and will be able to say something about later, I re-read Matthew Crawford’s The World Beyond Your Head, and I have to say: It’s a really superb book. I read it when it first came out, but I was knee-deep in writing at the time and I don’t think I absorbed it as fully as I should have. I quote Crawford in support of several of the key points I make in my theses on technology, but his development of those points is deeply thoughtful and provocative, even more than I had realized. If you haven’t read it, you should.

But there’s something about the book I want to question. It concerns philosophy, and the history of philosophy.

In relation to the kinds of cultural issues Crawford deals with here -- issues related to technology, economics, social practices, and selfhood -- there are two ways to make use of the philosophy of the past. The first involves illumination: one argues that reading Kant and Hegel (Crawford’s two key philosophers) clarifies our situation, provides alternative ways of conceptualizing and responding to it, and so on. The other way involves causation: one argues that we’re where we are today because of the triumphal dissemination of, for instance, Kantian ideas throughout our culture.

Crawford does some of both, but in many respects the chief argument of his book is based on a major causal assumption: that much of what’s wrong with our culture, and with our models of selfhood, arises from the success of certain of Kant’s ideas. I say “assumption” because I don’t think that Crawford ever actually argues the point, and I think he doesn’t argue the point because he doesn’t clearly distinguish between illumination and causation. That is, if I’ve read him rightly, he shows that a study of Kant makes sense of many contemporary phenomena and implicitly concludes that Kant’s ideas therefore are likely to have played a causal role in the rise of those phenomena.

I just don’t buy it, any more than I buy the structurally identical claim that modern individualism and atomization all derive from the late-medieval nominalists. I don’t buy those claims because I have never seen any evidence for them. I am not saying that those claims are wrong; I just want to know how it happens: how you get from extremely complex and arcane philosophical texts, texts that only a handful of people in history have ever been able to read, to world-shaping power. I don’t see how it’s even possible.

One of Auden’s most famous lines is: “Poetry makes nothing happen.” He was repeatedly insistent on this point. In several articles and interviews he commented that the social and political history of Europe would be precisely the same if Dante, Shakespeare, and Mozart had never lived. I suspect that this is true, and that it’s also true of philosophy. I think that we would have the techno-capitalist society we have if Duns Scotus, William of Ockham, Immanuel Kant, and G. W. F. Hegel had never lived. If you disagree with me, please show me the path which those philosophical ideas followed to become so world-shapingly dominant. I am not too old to learn.

Sunday, July 17, 2016

some friendly advice about online writing and reading

Dennis Cooper, a writer and artist, is a pretty unsavory character, so in an ideal world I wouldn't choose him as a poster boy for the point I want to make, but ... recently Google deleted his account, and along with it, 14 years of blog posts. And they are quite within their rights to do so.

People, if you blog, no matter on what platform, do not write in the online CMS that your platform provides. Instead, write in a text editor or, if you absolutely must, a word processing app, save it to your very own hard drive, and then copy and paste into the CMS. Yes, it’s an extra step. It’s also absolutely worth it, because it means you always have a plain-text backup of your blog posts.

You should of course then back up your hard drive in at least two different ways (I have an external drive and Dropbox).

Why write in a text editor instead of a word processing app? Because when you copy from the latter, especially MS Word, you tend to pick up a lot of unnecessary formatting cruft that can make your blog post look different than you want it to. I write in BBEdit using Markdown, and converting from Markdown to HTML yields exceptionally clean copy. If you’d like to try it without installing scripts, you can write a little Markdown and convert it to HTML by using this web dingus — there are several others like it.
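
If you want to see how little is involved in that conversion step, here is a minimal sketch in Python, assuming the third-party markdown package is installed; the file names are placeholders, and Gruber's original Perl script or the dingus above does the same job.

```python
# Minimal sketch: convert a Markdown draft into clean HTML for pasting into a CMS.
# Assumes the third-party "markdown" package (pip install markdown); the file
# names are placeholders.
import markdown

with open("draft.md", encoding="utf-8") as f:
    text = f.read()

html = markdown.markdown(text)  # plain Markdown in, clean HTML out

with open("draft.html", "w", encoding="utf-8") as f:
    f.write(html)
```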

While I’m giving advice about writing on the web, why not some about reading as well? Too many people rely on social-media sites like Facebook and Twitter to get their news, which means that what they get is unpredictably variable, depending on what other people link to and how Facebook happens to be tweaking its algorithms on any given day. Apple News is similarly uncertain. And I fundamentally dislike the idea of reading what other people, especially other people who work for mega-corporations, want me to see.

Try using an RSS reader instead. RSS remains the foundation of the open web, and the overwhelming majority of useful websites have RSS feeds. There are several web-based RSS readers out there — I think the best are Feedly and NewsBlur — and when you build up a roster of sites you profit from reading, you can export that roster as an OPML file and use it with a different service. And if you don't like those web interfaces, you can get a feed-reading app that works with those (and other) services: I’m a big fan of Reeder, though my introduction to RSS was NetNewsWire, which I started using when it was little more than a gleam in Brent Simmons’s eye.
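
To give a sense of how simple this plumbing is, here is a rough sketch that reads an exported OPML file and prints the newest headline from each subscription. It assumes the third-party feedparser package and a hypothetical file name; OPML itself is just XML, with one outline element per feed.

```python
# Rough sketch: print the newest headline from every feed in an OPML export.
# Assumes the third-party "feedparser" package (pip install feedparser);
# "subscriptions.opml" is a hypothetical file name.
import xml.etree.ElementTree as ET
import feedparser

tree = ET.parse("subscriptions.opml")
# Each subscription is an <outline> element carrying an xmlUrl attribute.
feed_urls = [o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")]

for url in feed_urls:
    feed = feedparser.parse(url)
    if feed.entries:
        print(f"{feed.feed.get('title', url)}: {feed.entries[0].title}")
```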

So, the upshot: in online writing and reading alike, look for independence and sustainability. Your life will be better for it.

Monday, July 11, 2016

Green Earth

Another Kim Stanley Robinson novel, and another set of profoundly mixed feelings. Green Earth, which was published last year, is a condensation into a single volume of three novels that appeared in the middle of the last decade and are generally known as the Science in the Capital trilogy.

Robinson is an extraordinarily intelligent writer with a wide-ranging mind, and no one writes about either scientific thinking or the technological implementation of science with the clarity and energy that he evidences. Moreover, he has an especially well thought-out, coherent and consistent understanding of the world — what some people (not me) call a worldview, what used to be called "a philosophy." But that philosophy is also what gets him into trouble, because he has overmuch trust in people who share it and insufficient understanding of, or even curiosity about, people who don’t.

Robinson is a technocratic liberal universalist humanitarian (TLUH), and though Green Earth is in many ways a fascinating novel, an exceptionally well-told story, it is also to a somewhat comical degree a TLUH wish-fulfillment fantasy. I can illustrate this through a brief description of one of the novel's characters: Phil Chase, a senator from Robinson's native California whose internationalist bent is so strong that his many fans call him the World's Senator, who rises to become President, whose integrity is absolute, who owes nothing to any special-interest groups, who listens to distinguished scientists and acts on their recommendations, who even marries a distinguished scientist and — this is the cherry on the sundae — has his marriage blessed by the Dalai Lama. TLUH to the max.

In Green Earth Robinson's scientists tend to be quite literally technocrats, in that they work for, or have close ties to, government agencies, which they influence for good. Only one of them does anything wrong in the course of the book, and that — steering a National Science Foundation panel away from supporting a proposal that only some of them like anyway — is scarcely more than a peccadillo. And that character spends the rest of the book being so uniformly and exceptionally virtuous that, it seems to me, Robinson encourages us to forget that fault.

Robinson's scientists are invariably excellent at what they do, honest, absolutely and invariably committed to the integrity of scientific procedure, kind to and supportive of one another, hospitable to strangers, deeply concerned about climate change and the environment generally. They eat healthily, get plenty of exercise, and drink alcohol on festive occasions but not habitually to excess. They are also all Democrats.

Meanwhile, we see nothing of the inner lives of Republicans, but we learn that they are rigid, without compassion, owned by the big oil companies, practiced in the blackest arts of espionage against law-abiding citizens, and associated in not-minutely-specified ways with weirdo fundamentalist Christian groups who believe in the Rapture.

Green Earth is really good when Robinson describes the effects of accelerated climate change and the various means by which it might be addressed. And I liked his little gang of Virtuous Hero Scientists and wanted good things to happen to them. But the politically Manichaean character of the book gets really tiresome when extended over several hundred pages. Robinson is just so relentless in his flattery of his likely readers' presuppositions — and his own. (The Mars Trilogy is so successful in part because all its characters are scientists, and if they were as uniformly virtuous as the scientists in Green Earth there would be no story.)

It's fascinating to me that Robinson is so extreme in his caricatures, because in some cases he's quite aware of their dangers. Given what I've just reported, it wouldn't be surprising if Robinson were attracted to Neil deGrasse Tyson's imaginary republic of Rationalia, but he's too smart for that. At one point the wonderfully virtuous scientist I mentioned earlier hears a lecture by a Tibetan Buddhist who says, "An excess of reason is itself a form of madness" — a quote from an ancient sage, and an echo of G.K. Chesterton to boot, though Robinson may not know that. Our scientist instantly rejects this idea — but almost immediately thereafter starts thinking about it and can’t let the notion go; it sets him reading, and eventually he comes across the work of Antonio Damasio, who has shown pretty convincingly that people who operate solely on the basis of "reason" (as usually defined) make poorer decisions than those whose emotions are in good working order and play a part in decision-making.

So Robinson is able to give a subtle and nuanced account of how people think — how scientists think, because one of the subtler themes of the book is the way that scientists think best when their emotions are engaged, especially the emotion of love. Those who love well think well. (But St. Augustine told us that long ago, didn't he?)

Given Robinson's proper emphasis on the role of the whole person in scientific thinking, you'd expect him to have a stronger awareness of the dangers of thinking according to the logic of ingroups and outgroups. But no: in this novel the ingroups are all the way in and the outgroups all the way out. Thus my frustration.

Still, don't take these complaints as reasons not to read Green Earth or anything else by Robinson. I still have more of his work to read and I'm looking forward to it. He always tells a good story and I always learn a lot from reading his books. And I can't say that about very many writers. Anyway, Adam Roberts convinced me that I had under-read one of Robinson's other books, so maybe that'll happen again....

Saturday, July 9, 2016

Happy Birthday, Pinboard!

Maciej Ceglowski tells us that Pinboard turns seven today. I started using Pinboard on July 14, 2009, so I've been there since it was about a week old. (Didn't realize that until just now.)

I think Pinboard is just the greatest thing, and I can't even really explain why. I suppose because it primarily does one thing — enable you to bookmark things you read online — and does it with simple elegance. It's the closest thing to an organizational system I have. Here's my Pinboard page, or my Pinboard, as the kids say.

You can also sort of hack Pinboard to stretch its capabilities. I had known for some time that the notes you create in Pinboard use Markdown syntax before I realized that this meant you could embed images, as I've done here, or even videos. (These are loaded from the original URL, not stored by Pinboard, so it wouldn't be a means of long-term preservation.)

I have used Pinboard to note blog-post ideas — and indeed one of the most densely-populated tags in my Pinboard is called bloggable — but I have often fantasized, as my friend Matt Thomas knows, about turning Pinboard into a blogging platform, and basically moving my whole online life there. The problem is that this runs counter to my oft-professed devotion to the Own Your Turf ideal. I suppose if I were truly devoted to that ideal I wouldn't use Pinboard at all, but when a service performs its chosen task so well, and that task is so important for my work, I'm not inclined to sacrifice quality to principle. Not yet anyway. Anyhow, Pinboard makes it easy to download your whole archive, which I do from time to time by way of backup.
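
For the curious, that periodic backup can be automated. Here is a minimal sketch, assuming Pinboard's v1 API (the posts/all endpoint with an auth_token parameter — check the API documentation and your settings page for your token); the file-naming scheme is my own invention.

```python
# Minimal sketch: save a dated JSON snapshot of every Pinboard bookmark.
# Assumes Pinboard's v1 API (posts/all with auth_token); the token below is a
# placeholder, and the output file name is arbitrary.
import datetime
import urllib.request

API_TOKEN = "username:XXXXXXXXXXXXXXXX"  # replace with your own token
url = ("https://api.pinboard.in/v1/posts/all"
       f"?auth_token={API_TOKEN}&format=json")

data = urllib.request.urlopen(url).read()
filename = f"pinboard-backup-{datetime.date.today()}.json"
with open(filename, "wb") as f:
    f.write(data)
print(f"Saved {len(data)} bytes to {filename}")
```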

Two more notes, while we're in a note-making mood:

Note the first: The true master of Pinboard is Roberto Greco, whom Robin Sloan has called the internet's "idea sommelier."

Note the second: In the birthday greeting that's my first link above, Ceglowski makes a point of noting the outside funding he has received in creating and maintaining Pinboard: zero. Not a penny. He also provides the revenue Pinboard brings in, so that you can see something important: he makes a pretty good living. Almost as long as he's been running Pinboard he's been advocating for this way of doing things: instead of seeking venture capital in hopes of getting filthy rich, and therefore inevitably becoming a slave to your investors, why not choose to be a small business owner? Didn't this used to be the American dream, or one of them, to be your own person, work for no one except yourself, determine your own hours, live your own life?

Ceglowski has not only advocated for this way of life in tweets and talks, he even created The Pinboard Co-Prosperity Cloud, a competition for young entrepreneurs, the winner of which would get plenty of advice from him and $37. It's hard to imagine a more countercultural figure in Silicon Valley, which I guess is why Ceglowski gets invited to give so many talks. For the money-obsessed tech-startup world, it's like having one of the zoo animals come to you.

Thursday, July 7, 2016

this reader's update

It's been widely reported that in the past couple of years e-book sales have leveled off. Barring some currently unforeseen innovations — and those could certainly happen at any time — we have a situation in which relatively few people read books on dedicated e-readers like the Kindle, considerably more people read on their smartphones, and the great majority read paper codexes.

My own reading habits have not leveled off: I have become more and more of a Kindle reader. This surprises me somewhat, because at the same time I have learned to do more and more of my writing by hand, in notebooks, and have limited my participation in the digital realm. So why am I reading so much on my Kindle? Several reasons:

  • It would be disingenuous of me to deny that the ability to buy books instantly and to be reading them within a few seconds of purchase plays a role. I am as vulnerable to the temptations of immediate gratification as anyone else.
  • When I'm reading anything that demands intense or extended attention I don't want to do anything except read, so reading on a smartphone, with all its distractions, is not an option. (Plus, the Kindle's screen is far easier on my eyes.)
  • I own thousands of books and it's not easy to find room for new ones. My office at Baylor is quite large, and I could fit another bookcase in it, but I read at home far more often than at the office, and I already have books stacked on the floor in my study because the bookshelves are filled. So saving room is a factor — plus, anything I have on the Kindle is accessible wherever I am, since the Kindle is always in my backpack. I therefore avoid those Oh crap, I left that book at the office moments. (And as everyone knows who keeps books in two places, the book you need is always in the place where you aren't.)
  • I highlight and annotate a good bit when I read, and the Kindle stores those highlighted passages and notes in a text file, which I can easily copy to my computer. I do that copying once a week or so. So I have a file called MyClippings.txt that contains around 600,000 words of quotations and notes, and will own that file even if Amazon kills the Kindle tomorrow. My text editor, BBEdit, can easily handle documents far larger than that, so searching is instantaneous. It's a very useful research tool. (A rough search sketch follows this list.)
  • Right now I'm re-reading my hardcover copy of Matthew Crawford's The World Beyond Your Head — more on that in another post — and it's an attractive, well-designed book (with one of the best covers ever), a pleasure to hold and read. But as a frequent Kindle user I can't help being aware how many restrictions reading this way places upon me: I have to have an adequate light source, and if I'm going to annotate it only a small range of postures is available to me. (You know that feeling where you're trying to make a note while lying on your back and holding the book in the air, or on your upraised knee, and your handwriting gets shaky and occasionally unreadable because you can't hold the book steady enough? — that's no way to live.) Especially as I get older and require more light to read by than I used to, the ability to adjust the Kindle's screen to my needs grows more appealing; and I like being able to sit anywhere, or lie down, or even walk around, while reading without compromising my ability to see or annotate the text.
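
Here is the kind of thing that plain-text clippings file makes easy — a rough search sketch in Python. The "==========" separator and the title/metadata/text layout are assumptions about the usual format of the file; adjust them to match your own copy.

```python
# Rough sketch: search a Kindle clippings file for a phrase and print each match.
# Assumes the common layout: entries separated by "==========", each with a
# title line, a metadata line, a blank line, and then the highlighted text.
import sys

def search_clippings(path, phrase):
    with open(path, encoding="utf-8-sig") as f:  # utf-8-sig strips a BOM if present
        entries = f.read().split("==========")
    for entry in entries:
        lines = [line.strip() for line in entry.strip().splitlines()]
        if len(lines) < 3:
            continue
        title = lines[0]
        text = " ".join(line for line in lines[2:] if line)
        if phrase.lower() in text.lower():
            print(f"{title}\n  {text}\n")

if __name__ == "__main__":
    # e.g. python search_clippings.py MyClippings.txt "attention"
    search_clippings(sys.argv[1], sys.argv[2])
```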

For me, reading on the Kindle has just one significant practical drawback: it's too easy to abandon books. And I don't mean books that I'm just not interested in — I'm generally in favor of abandoning those — but books that for any number of reasons I need to stick with and finish. I can just tap my way over to something else, and that's easier than I'd like it to be. (That I'm not the only one who does this can be seen by anyone who activates the Popular Highlights feature on a Kindle: almost all of them are in the first few pages of books.)

By contrast, when I'm reading a codex, not only am I unable to look at a different book while holding the same object, I have a different perception of my investment in the text. I might read fifty pages of a book on Kindle and annotate it thoroughly, and then set it aside without another thought. But when I've annotated fifty pages of a codex, I am somehow bothered by all those remaining unread and unmarked pages. A book whose opening pages are marked up but the rest left untouched just feels like, looks like, an unfinished job. I get an itch to complete the reading so that I can see and take satisfaction from annotations all the way through. I never feel that way when I read an e-book.

That's the status report from this reader's world.


UPDATE: Via Jennifer Howard on Twitter, this report on book sales in the first half of 2016 suggests that the "revival of print books" is driven to a possibly troubling extent by the enormous popularity of adult coloring books. Maybe in the end e-books will be the last refuge for actual readers.

Wednesday, July 6, 2016

futurists wanted

At least, wanted in government, and by Farhad Manjoo, who laments the shutdown of the Office of Technology Assessment in 1995.

Of course, the future doesn’t stop coming just because you stop planning for it. Technological change has only sped up since the 1990s. Notwithstanding questions about its impact on the economy, there seems no debate that advances in hardware, software and biomedicine have led to seismic changes in how most of the world lives and works — and will continue to do so. 
Yet without soliciting advice from a class of professionals charged with thinking systematically about the future, we risk rushing into tomorrow headlong, without a plan. 
“It is ridiculous that the United States is one of the only nations of our size and scope in the world that no longer has an office that is dedicated to rigorous, nonpartisan research about the future,” Ms. Webb said. “The fact that we don’t do that is insane.” 
Or, as Mr. Toffler put it in “Future Shock,” “Change is avalanching upon our heads and most people are grotesquely unprepared to cope with it.”

I think Manjoo is correct in theory, but I simply cannot imagine any professional governmental futurists who are not simply stooges of the multinational tech companies. The study of the future has been bought at a price; I don't see it recovering its independence.

Tuesday, July 5, 2016

Black Panther


Three issues into the Ta-Nehisi Coates/Brian Stelfreeze Black Panther and I'm struggling. Coates has a really interesting vision here but his lack of experience in writing comics is showing, I think. Stelfreeze's art is quite beautiful, but there are a great many panels that are somewhat difficult to read, visually, and panel-to-panel continuity is seriously lacking. Sometimes I look at a whole page and can't tell why the panels are in the order they are. And in this third issue especially the story seems to advance hardly at all. I'm thinking of bailing out. Anybody else want to encourage me to stick with it?

the memes of the Brexit post-mortems

I don't have any strong opinions about the Brexit decision. In general I’m in favor of functioning with the smallest possible political units; but I’m also aware that to leave the EU would be a huge step with unforeseeable consequences, which is something my conservative disposition also resists. So: no strong opinion about whether Brexit is right or wrong. But I am fascinated by the post-mortems, especially as an observer of the internet, because what the internet makes possible is the instantaneous coalescing of opinion.

So, just a few days after the referendum, intellectual Remainers already have an established explanation, a kind of Copenhagen interpretation of the events meant to yield a Standard Account: Brexiters, motivated by hatred and resentment, acted in complete disregard of facts. I feel that I’ve read a hundred variations on this blog post by Matthew Flinders already, though not all the others have warmed so openly to the idea of an “architecture of politics” meant to “enforce truthfulness.” (What should we call the primary instrument of that architecture? The Ministry of Truth, perhaps?)

I’m especially interested in Flinders’ endorsement and perpetuation of the idea that Brexit marks the victory of “post-truth politics.” This has very rapidly become a meme — and quite a meme — and one of the signs of how it functions is that Flinders doesn't cite anyone’s use of it. He’s not pretending to have coined the term; he’s just treating it as an explanatory given — to continue my physics analogy, something like Planck’s Constant, useful to plug into your equation to make the numbers work out.

(By the way, I suspect the seed of the “post-truth politics” meme was planted by Stephen Colbert when he coined the term “truthiness”.)

The invocation of “post-truth politics” is very useful to someone like Flinders because it allows him to conflate actual disregard of facts with disregard of economic predictions — you can see how those categories get mixed up in this otherwise useful fact-checking of claims by Brexiters. When that conflation happens, you get to tar people who suspect economic and political forecasts with the same brush you use to tar people who disregard facts altogether and go with their gut — even though there are ample reasons to distrust economic and political forecasts, and indeed a kind of cottage industry in publishing devoted to explaining why so many forecasts are wrong.

There’s no question that many votes for Brexit were based on falsehoods or sheer ignorance. But when people who belong to the academic-expertise class suggest that all disagreement with their views may be chalked up to “post-truth politics” — and settle on that meme so quickly after they receive such a terrible setback to their hopes and plans — then it’s hard for me not to see the meme as defending against a threat to status. And that matters for academic life, and for the intellectual life more generally, because the instantaneous dominance of such a meme forecloses inquiry. There’s no need to look more closely at either your rhetoric or the substance of your beliefs if you already have a punchy phrase that explains it all.

Sunday, July 3, 2016

two apologies and a bleg

Apology One: I wrote a post a while back about hating time-travel stories, and almost immediately after I did so I started thinking of exceptions to that rule. I mean, I’ve been praising Adam Roberts’s The Thing Itself to the skies and it’s a time-travel story, though it’s also many other things. I thought of another example, and then another, and soon enough it became obvious to me that I don’t hate time-travel stories at all. I was just annoyed by one that I thought went wrong, largely because it reminded me of several others that I thought went wrong in very similar ways. So that was a classic case of rash blogging. I am truly sorry to writers and readers of time-travel stories, and I humbly repent and pledge amendment of life.

Apology Two: In a similarly fractious mood, I once wrote a screed against podcasts. But I have not given up on my search for podcasts — in part because I think the medium has so much promise — and since I wrote that post I have listened to a whole bunch of them, and have developed affection for a few. So let me again repent of the extremity of my language and the coarseness of my reactions.

In another post, I’ll do some capsule reviews of the podcasts I’ve been listening to in the past year, but for now I have, as we academics say, a comment and a question.

The comment is that the one kind of podcast I absolutely cannot abide is the most common kind: two dudes talking. Or three dudes, or three women, or any combination of genders — it’s the chatting-in-front-of-a-microphone that drives me nuts. The other day I tried listening to Ctrl-Walt-Delete, but when Walt Mossberg and Nilay Patel spent the first five minutes discussing what the sports teams of the schools they had attended were called, I said Finis, done, I’m outta here. No, I like podcasts that are professionally edited, scripted, festooned with appropriate music, crafted into some kind of coherent presentation. Podcasts like that seem respectful to the listener, wanting to engage my attention and reward it.

But one thing I’ve noticed is that the podcasts I know that do that best are relentlessly liberal in their political and social orientation. Which is not surprising, given that most of our media are likewise liberal. And I don't even mean that as a criticism: there is a significant liberal element to my own political makeup, and if you want to know why that is, just listen to this episode of the Criminal podcast. Criminal in general is a good example of the kind of podcast I like, from its sound design and apt use of music to its strong storytelling. Even the website is artfully designed.

Which leads me to my Bleg: Does anyone know of similarly well-crafted, artful podcasts made by conservatives or Christians? I have not yet found a single one. Podcasts by conservatives and Christians tend to be either bare-bones — two dudes talking, or one dude talking with maybe a brief musical intro and outro — or schmaltzily over-produced. (Just Christians in that second category.) Anyone know of any exceptions to this judgment? I suspect that there’s an unbridgeable gulf of style here, but I’d like to be proved wrong.

UPDATE: Despite the quite clear statements I make above to the effect that (a) I really, really dislike dudes-talking podcasts and (b) I am not asking about dude-talking podcasts but about professionally produced podcasts, people keep writing on Twitter and email to say "Hey, here's a dudes-talking podcast that you might like." Sigh.

Saturday, July 2, 2016

the unbought grace of the human self

Returning to Edward Mendelson’s essay “In the Depths of the Digital Age”: an essay about technology, some might say, as this is a blog about technology. But we’re not talking today about “technology” tout court; we’re talking about digital technologies, and more specifically digital communications technologies, and more specifically yet internet-connected digital communications technologies, and even more specifically — let’s get to the heart of the matter — that very recent variety of internet-connected digital communications technology that offers free “services” purported to connect us with one another, services whose makers read, sift, and sell the data that we provide them when we use their software.

The question that Mendelson most forcefully presses on us is: What selves are made by submission to these technologies?

The explicit common theme of these books [under review] is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects — what William James called the “material self” — has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

Mendelson does not say that this shift is simply bad; he writes of gains and losses. And his essay does not have a thesis as such. But I think there is one not-directly-stated idea that dominates his reflections on these books: Neither users of nor commentators on these social-media technologies have an adequate intellectual or moral vocabulary to assess the massive changes in selfhood that we have opened ourselves to. The authors of the books Mendelson reviews either openly confess their confusion or try to hide that confusion with patently inadequate conceptual schemes.

But if even the (self-proclaimed) expert authorities are floundering in this brave new world, what can we do to think better about what’s happening to always-connected, always-surveilled, always-signalling, always-assessing selves? One possibility: read some fiction.

I have sometimes suggested — see here, here, and here — that Thomas Pynchon ought to be a central figure for anyone who wants to achieve some insight and clarity on these matters. And lo and behold, this from Mendelson:

In Thomas Pynchon’s Gravity’s Rainbow (1973), an engineer named Kurt Mondaugen enunciates a law of human existence: “Personal density … is directly proportional to temporal bandwidth.” The narrator explains:

“Temporal bandwidth” is the width of your present, your now…. The more you dwell in the past and future, the thicker your bandwidth, the more solid your persona. But the narrower your sense of Now, the more tenuous you are.

The genius of Mondaugen’s Law is its understanding that the unmeasurable moral aspects of life are as subject to necessity as are the measurable physical ones; that unmeasurable necessity, in Wittgenstein’s phrase about ethics, is “a condition of the world, like logic.” You cannot reduce your engagement with the past and future without diminishing yourself, without becoming “more tenuous.”


And Mendelson suggests that we use this notion of “temporal bandwidth” to think about how investments in social media alter our experience of time — and especially our relationship to the future.

Another example: Virginia Woolf is cited five times in this essay, perhaps surprisingly — what does Virginia Woolf have to do with technology? But — I’m not just a friend of Mendelson’s but also a pretty careful reader of his work — I have noticed that as we have gotten deeper into our current socially-digital age Woolf’s fiction has loomed larger and larger in Mendelson’s thinking. Mrs Dalloway is the subject of the wonderful final chapter of Mendelson’s The Things That Matter — a superb book I reviewed here — and that chapter would make a fine primer for the shaping of a model of selfhood adequate to the world, and able to stand up to models that are reductive and simplistic enough to be bought and sold in the marketplace.

One might not think that Pynchon and Woolf have much in common — but Mendelson thinks they do, and thinks that the visions and portrayals of selfhood they provide are profoundly useful correctives to the ones we’re being sold every day. I’ll close this post with a quotation from a brief essay by Mendelson in which he makes the link between the two writers explicit:

Like all of Virginia Woolf’s novels and, despite their misplaced reputation for high-tech cleverness, all of Thomas Pynchon’s novels, including his latest one, both books point toward the kind of knowledge of the inner life that only poems and novels can convey, a knowledge that eludes all other techniques of understanding, and that the bureaucratic and collective world disdains or ignores. Yet for anyone who has ever known, even in a crowded room, the solitude and darkness that Clarissa [Dalloway] and Oedipa [Maas] enter for a few moments, that experience, however brief and elusive, is “another mode of meaning behind the obvious” and, however obscured behind corruption, lies, and chatter, “a thing there was that mattered.”

Thursday, June 30, 2016

On Sleep

For the last month, almost every night, I have listened to Max Richter’s Sleep. I have some things to say about it:

  • It amounts to more than eight hours of music.
  • It comprises 31 sections, ranging in length from 2:46 to 33:46. Only seven of the sections are shorter than ten minutes.
  • The music is made by voices, strings, and keyboard instruments (some of which are electronic).
  • I think I have listened to it all, but I am not sure. I have played it mostly in bed, though sometimes at my computer as I write. In bed I have drifted in and out of sleep while listening. I think I have listened to some sections several times, others no more than once, but I cannot be sure.
  • Sleep is dominated by three musical themes, one played on the piano, one played on the violin, and one sung by a soprano voice. (Though other music happens also.) One way to characterize Sleep is as a series of themes with variations.
  • The piano theme is the most restful, mimicking most closely the rhythms of the body breathing; the violin melody is the most beautiful; the vocal melody is the most haunting. (Also, when it appears while I am sleeping, or near sleep, it wakes me up.)
  • I could tell you which of the sections presents the violin melody most fully and most gorgeously, but then you might listen to that section on its own rather than in its context. I do not wish to encourage shortcuts in this matter.
  • It is said that the music of Arvo Pärt is especially consoling to the dying; I think this may prove true of Sleep as well. There is a very good chance that, should I die slowly, I will listen to Sleep regularly, perhaps even exclusively.
  • Sleep is the half-brother of death.
  • The number three plays a large role in these pieces: the time signatures vary a good deal, but a good many of them come in units of three. Also, at least one section — maybe more; it’s so hard to be sure — features a bell-like tone that rings every thirteen beats.
  • If you have a very good pair of headphones, that's how you should listen to this music. If you're listening on, for instance, Apple's earbuds, you'll miss a great deal of wonderful stuff going on in the lower registers. 
  • The musical materials of Sleep are deceptively simple: Richter is not by the standards of contemporary music tonally adventurous, yet he manages to create a remarkable variety of treatments of his simple themes. The power of the music grows with repetition, with variation, with further repetition. This is yet another reason why sampling this composition will not yield an experience adequate to its design.
  • Since I started listening to Sleep I have thought a good deal about sleep and what happens within it. As Joyce insisted in Finnegans Wake and in his comments on the book when it was still known as Work in Progress, we have no direct access to the world of sleep. All we have is our memories of dreams, and these may well be deeply misleading: “mummery,” Joyce says, “maimeries.” And even dreams are not sleep tout court. A third of our lives is effectively inaccessible to us.
  • Listening to Sleep is, I think, one of the most important aesthetic experiences of my life, but I do not have any categories with which to explain why — either to you or to myself.

Wednesday, June 29, 2016

Mendelson's undead

I want to devote several posts, in the coming days, to this essay by Edward Mendelson. I should begin by saying that Edward is a good friend of mine and someone for whom I have the deepest respect — which will not keep me from disagreeing with him sometimes. It’s also important to note that his position in relation to current communications technologies can’t be easily categorized: in addition to being the Lionel Trilling Professor of the Humanities at Columbia University and the literary executor of the poet W. H. Auden, he has been a contributing editor for PC Magazine since 1988 (!), writing there most recently about the brand-new file system of the upcoming macOS Sierra. He also does stuff like this in his spare time. (I’m going to call him “Mendelson” in what follows for professionalism’s sake.)

In the essay-review I want to discuss, Mendelson’s attitude towards social-media technology is sometimes quite critical, but that is in no way inconsistent with his technological knowledge and interests. Perhaps this doesn’t need to be said, but I have noticed over the years that people can be quite surprised when a bona fide technologist — Jaron Lanier, for example — is fiercely critical of current trends in Silicon Valley. They shouldn’t be surprised: people like Lanier (and, in his own serious amateur way, Mendelson) learned to use computers at a time when getting anything done on such a machine required at least basic programming skills and a significant investment of time. The DIY character of early computing has almost nothing in common with the culture generated by today’s digital black boxes, in which people can think of themselves as “power users” while having not the first idea how the machine they’re holding works. (You can’t even catch a glimpse of the iOS file system without special tools that Apple would prefer you not to know about.)

Anyway, here’s the passage that announces what Mendelson is primarily concerned to reflect on:

Many probing and intelligent books have recently helped to make sense of psychological life in the digital age. Some of these analyze the unprecedented levels of surveillance of ordinary citizens, others the unprecedented collective choice of those citizens, especially younger ones, to expose their lives on social media; some explore the moods and emotions performed and observed on social networks, or celebrate the Internet as a vast aesthetic and commercial spectacle, even as a focus of spiritual awe, or decry the sudden expansion and acceleration of bureaucratic control.

The explicit common theme of these books is the newly public world in which practically everyone’s lives are newly accessible and offered for display. The less explicit theme is a newly pervasive, permeable, and transient sense of self, in which much of the experience, feeling, and emotion that used to exist within the confines of the self, in intimate relations, and in tangible unchanging objects—what William James called the “material self”—has migrated to the phone, to the digital “cloud,” and to the shape-shifting judgments of the crowd.

So that’s the big picture. We shall return to it. But for now I want to focus on something in Mendelson’s analysis that I question — in part out of perverse contrarianism, and in part because I have recently been spending a lot of time with a smartphone. Mendelson writes,

Dante, always our contemporary, portrays the circle of the Neutrals, those who used their lives neither for good nor for evil, as a crowd following a banner around the upper circle of Hell, stung by wasps and hornets. Today the Neutrals each follow a screen they hold before them, stung by buzzing notifications. In popular culture, the zombie apocalypse is now the favored fantasy of disaster in horror movies set in the near future because it has already been prefigured in reality: the undead lurch through the streets, each staring blankly at a screen.

In response to this vivid metaphor, let me propose a thought experiment: suppose there were no smartphones, and you were walking down the streets of a city, and the people around you were still looking down — not at screens but at letters from loved ones and colorful postcards sent by friends from exotic locales. How would you describe such a scene? Would you think of those people as the lurching undead?

I suspect not. But why not? What’s the difference between seeing communications from people we know on paper that came through the mail and seeing them on a backlit glass screen? If we were to walk down the street of a city and watch someone tear open an envelope and read the contents, looking down, oblivious to her surroundings, why would we perceive that scene in ways so unlike the ways we perceive people looking with equal intensity at the screens of their phones? Why do those two experiences, for so many of us as observers and as participants, have such radically different valences?

I leave these questions as exercises for the reader.

Tuesday, June 28, 2016

the sources of technological solutionism

If you’re looking for case studies in technological solutionism — well, first of all, you won't have to look long. But try these two on for size:

  1. How Soylent and Oculus Could Fix the Prison System
  2. New Cities

That second one, which is all about how techies are going to fix cities, is especially great, asking the really Key Questions: “What should a city optimize for? How should we measure the effectiveness of a city (what are its KPIs)?”

The best account of this rhetoric and its underlying assumptions I have yet seen appeared just yesterday, when Maciej Ceglowski posted the text of a talk he gave on the moral economy of tech:

As computer programmers, our formative intellectual experience is working with deterministic systems that have been designed by other human beings. These can be very complex, but the complexity is not the kind we find in the natural world. It is ultimately always tractable. Find the right abstractions, and the puzzle box opens before you.

The feeling of competence, control and delight in discovering a clever twist that solves a difficult problem is what makes being a computer programmer sometimes enjoyable.

But as anyone who's worked with tech people knows, this intellectual background can also lead to arrogance. People who excel at software design become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.

Today we are embarked on a great project to make computers a part of everyday life. As Marc Andreessen memorably frames it, "software is eating the world". And those of us writing the software expect to be greeted as liberators.

Our intentions are simple and clear. First we will instrument, then we will analyze, then we will optimize. And you will thank us.

But the real world is a stubborn place. It is complex in ways that resist abstraction and modeling. It notices and reacts to our attempts to affect it. Nor can we hope to examine it objectively from the outside, any more than we can step out of our own skin.

The connected world we're building may resemble a computer system, but really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it. And it has the same old problems.

Approaching the world as a software problem is a category error that has led us into some terrible habits of mind.

I almost quoted the whole thing. Please read it — and, perhaps, read it in conjunction with another essay I referred to recently, about the just plain wrongness of believing that the brain is a computer. Ask a software engineer for solutions to non-software problems, and you’ll get answers that might work brilliantly ... if the world were software.

Monday, June 27, 2016

myths we can't help living by

One reason the technological history of modernity is a story worth telling: the power of science and technology to provide what the philosopher Mary Midgley calls “myths we live by”. For instance, Midgley writes,

Myths are not lies. Nor are they detached stories. They are imaginative patterns, networks of powerful symbols that suggest particular ways of interpreting the world. They shape its meaning. For instance, machine imagery, which began to pervade our thought in the seventeenth century, is still potent today. We still often tend to see ourselves, and the living things around us, as pieces of clockwork: items of a kind that we ourselves could make, and might decide to remake if it suits us better. Hence the confident language of ‘genetic engineering’ and ‘the building-blocks of life’.

Again, the reductive, atomistic picture of explanation, which suggests that the right way to understand complex wholes is always to break them down into their smallest parts, leads us to think that truth is always revealed at the end of that other seventeenth-century invention, the microscope. Where microscopes dominate our imagination, we feel that the large wholes we deal with in everyday experience are mere appearances. Only the particles revealed at the bottom of the microscope are real. Thus, to an extent unknown in earlier times, our dominant technology shapes our symbolism and thereby our metaphysics, our view about what is real.

This is why I continue to protest against the view which, proclaiming that “ideas have consequences,” goes on to ignore the material and technological things that press with great force upon our ideas. Consider, for instance, the almost incredible influence that computers have upon our understanding of the human brain, even though the brain does not process information and is most definitely not in any way a computer. The metaphor is almost impossible for neuroscientists to escape; they cannot, generally speaking, even recognize it as a metaphor.

If we can even begin to grasp the power of such metaphors and myths, we can understand why a technological history of modernity is so needful.

Sunday, June 26, 2016

more on speed

A bit of a follow-up to this post, and to brutus’s comment on it (which you should read) as well: My friend Matt Frost commented that Jeff Guo is the “bizarro Alan Jacobs,” which is true in a way. Guo clearly thinks that his problem is that there’s not enough new content and he can’t consume it fast enough, whereas I have argued on many occasions for slower reading, slower thinking, re-reading and re-viewing....

And yet. I’ve watched movies the way Guo watches them, too; in fact, I’ve done it many times. And I’ve read books — even novels — in a similar way, skimming large chunks. So I’m anything but a stranger to the impulse Guo has elevated to a principle. But here’s the thing: Whenever we do that, we’re demonstrating a fundamental lack of respect for the work we’re skimming. We are refusing to allow it the kind and amount of attention it requests. So if — to take an example from my previous post — you watch Into Great Silence at double speed, you’re refusing the principle on which that film is built. When you decide to read Infinite Jest but skip all the conversations between Marathe and Steeply because you find them boring, you’re refusing the fundamental logic of the book, which, among other things, offers a profound meditation on boredom and its ever-ramifying effects on our experiences.

I think we do this kind of thing when we don’t really want to read or view, but to have read and have viewed — when, more than watching Into Great Silence or reading Infinite Jest, we want to be able to say “Yeah, I’ve seen Into Great Silence” and “Sure, I’ve read Infinite Jest.” It’s a matter of doing just enough that we can convince ourselves that we’re not lying when we say that. But you know, Wikipedia + lying is a lot easier. Just saying.

Aside from any actual dishonesty, I don’t think there’s anything wrong with viewing or reading on speed. But it’s important to know what you’re doing — and what you’re not doing: what impulses you’re obeying and what possibilities you’re refusing. Frank Kermode, in a brilliant reflection that I quote here, speaks of a threefold aesthetic and critical sequence: submission, recovery, comment. But if you won’t submit to the logic and imagination of the work in question, there’ll be nothing to recover from, and you’ll have no worthwhile comment to make.

All of which may prompt us to think about how much it matters in any given case, which will be determined by the purpose and quality of the work in question. Scrub through all of The Hangover you want, watch the funny parts several times, whatever. It doesn’t matter. But if you’re watching Mulholland Drive (one of Guo’s favorite movies, he says) and you’re refusing the complex and sophisticated art that went into its pacing, well, it matters a little more. And if you’re scrubbing your way through ambitious and comprehensively imagined works of art, then you really ought to rethink your life choices.

Friday, June 24, 2016

this is your TV on speed

Jeff Guo watches TV shows really fast and thinks he's pretty darn cool for doing so.

I recently described my viewing habits to Mary Sweeney, the editor on the cerebral cult classic "Mulholland Drive." She laughed in horror. “Everything you just said is just anathema to a film editor,” she said. “If you don't have respect for how something was edited, then try editing some time! It's very hard."

Sweeney, who is also a professor at the University of Southern California, believes in the privilege of the auteur. She told me a story about how they removed all the chapter breaks from the DVD version of Mulholland Drive to preserve the director’s vision. “The film, which took two years to make, was meant to be experienced from beginning to end as one piece,” she said.

I disagree. Mulholland Drive is one of my favorite films, but it's intentionally dreamlike and incomprehensible at times. The DVD version even included clues from director David Lynch to help people baffled by the plot. I advise first-time viewers to watch with a remote in hand to ward off disorientation. Liberal use of the fast-forward and rewind buttons allows people to draw connections between different sections of the film.

Question: How do you draw connections between sections of the film you fast-forwarded through?

Another question: What would Into Great Silence be like if you watched it in 45 minutes?

A third question: Might there be a difference — an experiential difference, and even an aesthetically qualitative difference — between remixing and re-editing and creating montages of works you've first experienced at their own pace and, conversely, doing the same with works you've never had the patience to sit through?

And a final suggestion for Jeff Guo: Never visit the Camiroi.