Text Patterns - by Alan Jacobs

Wednesday, October 30, 2013

the view from the moral mountaintop

Even at my advanced age, I can still never quite predict what’s going to agitate me. But here’s something that has me rather worked up. In a reflection on Ender’s Game — a story about which I have no opinions — Laura Miller relates this anecdote:

There’s a short story by Tom Godwin, famous in science fiction circles, called “The Cold Equations.” It’s about the pilot of a spaceship carrying medicine to a remote planet. The ship has just enough fuel to arrive at that particular destination, where its cargo will save six lives. En route, the pilot discovers a stowaway, an adolescent girl, and knowing that her additional weight will make completing the trip impossible, the agonized man informs her that she will have to go out the airlock. There is no alternative solution. 

 This story was described to me by a science fiction writer long before I read it, and since it contains lines like “she was of Earth and had not realized that the laws of the space frontier must, of necessity, be as hard and relentless as the environment that gave them birth,” I can’t honestly call it a must. The writer was complaining about some of his colleagues and their notions of their genre’s strengths and weaknesses. “They always point to that story as an example of how science fiction forces people to ask themselves the sort of hard questions that mainstream fiction glosses over,” he said. “That’s what that story is supposed to be about, who would you save, tough moral choices.” He paused, and sighed. “But at a certain point I realized that’s not really what that story is about. It’s really about concocting a scenario where you get a free pass to toss a girl out an airlock.”

If you’d like, take a few moments now and read “The Cold Equations” for yourself. If you’ve done so, then tell me: what in the story constitutes evidence for the claim that Tom Godwin’s story is fundamentally “about concocting a scenario where you get a free pass to toss a girl out an airlock”? Is it the ending, maybe?

... the empty ship still lived for a little while with the presence of the girl who had not known about the forces that killed with neither hatred nor malice. It seemed, almost, that she still sat, small and bewildered and frightened, on the metal box beside him, her words echoing hauntingly clear in the void she had left behind her:

I didn’t do anything to die for... I didn’t do anything...

Does that sound like delight in the death of a child to you?

How casually Miller’s friend attributed to someone he did not know, and with no discernible evidence, sick and twisted fantasies of murdering female children. And how casually Miller relates it and, apparently, endorses it not only as a true description of Tom Godwin but also of (male?) science-fiction fandom in general:

The heart of any work of fiction, and especially of popular fiction, is a knot of dreams and desires that don’t always make sense or add up, which is what my friend meant when he said that “The Cold Equations” is really about the desire to toss a girl out an airlock (with the added bonus of getting to feel sorry for yourself afterward). That inconvenient girl, with her claim to the pilot’s compassion, can be jettisoned as satisfyingly as the messy, mundane emotions the story’s fans would like to see purged from science fiction.

Miller and her friend just look down from their moral heights on Tom Godwin and people who have been moved by his story, and dispense their eviscerating judgments with carefree assurance. I can't even imagine what it’s like to live at that altitude. I hope I don't ever find out.

writing big

The bigger your writing project, the less likely it is that you’ll find a writing environment that’s adequate to your needs. When you’re writing a book, you need to find some way to juggle research, ideas, notes, drafts, outlines ... which is hard to do.

As far as I know — I’d be happy to be corrected — the only product on the market that even tries to do all this in a single app is Scrivener, which many writers I know absolutely swear by. Me? I hate it. I freely acknowledge the irrationality of this hatred, but so it goes. I can objectively approve of the quality of an app and yet be frustrated by using it. I have the same visceral dislike of Evernote, though in that case sheer ugliness is the chief problem. But both Scrivener and Evernote are created by people who follow the more-features-the-better philosophy, and that’s one I am congenitally uncomfortable with. (The user manual for Scrivener is over 500 pages long.)

A few years ago I thought my answer for big projects might be Ulysses 2. I couldn’t put PDFs in it, but I didn't mind that because I like to annotate PDFs and you need a separate app to do that properly; and in other respects it had a lot going for it. I could write in plain text with Markdown, and could always have visible onscreen notes, or an outline, for the chapter I was working on and even, in a small pane on the left, the text of another chapter. Also, a Ulysses document was basically a package containing text and RTF files with some metadata — easy to unpack and open in other apps if necessary.

I liked Ulysses, but it tended to be unstable and some of its behavior was inconsistent (especially in exporting documents for printing or sending to others). I was pleased to learn that the makers were working on an updated version — but surprised when Ulysses III came out and proved to be a completely new application. And after I tried it out, surprise gave way to disappointment: essentially, it seems to me, it’s now an ordinary document-based text editor — an attractive one, to be sure, but not at all suited to the creation and management of major projects. As far as I can tell, you can replicate all the features of Ulysses III, except for its appearance, for free with TextWrangler and pandoc.

I use phrases like “it seems to me” and “as far as I can tell” because Ulysses III is getting some good press: see here and here and here and here. But these tend to focus on how the app looks, how well it syncs with iCloud, and its export options — not its status as an environment for organizing your writing, especially a project of any size. Ulysses III seems to me a nice app if you’re writing blog posts, but if you’re working on something big, it’s a significant step backwards from previous versions of the app.
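For the curious, the TextWrangler-and-pandoc substitute I have in mind is nothing fancier than the little script below. (A sketch only: it assumes pandoc is installed and on your PATH, and the chapter file names are made up.)

```python
# Write chapters as plain Markdown files in any text editor, then let
# pandoc do the exporting; it infers the output format from the extension.
import subprocess
from pathlib import Path

def export_chapter(md_file: str, fmt: str = "rtf") -> Path:
    """Convert one Markdown chapter to the given format via pandoc."""
    src = Path(md_file)
    out = src.with_suffix("." + fmt)
    subprocess.run(["pandoc", str(src), "-o", str(out)], check=True)
    return out

export_chapter("chapter-01.md")           # -> chapter-01.rtf
export_chapter("chapter-01.md", "html")   # -> chapter-01.html
```

That, plus a plain text editor, is most of what I mean by the export machinery.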

Tuesday, October 29, 2013

on the maker ethos

Reading this lovely and rather moving profile of Douglas Hofstadter, I was especially taken by this passage on why artificial intelligence research has largely ignored Hofstadter's innovative work and thought:

“The features that [these systems] are ultimately looking at are just shadows—they’re not even shadows—of what it is that they represent,” Ferrucci says. “We constantly underestimate—we did in the ’50s about AI, and we’re still doing it—what is really going on in the human brain.” 

The question that Hofstadter wants to ask Ferrucci, and everybody else in mainstream AI, is this: Then why don’t you come study it?

“I have mixed feelings about this,” Ferrucci told me when I put the question to him last year. “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something. And I don’t think the short path to that is theories of cognition.”

Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”

Here I think we see the limitations of what we might call the Maker Ethos in the STEM disciplines — the dominance of the T and the E over the S and the M — the preference, to put it in the starkest terms, for making over thinking.

An analogous development may be occurring in the digital humanities, as exemplified by Stephen Ramsay's much-debated claim that “Personally, I think Digital Humanities is about building things. […] If you are not making anything, you are not…a digital humanist.” Now, I think Stephen Ramsay is a great model for digital humanities, and someone who has powerfully articulated a vision of “building as a way of knowing,” and a person who has worked hard to nuance and complicate that statement — but I think that frame of mind, when employed by someone less intelligent and generous than Ramsay, could be a recipe for a troubling anti-intellectualism — of the kind that has led to the complete marginalization of a thinker as lively and provocative and imaginative as Hofstadter.

All this to say: making is great. But so is thinking. And thinking is often both more difficult and, in the long run, more rewarding, for the thinker and for the rest of us.

Monday, October 28, 2013

Apple's design problem

An oft-quoted remark by Steve Jobs has been in my mind lately. It’s been cited in many different forms, and who knows what the original words actually were, but it goes something like this: People tend to think of design as how something looks, but it’s really a matter of how it works. It seems to me that Apple is in real danger of forgetting this. And that’s not good news.

I could illustrate this claim in many ways, but let me just give two examples. First, take a look at this screenshot from my iPad:


It took me a few seconds to figure out how to attach this photo to my email. Why? Because the modal box that includes the photo is almost indistinguishable from its surroundings. The menu bar for that box, with its “Use” command, is almost exactly the same shade of gray as the menu bar for the message window, and is nearly aligned with it vertically, with the result that I first assumed that it was the message window.

This is what comes of Apple’s newly-discovered preference for “flat” design elements: they have fled from the illusion of three-dimensionality whenever possible — with some notable exceptions — and in so fleeing have abandoned many good principles of interface design. Among those principles few are more important than making different UI elements clearly distinguishable from one another.

And of course, many iOS developers are simply following Apple’s lead in this. The UI elements in the new OmniOutliner for iPad are so uninterpretably flat that I had to return the app for a refund and go back to version 1.

For more along these lines see, for instance, Jonathan Rotenberg.

So that’s one example, and here’s another: when Apple redesigned their iWork suite of apps from the ground up, they almost completely eliminated AppleScript support in addition to introducing various other regressions in usability. The appearance is wholly new; features, including features that many users have relied on for years, have been unceremoniously dumped.

So what do these changes have in common? I think the answer is clear: seeing design as a matter of “how it looks” rather than “how it works” — or at the very least placing such a premium on how it looks that fixes to how it works get set aside for some putative future release. Many users who are happy about the appearance of iOS 7 assume that it’s just a matter of time until the usability problems are fixed, but I’m not so sure. Consider, for instance, the fact that in Apple’s Reminders app, whether on the Mac or iOS, it’s still not possible to sort tasks by due date except through a painstaking manual process. (Hard to imagine a feature more fundamental to an app for reminding you of what you need to do.) Or the fact that attaching non-photographic files to mail messages in iOS is a mess and always has been.

In short, I think there’s mounting evidence that Apple is forgetting what design is really all about. I’m worried.

in praise of Twitter, once more

If you haven’t read Anne Trubek’s essay on what’s great about Twitter, you should:

Twitter has offered me an intellectual community I otherwise lack. It cuts the distance, both geographic and hierarchical. Not only can I talk with people in other places, but I can engage with people in different career stages as well. A sharp insight posted on Twitter is read, and RT'd (retweeted), with less regard for the tweeter's resume (or gender or race) than it might be if uttered at, say, a networking event. Social media is a hedge against the white-shoe, old-boys' networks of publishing. It is a democratizing force in the literary world. 
I credit Twitter with indirectly and directly allowing me to change careers from academic to freelance writer, to garner book contracts and to launch a new magazine. Plus, it has introduced to me colleagues with whom I practice what broadcast journalist Robert Krulwich calls "horizontal loyalty," or aiding others in similar career stages. Without social media, my ideas would have likely been smaller murmurs, my career more constricted and my colleagues fewer.

I have had similar experiences, and had them alongside Anne, who’s a long-time Twitter friend. (Well, you know, “long-time” as these things go.) For instance, I have spent the past few years exploring the various possibilities of the digital humanities, and almost everything I now know I learned, directly or indirectly, from people on Twitter. I could have learned much of this stuff without Twitter, but the task would have been a good deal harder and a lot less fun, and I wouldn't have gotten to know people that I delight in meeting face-to-face when the chance arises.

Friday, October 25, 2013

who quantifies the self?

The Quantified Self (QS) movement comprises people who use various recent technologies to accumulate detailed knowledge of what their bodies are doing — how they’re breathing, how much they walk, how their heart rate varies, and so on — and then adjust their behavior accordingly to get the results they want. This is not surveillance, some QS proponents say, it’s the opposite: an empowering sousveillance.

But any technology that I can use for my purposes to monitor myself can be used by others who have power or leverage over me to monitor me for their purposes. See this trenchant post by Nick Carr:

One can imagine other ways QS might be productively applied in the commercial realm. Automobile insurers already give policy holders an incentive for installing tracking sensors in their cars to monitor their driving habits. It seems only logical for health and life insurers to provide similar incentives for policy holders who wear body sensors. Premiums can then be adjusted based on, say, a person’s cholesterol or blood sugar levels, or food intake, or even the areas they travel in or the people they associate with — anything that correlates with risk of illness or death. (Rough Type readers will remember that this is a goal that Yahoo director Max Levchin is actively pursuing.)
The transformation of QS from tool of liberation to tool of control follows a well-established pattern in the recent history of networked computers. Back in the mainframe age, computers were essentially control mechanisms, aimed at monitoring and enforcing rules on people and processes. In the PC era, computers also came to be used to liberate people, freeing them from corporate oversight and control. The tension between central control and personal liberation continues to define the application of computer power. We originally thought that the internet would tilt the balance further away from control and toward liberation. That now seems to be a misjudgment. By extending the collection of data to intimate spheres of personal activity and then centralizing the storage and processing of that data, the net actually seems to be shifting the balance back toward the control function. The system takes precedence.  

Do please read it all.

on Workflowy and the Tao of outlining

So when I think about software that could seriously alter the way I write, one of the first examples that comes to mind is Workflowy. Now, Workflowy seems to have been designed largely as a task manager, but I’m intrigued by it because its design brings a certain degree of desirable structure to what Giles Turnbull once memorably called the big-ass text file, or BATF.

The BATF idea is simple: put all of your thoughts, ideas, plans, tasks, and drafts in one, well, big-ass text file, and then search inside of it when you need to find something. No folders, no nested hierarchical stuff, just text — though possibly fitted out with tags or other searchable metadata to make it easier to find stuff. (And then, perhaps, move something into its own file when a draft is completed. But only if you want to.) The primary virtue of the BATF is that you eliminate the time spent creating new files and deciding where to put them. You just type. So there’s less friction between the idea and the act of recording it.

(By the way, the Drafts app for iOS has a command for appending the text you type to a selected Dropbox file, which would agreeably strengthen the BATF method for people who use iPhones and iPads: just type out your thoughts and tap that command, and the text, helpfully date-stamped, is added to your BATF and will be visible when you next open it on your computer.)
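On the desktop, a few lines of Python will fake that same append command. This is only a sketch, and the file path is hypothetical; point it at a file in your Dropbox folder and you get the same everywhere-at-once effect:

```python
# Append a date-stamped note to the one big-ass text file.
from datetime import datetime
from pathlib import Path

BATF = Path.home() / "Dropbox" / "batf.txt"  # hypothetical location

def jot(note: str) -> None:
    """Add a date-stamped entry to the end of the BATF."""
    BATF.parent.mkdir(parents=True, exist_ok=True)  # in case the folder is missing
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with BATF.open("a", encoding="utf-8") as f:
        f.write(f"\n{stamp}\n{note}\n")

jot("Idea: the BATF as a frictionless inbox for notes and drafts")
```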

It seems to me that Notational Velocity and its higher-powered clone nvALT offer a slightly more sophisticated version of the BATF idea: you don't have just one file, but you have one window into which you type, and searching replaces sorting. I love nvALT and throw almost everything textual into it — I’m typing in it right now, though at some point I’ll probably move this over to BBEdit, just out of ancient habit.

It is the nature of the BATF and nvALT to be unstructured, which is why you might have to think about how to create relevant metadata. And there’s no really visible structure, which can sometimes be disorienting. And here’s where Workflowy comes in.

A Workflowy document is an outline: if you want to use it well you’ll need to master just a handful of keyboard shortcuts for indenting and outdenting and moving up and moving down, but once you do that you’ll be free just to type your ideas. You can, as with all other outliners that I know of, expand or collapse nodes, so you can focus better on what you want to work on. But Workflowy has a distinctive feature that I haven’t seen in another outliner: the ability to zoom in on a single node and make everything else disappear, just by clicking on a handle — and then zoom out again, just by clicking on a word or phrase.

Here’s a screenshot of a part of my Workflowy document:



This is from an in-progress outline of the book I’m working on. Here I’m working on the section of the preface to the book in which I’m identifying the book’s chief theme. Nothing else is visible on screen, except, at the top of the document, its higher levels: I can click on “Preface” to see the whole of the Preface; or on “Christian Humanism and Total War” to see the complete outline of the book; or on “Home” to see everything I’ve put into this document, which might include everything from grocery lists to upcoming travel plans. And then from that Home view I can drill down into any part of the document at any depth I choose. In other words, Workflowy could become, if I chose, a BATF with an adjustable-view organizational structure. Which is pretty cool.
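If you think of the document as a tree, the zoom is easy to picture: the outline is a tree of nodes, and zooming just re-roots the view on one of them. Here is a toy sketch of the idea (emphatically not Workflowy's actual code; the node names come from my own outline above):

```python
# A zoomable outline in miniature: render whichever node is the current
# "root" of the view, and nothing else.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list["Node"] = field(default_factory=list)
    parent: "Node | None" = field(default=None, repr=False)

    def add(self, text: str) -> "Node":
        child = Node(text, parent=self)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        print("  " * depth + "- " + self.text)
        for child in self.children:
            child.show(depth + 1)

home = Node("Home")
book = home.add("Christian Humanism and Total War")
preface = book.add("Preface")
preface.add("Identify the book's chief theme")
home.add("Grocery lists")

view = preface   # "zoom in": only this subtree gets rendered
view.show()
view = view.parent   # "zoom out": climb back toward Home
```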

All that said, I’m probably not going to be using Workflowy. It’s a web app, which means that my data is on somebody’s server, something I don't like (though it does have nice export options). And I can approximate many, though not all, of its features in OmniOutliner and TaskPaper, which reside on my computer. TaskPaper is not a real outliner — more than a couple of levels of indentation and things start getting unmanageable — but its files are plain text. OmniOutliner files, by contrast, are basically OPML/XML — readable by multiple apps, but not as plain as I’d like. Plus, OmniOutliner is plagued by feature bloat that makes it hard to use well.

I’d love to see a desktop version of Workflowy — as far as I know, there’s nothing quite like it. There are plenty of mind-mapping apps, most notable among them being the venerable and much-loved Tinderbox, but those have never worked for me. The linearity of the outline seems to soothe my turbulent mind. A desktop zoomable outliner might stand a chance of genuinely changing the way I think and write.

Wednesday, October 23, 2013

are these apps changing the way we write?

I’ll admit to some disappointment with this essay on new writing tools by Paul Ford — Ford is a smart writer and the topic seems a good fit for him, but I don't think he gets as deeply as he could into the legitimacy of the claims made by the makers of some of these writing tools.

As far as I can tell, the tools that he examines either aren’t really about writing at all — for instance, Ghost is an environment for publishing stuff online, stuff that you might write anywhere else — or they amount to taking already-familiar desktop writing tools and putting them online to make collaboration easier. That’s about it.

Not an inconsiderable achievement, mind you. Consider Editorially: it takes a practice that some of us have been following for several years now — writing in a plain-text editor with Markdown syntax which you can convert later to HTML or .doc format — situates it in a super-attractive editing environment, and encourages sharing your writing with collaborators or editors. If I wrote regularly that way, I’d love Editorially.

Fargo does much the same for outlining — though outlining doesn’t seem naturally collaborative to me, so I’m not sure what the use-cases for Fargo are. But just as Editorially won’t be new to you if you’ve been following the plain-text gospel, Fargo won’t be new to you if you’ve used, say, OmniOutliner or, if you’re a real oldtimer, the greatly lamented DOS-only GrandView. In short, even if the tools you make are really cool, you’re not “reinventing” writing just by coding them in HTML5 and putting them in the cloud.

But I do think that a handful of recent apps have indeed made some significant innovations in writing technology, and I’ll talk about them in some near-future posts.

Tuesday, October 22, 2013

take your tastefully redesigned flag and shove it


This project — to redesign all the state flags in a single visual language — seems to me to combine rather remarkably the features of today’s techno-liberalism:

An active disdain for historical, cultural, and aesthetic difference: “I was immediately bothered by how discordant they are as a group”;

The easy assumption that unity is best brought about by imposition from On High: “Mitchell’s first move was to strip away ‘everything that reminded me of a divided nation’”;

Implicit belief in the titanic suasive power of design;

Wish-fulfillment dreams of artistic power: Give us free rein and we’ll fix things;

An inability to see that the design languages of the moment are unlikely to last longer than, or as long as, the design languages they are replacing: “And I wasn’t surprised to learn they break just about every rule of flag design.”

Flag design has rules, people: created by the International Federation of Vexillological Associations, whom you may defy at your peril. We may but pity those unfortunate enough to live in benighted times when there were no organizations to establish rules for flag design and no designers who believed that those rules, and their own design sensibilities, obviously should overrule the whole of history.

Fie on all of it, I say! Of course I know this isn’t seriously meant, in the sense that Mitchell isn’t planning (as far as I know) to petition his Representative to introduce an amendment to the Constitution empowering the federal government to mandate a unified design language for state flags. But I think it is seriously meant in the senses listed above, and to that I say: Yuck. Give me history with all its messiness, its visible residue of evil acts and its mixed messages; give me aesthetic incoherence, give me people who don't know the Rules of Flag Design — give me, in short, the flag of Nunavut and, better still, the coat of arms of Nunavut!

Yeah, I know Nunavut isn’t one of the United States, but still.

Monday, October 21, 2013

investigating the poetry MOOC

Ah, the poetry MOOCs are coming — the exciting world of online education is spreading beyond the STEM disciplines and into the humanities! Let’s investigate.

Elisa New of Harvard is offering one on Poetry in America. It appears that the course is quite consciously Harvard-centric:

“I wanted to do this course using all of the resources of Harvard, its libraries, archives, museums, its students on camera, experimenting with making this a course that uses what the University offers, but for a reason — and that reason is that the history of American poetry and Harvard’s history are so completely intertwined,” she said.

“There are some major poets who didn’t spend time at Harvard, but the list of major American poets who did spend time at Harvard is very, very long. We have their manuscripts. They taught here. Buildings are named after them. So this is a perfect place as a base for the course.”

“There are some major poets who didn’t spend time at Harvard.” Some.  

I’m just going to set that aside.

So what’s this course going to be like? Well, um, “The course is broken down into modules.” Right: modules. “The course combines interactivity, video, traveling, and an element of surprise, said New.” The “traveling” seems to be done by New:

“We filmed here at Harvard, in Cambridge, on Cape Cod,” she said. “I’ll be filming in Washington, D.C., Manhattan, California, even Vermont to talk about [Robert] Frost.”

Also, New filmed Michael Pollan reading a poem about corn. “I’m drawing in teachers and students in a variety of ways.” But this is not all about celebrity poetry readings:

Communication will be essential, New said. “This is a course about conversation between people about poetry. It’s not just about me lecturing. It’s about how you can huddle around a poem with a bunch of other people and get to know them, and the poem better. For me, that’s the center of what humanistic inquiry is,” she said.  

Hmmm. “Huddle around a poem with a bunch of other people and get to know them, and the poem better” — those environments used to be called classrooms, didn’t they?

As far as I can tell, it’s impossible to discover either from the article I’ve been quoting or from HarvardX’s page about the course what any of this means: interactivity, traveling, huddling, conversation, “drawing in teachers and students in a variety of ways.” One might think that HarvardX would inform people of what the course expectations are in inviting them to register, especially since registrants are asked to decide whether they want to “Simply Audit This Course” or “Try for a Certificate,” but no: you are merely told that if you “participate in all of the course's activities and abide by the edX Honor Code” and “if your work is satisfactory, you'll receive a personalized certificate to showcase your achievement.”

So what is this “Honor Code Certificate”? Following some links, I get this: “An Honor Code Certificate of Achievement certifies that you have successfully completed a course, but does not verify your identity. Honor Code certificates are currently free.” But that’s all. I even signed up for an edX account to see if by registering for the course I would learn what the expectations are for the course, but nothing is available.

Now this seems rather curious: If an institution tells people that they can either audit a course or take it for an “Honor Code Certificate,” shouldn't that institution offer some information up front about what the difference is? What the expectations are? That no such information is offered tells us, I think, just how seriously we are to think of the educational value of this kind of “course”: it has none. Basically, people will watch a few videos. It’s telling that the course page says that it will last four weeks and that the “estimated effort” is “1-3 hours per week,” which suggests that they’re not even expecting genuine conversations to develop. As little as four hours’ investment in the entire (Harvard-based) history of American poetry?

I’m not sure this qualifies even as a joke. Now, advocates for MOOCs might say that this is but an experiment, an early essay in the craft. But with some poetry websites and an email listserv I could create something more educationally interesting and ambitious than this, though the entertaining spectacle of Michael Pollan reading a poem about corn would, sadly, be lacking. With Harvard’s resources, this is what they come up with?

UPDATE: Robert Ghrist on Twitter reminds me — can't believe I forgot this — that Al Filreis at Penn has been doing something like this for quite a while, but Al's course is more demanding: there are actual papers to write, quizzes to take, investments in the work of others. I don't know how well it works, or how Al might compare it to an on-campus course at Penn, but it looks like some real effort has been put into making it meaningful.

Friday, October 18, 2013

one weird trick to unleash your creativity

Thomas Frank is exasperated:

What our correspondent also understood, sitting there in his basement bathtub, was that the literature of creativity was a genre of surpassing banality. Every book he read seemed to boast the same shopworn anecdotes and the same canonical heroes. If the authors are presenting themselves as experts on innovation, they will tell us about Einstein, Gandhi, Picasso, Dylan, Warhol, the Beatles. If they are celebrating their own innovations, they will compare them to the oft-rejected masterpieces of Impressionism — that ultimate combination of rebellion and placid pastel bullshit that decorates the walls of hotel lobbies from Pittsburgh to Pyongyang.

Those who urge us to “think different,” in other words, almost never do so themselves. Year after year, new installments in this unchanging genre are produced and consumed. Creativity, they all tell us, is too important to be left to the creative. Our prosperity depends on it. And by dint of careful study and the hardest science — by, say, sliding a jazz pianist’s head into an MRI machine — we can crack the code of creativity and unleash its moneymaking power.

That was the ultimate lesson. That’s where the music, the theology, the physics and the ethereal water lilies were meant to direct us. Our correspondent could think of no books that tried to work the equation the other way around — holding up the invention of air conditioning or Velcro as a model for a jazz trumpeter trying to work out his solo.

And why was this worth noticing? Well, for one thing, because we’re talking about the literature of creativity, for Pete’s sake. If there is a non-fiction genre from which you have a right to expect clever prose and uncanny insight, it should be this one. So why is it so utterly consumed by formula and repetition?

I’d like to suggest an answer to this question: the problem is that there’s actually no such thing as “creativity.” It’s a made-up concept bearing no relation to anything that exists. It’s a classic case of what the Marxists used to call “false reification.” Let’s never speak of it again.

renewing the Wanderjahre

The original Tour de France had nothing to do with bicycles: it was a medieval guild system, one that still has a residual existence today, which directed young craftsmen-in-training to travel from place to place in France, practicing their trade and learning the different ways it was done in different parts of the country. The same practice in Germany is called the Wanderjahre. You would think that this is where the English word "journeyman" comes from, but that's probably not the case.

The proposal I'm about to make runs against the grain of modern and American notions of freedom and independence, but for that very reason, among others, I'm going to make it: I think this practice should be brought back, reinvigorated, and extended to a wide range of professions. In place of the current residency model, which I suppose is the closest thing any American profession has to a Wanderjahre, we should ask our newly-minted physicians to spend a year working in a series of widely different situations: a hospital in inner-city Detroit, a tiny clinic in rural Mississippi, a gleaming health center in the suburbs of San Diego.

Similarly, lawyers should have to spend a few months in a public defender's office in Chicago, followed by a few more in a K Street lobbying firm in D.C. Academics who've just received their Ph.D. should spend a semester teaching at a community college, followed by one at a private university. Super-smart programmers from Stanford would get to pursue their own VC-funded startup — but only after they had spent some time writing code for J. C. Penney's website and the payroll system of a bank in Charlotte.

In short, the guild system needs renewal and expansion. If it's handled well, some people will experience a calling to work they never would have thought they could stand — until they were forced to do it. Perhaps some rough patches in uncomfortable situations will make them more thankful for the permanent positions they end up taking, and in any case will give many of them more empathy for their colleagues who spend large chunks (or the whole) of their careers in those "uncomfortable situations."

The guild-based Wanderjahre would not just be good for people newly arrived in a profession: it would promote among senior members of that profession, who would need to observe and evaluate the "journeymen," a sense of responsibility for the long-term health of their calling. At first, of course, we old sods would bitch and moan about the time it takes, time that (we tell ourselves) we'd otherwise be devoting to really important research, but after a generation or two those attitudes would fade: those who had experienced the Wanderjahre would know its value and would want to extend that value to others.

Would the system be subject to abuse, from younger and older members of the profession alike? Of course it would. All human systems are subject to abuse. But would it be better than the current system, or lack thereof? I bet it would.

Wednesday, October 16, 2013

coping with OS frustration

Alex Payne recently did what I do, in a less thorough way, from time to time: he re-evaluated his commitment to the Apple ecosystem. It’s a valuable exercise; among other things, it helps me to manage my frustrations with my technological equipment.

And frustrations there are — in fact, they have increased in recent years. You don't have to look far to find articles and blog posts on how Apple’s quality control is declining or iOS 7 is a disaster. (Just do a Google search for those terms.) And I have to say that after a month of using iOS 7 I would, without question, revert to iOS 6 if I could, a handful of new and useful features notwithstanding. Moreover, even after more than a decade of OS X the ecosystem still lacks a first-rate web browser and a largely bug-free email client. (Most people know what's wrong with Mail.app, but I could write a very long post on what's wrong with Safari, Chrome, and Firefox. Postbox is looking pretty good as an email client right now, but time will tell whether it's The Answer.)

But in the midst of these frustrations and others I need to keep two points in mind. First, we ask more of our computers than we ever have. Browsers, for instance, are now expected not just to render good old HTML but to play every kind of audio and video and to run web apps that match the full functionality of desktop apps. And increasingly we expect all our data to sync seamlessly among multiple devices: desktops, laptops, tablets, phones. There is so much more that can go wrong now. And so it sometimes does.

Second, as Alex Payne’s post reminds us, every other ecosystem has similar problems — or worse ones. And that’s a useful thing to keep in mind, especially when I’m gritting my teeth at the realization that, for instance, if you want to see the items in your Reminders app in chronological order you must, painstakingly, move them into the order you want one at a time. The same is true on the iOS versions. It seems very strange to me that such an obviously basic feature did not make it into the first released version of the software, and frankly unbelievable that manual re-ordering is your only option two years after the app was first introduced (in iOS 5) — but hey, influential Mac users have been complaining about fundamental inconsistencies in the behavior of the OS X Finder for about a decade now, with no results. This is the way of the world: the things that need to be fixed are ignored and the things that don’t need to be fixed get changed, as often as not for the worse. So whaddya gonna do?

One thing I’m not going to do is to throw the whole ecosystem out with the bathwater — and thanks to Alex Payne for preventing me from doing so. Better for me to make the most of a system I know how to use than to start over from scratch with something utterly unfamiliar that has at least as many problems of its own. And one thing I most certainly will do: I’ll keep asking Why in the hell won’t this thing just work?

Tuesday, October 15, 2013

reading the Wake

James Joyce, text portrait by Rod McLaren


Let me just post without comment — except for taking this moment to express my admiration — this account of people who gathered in a bookstore to read Finnegans Wake aloud:

We would gather around in a circle at Alias Books, lock the doors, and read out loud. We met every Sunday @ 11pm, and would average about 20-40 pages per mtg. It took us about 7 or 8 months to finish the book. This was the first book to kick off our Night Owl bookgroup (running about four years now), and we would experiment with our reading of it. We began reading it conventionally, falling into the normal trap of using conventional language: it must have one setting, one plot, each word must have only one meaning, the book must have one overall message. After discovering how FW aims to destroy this mode of thinking, we decided to experiment with our reading, not take the book so seriously, and let the experience of reading it takeover.

For instance, during one reading--I wish I could remember where we were in the book--for some reason, almost simultaneously, we all got up and started walking around the bookstore in a single file line, up and down the aisles, until either the page or the paragraph was finished. I do remember we were a fair way through FW, and had learned how to read its rhythms and pauses, and somehow we all agreed to physically mimic them in that one moment. FW is in part an invitation to performance art, as well as being a drama.

This is probably the best advice I know to give to readers of the Wake: let the text show you how to "read" it, how to perform it, what to do with it, how to use it. Because FW is a book about what has happened as well as what will happen--Joyce was a very unique kind of prophet--FW asks us to pay some attention to the present moment, and to the specific point in time that we are reading it. And as we read it, it read us: collectively, and, in its curious way, individually.

Monday, October 14, 2013

surprised by typography


A while back I posted on the great spaces-after-a-period controversy and the revelation by one Heraclitus that typographic history is more varied and complicated than certain modern scolds would have you believe.

As part of my research on Christian humanism and World War II, I’ve been reading the little book whose cover is pictured above, by the great genius and filthy disgusting pervert Eric Gill. I doubt that Gill had complete control over the typesetting of this book, though it does bear some of the characteristic marks of his typographic practice, but it’s interesting to look at all the same. Consider this page:

Let’s take a moment to note the distinctive elements here: the hanging indents; the use of the pilcrow to indicate a new section; wide spacing after some sentences; spaces separating quotation marks from the words or phrases quoted. It’s a reminder to me that the typographic conventions of today’s books are pretty rigid in comparison — a function, perhaps, of almost every printed book being typeset with a tiny handful of computer applications, which practice leaves little room for personal typographic style? I don't know about you, but I wouldn't mind being surprised by typography more often than I am.

Friday, October 11, 2013

pay the writer?

Philip Hensher has a point:

Frustration spilled out on Facebook after a University of Cambridge professor of modern German and comparative culture, Andrew Webber, branded the acclaimed literary novelist Philip Hensher priggish and ungracious when the author refused to write an introduction to the academic's forthcoming guide to Berlin literature for free.

Hensher said: "He's written a [previous] book about writers in Berlin during the 20th century, but how does he think that today's writers make a living? It shows a total lack of support for how writers can live. I'm not just saying it for my sake: we're creating a world where we're making it impossible for writers to make a living."

Hensher, who was shortlisted for the Man Booker prize in 2008 for his novel The Northern Clemency, a portrait of Britain's social landscape through the Thatcher era, wrote his first two novels while working a day job, but said: "I always had an eye to when I would make a living from it."

"If people who claim to respect literature – professors of literature at Cambridge University – expect it, then I see no future for young authors. Why would you start on a career of it's not just impossible, but improper, to expect payment?"

What Andrew Webber seems to be forgetting is that he has a day job, and for those of us in that situation the rules may be different — in fact, surely the rules are different, but I’m just not sure precisely how.

Almost everyone understands that when you write a book (whether academic or popular) you’ll be paid royalties as a percentage of sales; almost everyone understands that when you write an academic article you won’t be paid at all except insofar as publication itself is a kind of currency that you may be able to exchange for tenure or promotion or a more attractive position elsewhere. And in any case doing such writing is part of the academic job description. This kind of publication rarely has certain and measurable value; but as a general proposition its value is clear — for academics. However, it’s completely unfair and unreasonable to expect non-academics to write for no money when they’re not getting anything else for it either: every professional writer should join in the Harlan Ellison Chorus: PAY THE WRITER.

That said, there are a great many fuzzy areas here, especially in relation to online writing, because every major outlet is constantly starved for new content — more content than almost any outlet can reasonably be expected to pay, or pay more than a pittance, for. Thus Slate’s Future Tense blog asked to re-post a post I wrote here — but of course did not offer to pay for it. I said yes, but should I have?

I didn't really expect to get anything out of it — I suppose a couple of people clicked over to this blog, but I think few common convictions are less supported by evidence than the one that says you get “publicity value” by “getting your name out there.” (No direct route from there to cash on the barrelhead.) But it didn’t seem as though it would be hurting anyone, so why not?

Well, one might argue that I can support the Ellison Principle (PAY THE WRITER) by insisting on being paid for everything I write, online and offline: if writers were to form more of a common front on this matter, then we could alter the expectations and get online outlets to see paying for writing as the norm.

But magazines and websites have limited resources, so if every writer insisted on getting paid then there’d be far less new content for them to post and publish — and few of us would be happy with that. And in any case, writers would never be able to achieve a uniform common front: there will always be people, especially younger, less established writers, who believe in the “get your name out there” argument and will act accordingly.

And here’s another complication: since I do have a day job and am not trying to make a living by my writing, maybe if I don’t ask for financial compensation I can liberate money for people who really need it. Or would I just be tempting editors to publish less stuff by full-time writers because they can get free content from me?

I CAN’T FIGURE THIS OUT. Help me, people.

Thursday, October 10, 2013

Joyce, Tolkien, and copyright

James Joyce’s Ulysses is fascinating in many ways, not least because it has proven such a magnet for controversy of all kinds: it has been at the center of hullabaloos about obscenity law, about textual editing, and — as Robert Spoo’s new book demonstrates — about copyright. I haven’t read Spoo’s book yet, but I want to after reading Caleb Crain’s lucid review of it.

As is often the case when I find myself thinking about Ulysses, my mind turns towards The Lord of the Rings. This pairing is probably as odd as I suspect it is, but the books have some curious things in common: each seeks to renew and transfigure some inherited literary form; each tries to reconceive the idea of epic scope; each has been accused of being excessively masculine in its understanding of the world; each, thanks in part to endless authorial fiddling, has been the object of a great many controversies; and finally, each has been involved in all sorts of copyright issues.

Crain writes in his review,

Law isn’t the only way for people who do business together to keep one another in line. In most fields, there’s a faster, cheaper and simpler sanction: don’t do business with the miscreant anymore. Such self-policing by a group isn’t fail-safe. Ostracism might not cost enough to be a deterrent in markets with many participants, little reporting and few long-term relationships, and there will always be a few bad actors who choose to be disreputable. But law, no matter how absolute, doesn’t prevent every act of bad behavior either, and self-regulation is more flexible and quicker to adapt to changing circumstances. The phenomenon has been called “order without law,” and it has been detected in Maine lobstermen, who respect one another’s trapping sites; in chefs, who are ginger about knocking off one another’s recipes; and in stand-up comics, who usually refrain from stealing one another’s routines and punch lines. It has even been found, believe it or not, in publishing. Sometimes, in the absence of copyright, publishers have paid authors and have abstained from reprinting the books of authors they haven’t paid. Ulysses, by James Joyce, considered by some the greatest novel of the twentieth century, lost its copyright protection in America on a technicality soon after it was published. But from the 1930s to the ’90s, Joyce and his estate were paid royalties from its publication in America anyway, thanks to exactly this kind of happy anarchy.  

With The Lord of the Rings, things didn’t happen quite this way. In 1965, the bosses at Ace Books decided that they had discovered a loophole in the copyright law that allowed them to publish their own edition of the novel — and to pay J.R.R. Tolkien absolutely nothing for doing so. It seems hard to believe that as recently as fifty years ago the American publishing industry was sufficiently chaotic for any publishing executives to think they could get away with this, but they printed 150,000 copies — you heard that right: one hundred and fifty thousand copies — of each of the three volumes of LOTR, which of course sold like hotcakes. After some huffing and puffing by Tolkien and his American publishers the Ace guys decided that they had received enough legal threats, bad publicity, and cash on the barrelhead that they should probably send the author some money and let their edition slide gracelessly out of print. Still, they probably came out well ahead on the deal. “Order without law” indeed.

Wednesday, October 9, 2013

Auden's two cheers for democracy

The major project I am currently working on concerns Christian humanism in a time of total war — in particular, in World War II. In the midst of an unprecedentedly vast war, a number of prominent and highly accomplished intellectuals saw the need for a renewal of a rich and subtle humanism — which is surprising in itself, it seems to me — and for many of them that humanism needed to be grounded in a doctrinally robust Christianity. This seemed odd enough to me that I thought it needed to be accounted for. Thus this book.

One of the major figures in the story I’ll tell is W. H. Auden, and I’ll give significant attention to a little-known lecture he gave at Swarthmore College, where he taught during much of the war. Swarthmore has, to my great pleasure, made available online its collection of Auden memorabilia — including the full typescript of the lecture, entitled “Vocation and Society”. (How cool is that?)

In the book I’ll explore this lecture at some length, but right now I’ll just say something about the end of his talk, where he introduces an interesting and important question: Is democracy after all sustainable? Or, to put the question more precisely, Is it self-sustaining? Auden echoes a famous essay by E. M. Forster in offering “Two Cheers for Democracy,” but he withholds the third cheer for rather different reasons than the atheist Forster had. “Two cheers for Democracy,” says Auden: “one because it admits vocation, and two because it permits contrition. Two cheers are quite enough. There is no occasion to give three. Only Agape, the Beloved Republic, deserves that.” What he would later call “our dear old bag of a democracy” is sustained, not by itself, but by belief in something deeper and greater than itself. So Auden concludes his talk not with those cheers, but with the reading of a few lines of a very recent poem.

Just four months earlier T. S. Eliot had published “Little Gidding,” the last of his Four Quartets, and Auden finished his talk by reading the poem’s concluding lines:

And all shall be well and
All manner of thing shall be well
When the tongues of flame are in-folded
Into the crowned knot of fire
And the fire and the rose are one.

Auden’s vision, then, is of a vocation-based education sustained by a democratic polity, and a democratic polity sustained by Christian faith. This vision stood against the commanding power of the nation-state, against pragmatism, against modern technocratic canons of efficiency.

Just after the war Auden visited Harvard to read a poem to the Phi Beta Kappa Society. One of the dominant figures of American culture at that time was James Bryant Conant, Harvard’s president, who, captured by the techno-utopian mood of the war years, was striving to modernize the university and transform it into a research powerhouse focused on science and technology. In so doing he emphasized the humanities, especially the classics, far less than Harvard had done through much of its history. Auden told Alan Ansen, “When I was delivering my Phi Beta Kappa poem in Cambridge, I met Conant for about five minutes. ‘This is the real enemy,’ I thought to myself. And I’m sure he had the same impression about me.”

Tuesday, October 8, 2013

analogies

Very few people understand how to evaluate analogies properly. An analogy will have explanatory value if the things or experiences or events or ideas likened to one another are indeed alike in the respect called attention to by the analogy. Far too many people think they can deny the validity of an analogy between X and Y by pointing out ways in which X and Y are different. Yes, and if they were not different you couldn’t analogize them because they would be the same thing. In Thomistic terms, you do not discredit an exercise in analogical predication by gleefully announcing that the predication is not univocal.

Here’s the proper way to evaluate an analogy:

1) Ask this question: Does the person making the analogy between X and Y explain the respect in which he or she claims that X and Y are similar?

2) If not, ask the person to clarify that point.

3) If so, think about whether X and Y are indeed similar in the respect specified. If so, the analogy is legitimate. If not, the analogy fails.

4) Feel free at this point to pursue other questions about the analogy, e.g., whether even if legitimate it identifies an important similarity, or whether the analogy does the intellectual work its maker thinks it does.

Thank you for your time. We will now return to our regularly scheduled programming.

Monday, October 7, 2013

the Fanny Price we'll never see

I’m not especially excited about the Austen Project:

The Austen Project, with bestselling contemporary authors reworking “the divine Jane” for a modern audience, kicks off later this month with the publication of Joanna Trollope’s new Sense & Sensibility, in which Elinor is a sensible architecture student and impulsive Marianne dreams of art school.

Also promised are versions from Val McDermid (Northanger Abbey), Curtis Sittenfeld (Pride & Prejudice) and – gadzooks – the prolific Alexander McCall Smith, most famous for his Botswanan private eye novels, who has been let loose on Emma (an experience he describes as “like being asked to eat a box of delicious chocolates”).

Interestingly, but unsurprisingly, no one has signed up for what I believe to be Austen’s greatest novel, Mansfield Park. Why am I not surprised? Well, consider a passage from the best essay, by far, ever written about Mansfield Park, in which Tony Tanner writes,

Fanny Price exhibits few of the qualities we usually associate with the traditional hero or heroine. We expect them to have vigour and vitality; Fanny is weak and sickly. We look to them for a certain venturesomeness or audacity, a bravery, a resilience, even a recklessness; but Fanny is timid, silent, unassertive, shrinking and excessively vulnerable. Above all, perhaps, we expect heroes and heroines to be active, rising to opposition, resisting coercion, asserting their own energy; but Fanny is almost totally passive. Indeed, one of the strange aspects of this singular book is that, regarded externally, it is the story of a girl who triumphs by doing nothing. She sits, she waits, she endures; and, when she is finally promoted, through marriage, into an unexpectedly high social position, it seems to be a reward not so much for her vitality as for her extraordinary immobility. This is odd enough; yet there is another unusual and even less sympathetic aspect to this heroine. She is never, ever, wrong. Jane Austen, usually so ironic about her heroines, in this instance vindicates Fanny Price without qualification. We are used to seeing heroes and heroines confused, fallible, error-prone. But Fanny always thinks, feels, speaks and believes exactly as she ought. Every other character in the book, without exception, falls into error — some fall irredeemably. But not Fanny. She does not put a foot wrong. Indeed, she hardly risks any steps at all: as we shall see, there is an intimate and significant connection between her virtue and her immobility. The result of these unusual traits has been to make her a very unpopular heroine.

The pivotal event of the novel is a long scene in which various young people gathered at Mansfield Park, the Bertram country house, decide to stage a play that Fanny believes to be immoral and that she therefore quietly but firmly refuses to act in. When Sir Thomas Bertram unexpectedly returns from Antigua and finds them in the middle of rehearsals, his younger son Edmund meets with him and confesses the general impropriety. But he adds this: “‘We have all been more or less to blame,’ said he, ‘every one of us, excepting Fanny. Fanny is the only one who has judged rightly throughout; who has been consistent. Her feelings have been steadily against it from first to last. She never ceased to think of what was due to you. You will find Fanny everything you could wish.’”

There is absolutely no chance that any novelist now living will even attempt to portray a Fanny Price remotely like the character Austen created. It is impossible to imagine any moral idea more completely alien to the spirit of our time than the notion that someone can exhibit virtue by refraining from participating in the recreations that other people enjoy. Prig! Prude! Narrow-minded bigot!

If anyone ever does sign on to re-write Mansfield Park, one of two things will happen: either Fanny will become a completely different character, one not noted for her “immobility” and resistance to evils small and large, or she will have those traits and will therefore be explicitly portrayed as what Kingsley Amis said the original Fanny was, “a monster of complacency and pride.” There are no other foreseeable options.

on The Book of Common Prayer: A Biography

My “biography” of the Book of Common Prayer is now available and I hope some of you will buy it. It was a great deal of fun to write — though I have to say, I found it extremely challenging to fit such a complex story into the relatively brief format of the series.

Speaking of the series, it’s a wonderful one, created by Princeton University Press’s religion editor, Fred Appel. Fred’s terrific idea was simply this: that all books, but in an especially interesting way religious books, have lives: their story really only begins when they appear, and develops over centuries or millennia as readers encounter and respond to them.

The Book of Common Prayer, unlike many other books in the series, constitutes something of a moving target because it has been revised several times and has given birth to prayer books in countries other than England. I have tried to trace some of those ramifying lines of development, though my chief emphasis has been the English book.

I loved working on this book because it gave me the chance to write about so many things that fascinate me: the Anglican tradition of which I have been a part for almost thirty years; the visual, aural, and written forms of worship; ecclesiastical controversy; literary influence and linguistic echoing; and, not least, the history of books and book-making (though I had to confine a good bit of that to an appendix).

And on that last point: this is my third time working with Princeton University Press, and of all the publishers I know they are the most devoted to the craft of bookmaking. The Book of Common Prayer: A Biography is at the very least a very beautiful little book — as were the two Auden editions I have also done for PUP, For the Time Being: A Christian Oratorio and The Age of Anxiety: A Baroque Eclogue. I own a great many e-books, but these you’ll want to have in paper and boards if at all possible.

One more thing: you might want to check out my tumblelog devoted to the book — it has some lovely images and even a few relevant words.

Friday, October 4, 2013

Aaron Swartz and MIT

I have one thing to say about this statement from an MIT professor about the Aaron Swartz tragedy, and that’s that the piece doesn’t say anything. It’s supposed to point the way beyond the big report on how MIT dealt with Swartz, but it doesn’t. It just cycles through some typically vacuous boilerplate administrative prose: We need to “review our practices,” we “were not intellectually engaged,” we “might have a responsibility to help [our students] grapple with the reality of that power.” What power, you ask? Um, let’s see: “The young people we work with are so extraordinary, and are so empowered by their time here.” That power. Or empowerment. Or whatever.

I bet there have been some really fascinating discussions going on at MIT since Swartz’s death, because the legal and ethical issues raised by (a) his actions and (b) his prosecution are manifold. I want to understand those issues better, and to see how major universities are going to respond to them in practice. Will they try to close down their networks to make access more difficult? Will they accept the legal aggressiveness of entities like JSTOR, or will they try to moderate them? What if powerful institutions, especially those that sponsor influential academic journals, were to refuse to cooperate with the JSTOR subscription model and sought other avenues for funding?

As I say, I bet the conversations at MIT on these matters are fascinating — though maybe not. The one really interesting point in the statement is this:

In reviewing the record for the report, I was struck by how little attention the MIT community paid to the Swartz case, at least before the suicide. The Tech carried regular news items on the arrest and the court proceedings. Yet in the two years of the prosecution, there was not one opinion piece, and not one letter to the editor. The Aaron Swartz case offers a textbook example of the issues of openness and intellectual property on the Internet—the kinds of issues for which people traditionally look to MIT for intellectual leadership. But when those issues erupted in our midst, we didn’t recognize them, and we were not intellectually engaged. Why not?

It is possible, of course, that the community was “intellectually engaged” — just not in public. As Aaron Swartz discovered to his cost, in these matters people and institutions can be quite fierce in protecting their interests. Were faculty and administration at MIT disengaged? Or just wary about going on the record?

In any case, I am sure that in the aftermath of Swartz’s suicide the university’s lawyers have gone over every public statement from MIT employees, including this one, to make sure that they are so anodyne that they might as well not have been written at all.

Wednesday, October 2, 2013

"a machine that would go of itself"

image via Grantland

So, mainly to spite Ross Douthat, I watched the finale (“Felina”) of Breaking Bad. It was impressive in every respect.

But of course there’s no way for me, even as someone who knows the plot of the series and how the major themes have developed over time, to understand the episode completely or to get the full effect — because, you know, I haven’t actually been watching it. It’s interesting to think about the things that I didn’t know and couldn’t know as I watched. For instance, while I could clearly discern the valedictory character of Walt’s last meeting with Skyler, I was limited in my ability to grasp it by not having a reservoir of memories of how Bryan Cranston and Anna Gunn interacted with each other through the whole course of the series. I didn’t have a visual and aural record of body language, gesture, vocal intonation, eye contact, etc. I could cite many more examples.

All this makes me wonder what it might be like to go back and watch the whole series now, knowing the general story arc in advance and the concluding episode in detail. I’m unlikely to do that, but in principle such an experience need not be absolutely inferior to having watched the show all along. Though it would certainly be different.

All that said, I have one major point to make: it’s been interesting to see how the conclusion of the show hasn’t settled any of the long-standing arguments about it. Consider Walt’s intimidating Elliott and Gretchen into giving money to Skyler and Flynn: surely a deed confirming Team Walt in their belief that he’s basically a good guy who deeply loves his family? No, say others: it’s his refusal to accept their refusal of his dirty money, a determination to get his own way, to have his will realized, by hook or by crook and come what may. In this reading it’s not generosity, it’s the triumph of the will. Even his already-much-celebrated confession to Skyler — “I did it for me. I liked it. I was good at it. And I was really… I was alive.” — can be read either as a commendable final honesty from Walt or as a canny move on his part: he’s saying what he has to say to get Skyler to do what he wants her to do.

Whether out of good motives or bad or some combination of the two, Walt’s goal is to ensure that certain things happen after he dies — to build a machine that continues to function after its creator is dead. That’s why it seems to me that an important visual motif in “Felina” — one that I haven’t seen noticed by other commentators, though I’m sure some have mentioned it — emerges after the massacre of the Nazis: the massaging recliner continues to massage the dead body of its occupant, and the oscillator that Walt had built to guide the path of the machine gun continues to trace its pre-ordained path long after the gun is empty. (Even if the Nazis had shot Walt, if he had been able before his death to press the trunk-release they would have been killed anyway.) The quiet sounds of those two thoughtless, mindless machines in the aftermath of bloodshed are deeply eerie. That scene alone made me wish I had watched the show all the way through….

But this is what Walt is trying to build: a machine made of manipulation that will run on after his death, getting that dirty money to Skyler and Flynn whether they like it or not.

It’s a machine that works flawlessly — all of Walt’s plans in this episode work flawlessly, even something as wildly implausible as his poisoning of Lydia. Watch that scene again and try to figure out how he does it. I think it’s impossible — sort of like the way he drifts in and out of places that ought to be heavily guarded or closely observed. Implausibilities and impossibilities pile up one on another, to the point that it’s hard for me not to think that the Emily Nussbaum “it’s all a dream” theory — later endorsed by Douthat — makes for a better and more satisfying reading of the show’s conclusion than any other.

It’s nice to dream of machines working flawlessly, carrying out our plans to their envisioned perfection. But machines rarely work flawlessly; they’re like their makers in that respect.

the future of the URL

Interesting little comment by Nicholson Baker about something he does in his new novel:

The odd thing about the reaction to this book is that almost everybody is most interested in the fact that I included a YouTube URL in the book. Such a tiny thing, but in the moment I thought: okay, I’ll be really adventurous. I didn’t know it would be the thing people really paid attention to. Maybe it was a mistake. I think it was a bit of nostalgic postmodernism. In the way that people paint a photorealistic painting of a street sign. “Look at this! Look at this sequence of letters! Think about the fact that it takes you to a human voice singing in a Paris hotel room. Look at that, and be happy.” So, at that point, I was just sitting there, thinking: “Well, I really want people to listen to Stephen Fearing. I really would like that. If my book could do one thing, it would be that people would actually be guided to listen to Stephen Fearing.” And of course the worst possible way to tell them to go, I guess, is to give them a dead YouTube link, because they’re going to make a typo. The best way is to type “Paris hotel Fearing” or something. So I kind of blew it.

Which is a reminder of what a lousy technology, from the user’s point of view, the URL is — though oddly, it was created in order to help users: a URL, with its readable domain name, like www.google.com, merely points to what your computer and your network think of as the real target, the IP address, like http://74.125.224.72/. For human beings, the former is easier to remember than the latter. (The slashes come from UNIX file-path syntax: clearly Tim Berners-Lee and the other architects of the Web were thinking of pages within a domain as being like files within a directory.) Those alphabetic URLs worked pretty well, for those used to using UNIX anyway, until websites started proliferating, which led inevitably to URLs getting longer and longer — and then, equally inevitably, to the rise of URL-shortening services.
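To make that layering concrete, here is a minimal Python sketch of the name-to-address step — just an illustration, and the numeric address it prints will vary, since the mapping between names and addresses changes over time:

```python
import socket
from urllib.parse import urlparse

# The URL is the human-readable layer; the network routes on the IP address.
url = "http://www.google.com/"

# Extract the readable domain name from the URL.
hostname = urlparse(url).hostname  # "www.google.com"

# Ask DNS for the numeric address that the name currently points to.
ip_address = socket.gethostbyname(hostname)

print(f"{hostname} -> {ip_address}")
# e.g. "www.google.com -> 142.250.80.104" -- the mapping is not fixed
```

Everything above that numeric layer exists purely for the convenience of human memory.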

But these URL shorteners weren’t intended to make locations more readable — shortened URLs are typically gibberish — but to keep them from taking over emails and other messages in which they were included. And anyway, within the last decade more and more people have been giving up on even a basic understanding of URLs, trusting Google results instead. Google’s highlighted blue links — in your own language! — offer a layer of additional, simplified comprehensibility above the layer of alphabetic comprehensibility that Berners-Lee created to cover the basically incomprehensible layer of the IP address.
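As a rough illustration of that point — not how any particular service actually works — a shortener just mints an opaque token and remembers which long URL it stands for. A toy Python sketch, using a made-up short.example domain:

```python
import hashlib

# Toy shortener: derive a short, opaque token from the long URL.
# Real services store the token-to-URL mapping in a database and
# redirect visitors; this sketch only shows why the result is gibberish.
def shorten(long_url: str, length: int = 7) -> str:
    digest = hashlib.sha256(long_url.encode("utf-8")).hexdigest()
    return digest[:length]  # short by design, readable by nobody

long_url = "http://www.example.com/a/very/long/path?with=many&query=params"
print(f"http://short.example/{shorten(long_url)}")
# prints something like http://short.example/3f9c0d1 -- compact,
# but meaningless to a human
```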

Outsourcing navigation to the search engine doesn’t always work, though: thus the ridiculous scene a few years ago when ReadWriteWeb ran a story called “Facebook Wants To Be Your One True Login” which temporarily became the first result if you Googled “facebook login.” The result: people clicking on that link and becoming outraged that Facebook was not allowing them to log in. This little event gives us a lot to reflect on: Should we think first about how dimwitted people must be to fail to notice that a ReadWriteWeb article was not Facebook? Or should we consider that Facebook had developed such a habit of changing its appearance that users weren’t that surprised to find a dramatically new design?

I dunno. But what interests me about the story is this: There are many users who are so resistant to or uncomfortable with URLs that typing “facebook login” into a Google search box is more comfortable than typing “Facebook.com” into a browser’s address bar.

So we should probably be thinking about what the next steps are in the evolution of the IP address. The creation of the address itself was the first stage; the creation of the URL the second; the preferential use of the search engine the third. I suppose the immediate future largely involves using services like Apple’s Siri to search by voice, but for ambitious technologists, that’s just a stopgap. The very idea of the IP address assumes an intelligent agent who has some idea of what she wants and some way to find it — and that’s an assumption tech companies don’t want to make any more. As Phil Libin of Evernote commented, “By the time you search, something’s already failed.” The ultimate layer of abstraction away from the IP address is a world in which intelligent agency, the human questioner, has disappeared altogether. We won’t need URLs then.