Ned O’Gorman, in his response to my 79 theses, writes:

Of course technologies want. The button wants to be pushed; the trigger wants to be pulled; the text wants to be read — each of these want as much as I want to go to bed, get a drink, or get up out of my chair and walk around, though they may want in a different way than I want. To reserve “wanting” for will-bearing creatures is to commit oneself to the philosophical voluntarianism that undergirds technological instrumentalism.

We’re in interesting and difficult territory here, because what O’Gorman thinks obviously true I think obviously false. In fact, it seems impossible to me that O’Gorman even believes what he writes here.

Take for instance the case of the button that “wants to be pushed.” Clearly O’Gorman does not believe that the button sits there anxiously as a finger hovers over it thinking o please push me please please please. Clearly he knows that the button is merely a piece of plastic that when depressed activates an electrical current that passes through wires on its way to detonating a weapon. Clearly he knows that an identical button — buttons are, after all, to adopt a phrase from the poet Les Murray, the kind of thing that comes in kinds — might be used to start a toy car. So what can he mean when he says that the button “wants”?

I am open to correction, but I think he must mean something like this: “That button is designed in such a way — via its physical conformation and its emplacement in contexts of use — that it seems to be asking or demanding to be used in a very specific way.” If that’s what he means, then I fully agree. But to call that “wanting” does gross violence to the term, and obscures the fact that other human beings designed and built that button and placed it in that particular context. It is the desires, the wants, of those “will-bearing” human beings that have made the button so eminently pushable.

(I will probably want to say something later about the peculiar ontological status of books and texts, but for now just this: even if I were to say that texts don’t want I wouldn’t thereby be “divesting” them of “meaningfulness,” as O’Gorman claims. That’s a colossal non sequitur.)

I believe I understand why O’Gorman wants to make this argument: the phrases “philosophical voluntarism” and “technological instrumentalism” are the key ones. I assume that by invoking these phrases O’Gorman means to reject the idea that human beings stand in a position of absolute freedom, simply choosing whatever “instruments” seem useful to them for their given project. He wants to avoid the disasters we land ourselves in when we say that Facebook, or the internal combustion engine, or the personal computer, or nuclear power, is “just a tool” and that “what matters is how you use it.” And O’Gorman is right to want to critique this position as both naïve and destructive.

But he is wrong if he thinks that this position is entailed in any way by my theses; and even more wrong to think that this position can be effectively combatted by saying that technologies “want.” Once you start to think of technologies as having desires of their own you are well on the way to the Borg Complex: we all instinctively understand that it is precisely because tools don’t want anything that they cannot be reasoned with or argued with. And we can become easily intimidated by the sheer scale of technological production in our era. Eventually we can end up talking even about what algorithms do as though algorithms aren’t written by humans.

I trust O’Gorman would agree with me that neither pure voluntarism nor purely deterministic defeatism is an adequate response to the challenges posed by our current technocratic regime — or to the opportunities offered by human creativity, the creativity that makes technology intrinsic to human personhood. It seems that he thinks the dangers of voluntarism are so great that they must be contested by attributing what can only be a purely fictional agency to tools, whereas I believe that the conceptual confusion this creates costs us a necessary focus on human responsibility.

cross-posted at The Infernal Machine

6 Comments

  1. I am reminded by this interesting exchange of something our friend (and New Atlantis contributing editor and onetime IASC lecturer) Stephen Talbott wrote back in 1997. He was responding to a critic who wrote that computers and the Internet don't "do" anything; it's people who do things:

    All such statements of the "guns don't kill people" variety (or of the opposite, "guns do kill people" variety) are likely to provoke yet another installment of my periodic harangue about technological neutrality. This one is no exception.

    The argument that "guns don't kill people; people do" is unassailably correct — and comes down nicely on the side of human freedom to use technology as we choose….

    But there's another side to the story. Every technology already embodies certain human choices. It expresses meanings and intentions. A gun, after all, was pretty much designed to kill living organisms at a distance, which gives it an "essentially" different nature from, say, a pair of binoculars.

    If all technology bears human meanings and intentions, the networked computer carries the game to an entirely different level. Its whole purpose is to carry our meanings and intentions with a degree of explicitness, subtlety, intricacy, and completeness unimaginable in earlier machines. Every executing program is a condensation of certain human thinking processes. At a more general level, the computer embodies our resolve to approach much of life with a programmatic or recipe-like (algorithmic) mindset. That resolve, expressed in the machinery, is far from innocent or neutral when, for example, we begin to adapt group behavior to programmed constraints.

    Putting it in slightly different terms: Yes, our choices individually and collectively are the central thing. But a long history of choices is already built into the technology. We meet ourselves — our deepest tendencies, whether savory or unsavory, conscious or unconscious — in the things we have made. And, as always, the weight of accumulated choices begins to bind us. Our freedom is never absolute, but is conditioned by what we have made of ourselves and our world so far. The toxic materials I spread over my yard yesterday restrict my options today.

    It is true, then, that everything comes down to human freedom and responsibility. But the results of many free choices — above all today — find their way into technology, where they gain a life and staying power of their own. We need, on the one hand, to recognize ourselves — pat, formulaic, uncreative — in our machines even as, on the other hand, we allow that recognition to spur us toward mastery of the machine.

    It is not, incidentally, that the effort to develop the latest software and hardware was necessarily "pat and formulaic." It may have been extremely creative. But once the machine is running and doing its job, it represents only that past, creative act. Now it all too readily stifles the new, creative approaches that might arise among its users. Every past choice, so far as it pushes forward purely on the strength of its old impetus, so far as it remains automatically operative and thereby displaces new choices — so far, that is, as it discourages us from creatively embracing all the potentials of the current moment — diminishes the human being. And the computer is designed precisely to remain operative — to keep running by itself — as an automaton dutifully carrying out its program.

    The only way to keep our balance is to recognize what we have built into the computer and continually assert ourselves against it, just as you and I must continually assert ourselves against the limitations imposed by our pasts and expressed in our current natures.

    That's from the January 8, 1997 issue of Steve's "Netfuture" newsletter.

    I'm enjoying the discussion flowing from the 79 Theses, btw.

  2. "But to call that “wanting” does gross violence to the term, and obscures the fact that other human beings designed and built that button and placed it in that particular context. It is the desires, the wants, of those “will-bearing” human beings, that have made the button so eminently pushable."

    Not sure why delegated agency is any less agency post delegation. If I make a bomb, it is only a bomb if it has within it the (constrained) will to explode. When that will is released – which could happen by my pressing a button, or a timer pressing a button, but also just because of the instability of its chemical components – the bomb does the damage. I could be held legally responsible for having built and placed and timed the bomb, but that is a different matter from whether the bomb has agency.

    The expertise of an interaction designer is the creation of things that prompt users to act in particular ways. The phenomenological experience of using a device, especially in the early stages, is of negotiating with what that device's design intends. It is rare that you are thinking, 'what did Jony Ive want me to do with this thing?' You instead engage with things as having their own not-yet-understood intentions. The process is precisely one of argumentation, one that hopefully becomes a hermeneutic fusion of horizons in the end. At that point, you can begin to rely on the agency of the thing to do what it was designed to do – you can get on with your own precious human agency trusting in the knowledge that you are accompanied by 'device-friends' looking out for other aspects of being human.

    I never see why this all seems so threatening to our notions of being human. I shake my head along with Bruno Latour; but for a more poetic and thorough treatment, see the final chapter of Elaine Scarry's _The Body in Pain_ (Harvard UP, 1985).

  3. "Not sure why delegated agency is any less agency post delegation. If I make a bomb, it is only a bomb if it has within it the (constrained) will to explode."

    This seems to me to miss an elementary distinction, that between will and potential.

  4. I was hoping the argument was about how to nuance the distinction rather than explode it; about letting some things be called 'wanting' without prescribing wanting as something that only humans can do; about seeing why it might be fruitful to acknowledge kinds of non-human wanting.

    A button has the potential to be pushed, but so do the non-button bits of the product, if you (want to) push hard enough. So there are potentials and potentials. A designer makes the button not only easier to push than the non-button bit, but also more attractive to push, and, if they are designing well, suggests that pushing the button will cause the thing to do something. The button affords pushing; it is more than an action-possibility (potentia) and more like an action-promise (dynamis).

    Perhaps we should preserve 'wanting' for those special things called humans, but without something more than an instrumental materialism of potentials, how to explain how some designs work better than others, how some designs become more habitual and/or addictive than others, how some designs function more responsibly or irresponsibly than others?

  5. "I was hoping the argument was about how to nuance the distinction rather than explode it; about letting some things be called 'wanting' without prescribing wanting as something that only humans can do…"

    But who said wanting is "something that only humans can do"? Certainly not I. I said that buttons don't want, but there are more things in the world than humans and buttons.

    Also, I think it's very interesting to explore "how to explain how some designs work better than others, how some designs become more habitual and/or addictive than others, how some designs function more responsibly or irresponsibly than others," but the noun "design" requires a designer — not necessarily human: do spiders design their webs? — and designs certainly do not simply happen. I think "instrumentality" is precisely the nexus within which to understand them: designed things, especially in late capitalism, are first and foremost instruments meant to execute the wills of their makers and distributors, though they may possibly be adapted to execute the wills of their users.
