Text Patterns - by Alan Jacobs

Friday, July 3, 2015

the blind man's stick


How Things Shape the Mind: A Theory of Material Engagement, by Lambros Malafouris, is a maddening but also fascinating book that is seriously helping me to think through some of the issues that concern me. Malafouris wants to argue that the human mind is “embodied, extended, enacted, and distributed” — extensive rather than intensive in its fundamental character.

He starts his exploration wonderfully: by considering a thought-experiment that Maurice Merleau-Ponty first posited in his Phenomenology of Perception. Merleau-Ponty asks us to imagine a blind man navigating a city street with a cane. What is the relationship between that cane and the man’s perceptual apparatus? Or, as Gregory Bateson put it in Steps to an Ecology of Mind,

Consider a blind man with a stick. Where does the blind man's self begin? At the tip of the stick? At the handle of the stick? Or at some point halfway up the stick? These questions are nonsense, because the stick is a pathway along which differences are transmitted under transformation, so that to draw a delimiting line across this pathway is to cut off a part of the systemic circuit which determines the blind man's locomotion.

(Bateson does not mention and probably was not aware of Merleau-Ponty.) For Malafouris the example of the blind man’s cane suggests that “what is outside the head may not necessarily be outside the mind.... I see no compelling reason why the study of the mind should stop at the skin or at the skull. It would, I suggest, be more productive to explore the hypothesis that human intelligence ‘spreads out’ beyond the skin into culture and the material world.” Moreover, things in the material world embody intentions and purposes — Malafouris thinks they actually have intentions and purposes, a view I think is misleading and sloppy — and these come to be part of the mind: they don't just influence it, they help constitute it.

Malafouris again:

I believe this example provides one of the best diachronic exemplars of what I call the gray zone of material engagement, i.e., the zone in which brains, bodies, and things conflate, mutually catalyzing and constituting one another. Mind, as the anthropologist Gregory Bateson pointed out, “is not limited by the skin,” and that is why Bateson was able to recognize the stick as a “pathway” instead of a boundary. Differentiating between “inside” and “outside” makes no real sense for the blind man. As Bateson notes, “the mental characteristics of the system are immanent, not in some part, but in the system as a whole.”

If we were to take this model seriously, then we would need to narrate the rise of modernity differently than we’ve been narrating it — proceeding in a wholly different manner than the three major stories I mentioned in my previous post. Among other things, we’d need to be ready to see the Oppenheimer Principle as having a far stronger motive role in history than is typical.

When I talk this way, some people tell me that they think I'm falling into technological determinism. Not so. Rather, it's a matter of taking with proper seriousness the power that some technologies have to shape culture. And that's not because they think or want, nor because we are their slaves. Rather, people make them for certain purposes, and either those makers themselves have socio-political power or the technologies fall into the hands of people who have socio-political power, so that the technologies are put to work in society. We then have the option to accept the defaults or undertake the difficult challenge of hacking the inherited tools — bending them in a direction unanticipated and unwanted by those who deployed them.

To write the technological history of modernity is to investigate how our predecessors have received the technologies handed to them, or used upon them, by the powerful; and also, perhaps, to investigate how countercultural tech has risen up from below to break up the one-way flow of power. These are things worth knowing for anyone who is uncomfortable with the dominant paradigm we live under now.

Wednesday, July 1, 2015

my big fat intellectual project

If there is any one general topic that has preoccupied me in the last decade, it’s ... well, it’s hard to put in a phrase. Let’s try this: The ways that technocratic modernity has changed the possibilities for religious belief, and the understanding of those changes that we get from studying the literature that has been attentive to them. But literature has not been merely an observer of these vast seismic tremors; it has been a participant, insofar as literature has been, for many, the chief means by which a disenchanted world can be re-enchanted — but not fully — and by which buffered selves can become porous again — but not wholly. There are powerful literary responses to technocratic modernity that serve simultaneously as case studies (what it’s like to be modern) and diagnoses (what’s to be done about being modern).

I have not chosen to write a book about all this, but rather to explore it in a series of essays. The two key ones, the ones that form a kind of presentational diptych for my thoughts, are “Fantasy and the Buffered Self”, which appeared here in The New Atlantis last year, and “The Witness of Literature: A Genealogical Sketch”, which has just appeared in The Hedgehog Review.

These essays offer the fullest laying-out of the history as I understand it to date, but there are a few others in which I have elaborated some of the key ideas in more detail: see this essay on Thomas Pynchon, this one on Walker Percy, this one on Iain M. Banks, and this one on Iain Sinclair. Some of these writers are religious, some are not, some are ambivalent or ambiguous; all of them are deeply concerned with modernity and its real or imagined alternatives, especially those which seem to connect us with what used to be called the transcendent.

These recent posts of mine on what I’m calling the technological history of modernity are part of the same overarching project — a way to understand more deeply and more broadly where we are and how we got here. My reflections on these matters will continue, probably in one form or another, for the rest of my life.

the three big stories of modernity

So far there have been three widely influential stories about the rise of modernity: the Emancipatory, the Protestant, and the Neo-Thomist. The Emancipatory account argues that modernity is fundamentally about the use of rediscovered classical learning, especially the Skeptics and Epicureans in their literary and philosophical modes, to liberate European Man from bondage to a power-hungry church and religious superstition. The Protestant account argues that modernity marks the moment when rediscovered biblical languages reconnected people with the authentic Gospel of Jesus Christ, obscured for many centuries by those same power-hungry priests and by the obscurantist pedantries of Scholastic philosophy. The Neo-Thomist account argues that what the others portray as liberation or deliverance was instead a tragedy, an unwarranted rebellion against a church that, while flawed, had managed to achieve by the high Middle Ages a unity of thought, feeling, and action — manifest in the poetry of Dante, the philosophy of Thomas Aquinas, and the great cathedrals of the era — that gave great aid, comfort, and understanding to generations of people, the high and the low alike.

The Neo-Thomists agree with the Protestants in rejecting the Emancipators' irreligion and false, truncated "humanism." The Protestants join the Emancipators in condemning the priestcraft, superstition, and hostility to progress of the Neo-Thomists. The Neo-Thomists and the Emancipators share the belief that the Protestants are neither fish nor fowl, neither religious nor secular.

All of these accounts began five hundred years ago, and all survive today, in popular and in scholarly forms. The Protestant account undergirds the massive studies of Jesus and Paul recently produced by N. T. Wright; the Neo-Thomist account (which was articulated most fully in the early twentieth century by Jacques Maritain and Etienne Gilson) continues in the work of scholars as varied as the English Radical Orthodoxy crowd and Catholic scholars such as Brad Gregory; a classic version of the Emancipatory account, Stephen Greenblatt's The Swerve, recently received both the Pulitzer Prize and the National Book Award.

There may seem to be little that all three have in common, but in fact all are committed to a single governing idea, one stated seventy years ago by an influential Neo-Thomist, Richard Weaver of the University of Chicago: Ideas Have Consequences. But we can present their shared convictions with greater specificity through a twofold expansion: (a) philosophical and theological ideas (b) that emerged half a millennium ago are the most vital ones for who we are in the West today. That is, all these narrators of modernity see our own age as one in which the consequences of 500-year-old debates conducted by philosophers and theologians are still being played out.

I think all of these narratives are wrong. They are wrong because they are the product of scholars in universities who overrate the historical importance and influence of other scholars in universities, and because they neglect ideas that connect more directly with the material world. All of these grands récits should be set aside, and they should not immediately be replaced with others, but with more particular, less sweeping, and more technologically oriented stories. The technologies that Marshall McLuhan called "the extensions of Man" are infinitely more important for Man's story, for good and for ill, than the debates of the schoolmen and interpreters of the Bible. Instead of grand narratives of the emergence of The Modern we need something far more plural: technological histories of modernity.

It is not my purpose here to supply such histories: that would be a vast undertaking indeed. The closest analogue to what I have in mind is perhaps the 27-book series Science and Civilisation in China (1954–2008), initiated and for several decades edited by Joseph Needham; or perhaps, also on a massive scale, Lynn Thorndike's A History of Magic and Experimental Science (8 volumes, 1923–58) — Thorndike’s project actually being a part of the story I think needs to be told, though it’s outdated now. Other pieces of the technological history of modernity already exist, of course: in the thriving discipline of book history, in various economic and social histories, in books like A Pattern Language and Paul Starr’s The Creation of the Media and Roy Porter’s The Greatest Benefit to Mankind.

Had Porter not died prematurely he would have been the person best suited to telling the whole story, though it’s too big for any one person to tell extremely well. But it needs to be told: we need a complex, multifaceted, materially-oriented account of how modernity arose and developed, starting with the later Middle Ages. The three big stories, with their overemphasis on theological and philosophical ideas and inattentiveness to economics and technology, have reigned long enough — more than long enough.

Monday, June 29, 2015

from coal to pixels


This is the Widows Creek power plant on the Tennessee River in Alabama, soon to become a Google data center. Or Google will use the site, anyway — I'm not sure about the future of the buildings. Big chunks of riverfront land are highly desirable to any company that processes a lot of data, because the water can be circulated through the center to help cool the machines that we overheat with photos and videos.

But there are enormous coal plants throughout America that can't be so readily repurposed, and the creativity devoted to remaking them is quite remarkable: here's an MIT Technology Review post on the subject.

Uber, algorithms, and trust


I encourage you to read Adam Greenfield’s analysis of Uber and its core values — it’s brilliant.

I find myself especially interested in the section in which Greenfield explores this foundational belief: “Interpersonal exchanges are more appropriately mediated by algorithms than by one’s own competence.” It’s a long section, so these excerpts will be pretty long too:

Like other contemporary services, Uber outsources judgments of this type to a trust mechanic: at the conclusion of every trip, passengers are asked to explicitly rate their driver. These ratings are averaged into a score that is made visible to users in the application interface: “John (4.9 stars) will pick you up in 2 minutes.” The implicit belief is that reputation can be quantified and distilled to a single salient metric, and that this metric can be acted upon objectively....

What riders are not told by Uber — though, in this age of ubiquitous peer-to-peer media, it is becoming evident to many that this has in fact been the case for some time — is that they too are rated by drivers, on a similar five-point scale. This rating, too, is not without consequence. Drivers have a certain degree of discretion in choosing to accept or deny ride requests, and to judge from publicly-accessible online conversations, many simply refuse to pick up riders with scores below a certain threshold, typically in the high 3’s.

This is strongly reminiscent of the process that I have elsewhere called “differential permissioning,” in which physical access to everyday spaces and functions becomes ever-more widely apportioned on the basis of such computational scores, by direct analogy with the access control paradigm prevalent in the information security community. Such determinations are opaque to those affected, while those denied access are offered few or no effective means of recourse. For prospective Uber patrons, differential permissioning means that they can be blackballed, and never know why....

And here’s the key point:

All such measures stumble in their bizarre insistence that trust can be distilled to a unitary value. This belies the common-sense understanding that reputation is a contingent and relational thing — that actions a given audience may regard as markers of reliability are unlikely to read that way to all potential audiences. More broadly, it also means that Uber constructs the development of trust between driver and passenger as a circumstance in which algorithmic determinations should supplant rather than rely upon (let alone strengthen) our existing competences for situational awareness, negotiation and the detection of verbal and nonverbal social cues.
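To make the mechanics concrete, here is a minimal sketch of the pattern Greenfield describes: a unitary trust metric plus threshold-based differential permissioning. This is purely illustrative; the function names and the exact cutoff are my assumptions, not Uber's actual system.

```python
# Illustrative sketch only: a unitary trust metric with threshold-based
# "differential permissioning" of the kind Greenfield describes. The names
# and the exact cutoff are assumptions, not Uber's real system.

def unitary_score(ratings):
    """Distill a whole rating history into one number (the "4.9 stars")."""
    return sum(ratings) / len(ratings) if ratings else None

def driver_accepts(rider_ratings, cutoff=3.8):
    """Silently refuse riders whose score falls below the cutoff; the
    rider is simply never picked up, and never learns why."""
    score = unitary_score(rider_ratings)
    return score is None or score >= cutoff

print(driver_accepts([5, 5, 4, 5]))  # True: average 4.75
print(driver_accepts([4, 3, 3, 4]))  # False: average 3.5, an opaque refusal
```

Whatever reputation actually is, by the time it reaches the decision point it has been flattened to one float and one comparison.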

Contrast this model to that of MaraMoja Transport, a new company in Nairobi that matches drivers with riders on the basis of personal trust. Users of MaraMoja compare experiences with those of their friends and acquaintances: if someone you know well and like has had a good experience with a driver, then you can feel pretty confident that you’ll have a good experience too. But of course some of your friends will have higher risk tolerances than others; some will prefer speed to friendliness, others safety above all... It’s a kind of multi-dimensional sliding scale, in which you’re not just handed a single number but get the chance to consider and weigh multiple factors.
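For contrast, here is a sketch of a relational, multi-factor model in the spirit of MaraMoja. Again, this is hypothetical: the factor names, weights, and data are invented for illustration, not taken from MaraMoja's actual algorithm.

```python
# Hypothetical sketch of a relational, multi-factor trust model, in the
# spirit of MaraMoja but NOT its actual algorithm. Factor names, weights,
# and data are invented for illustration.

def relational_score(driver_ratings, my_circle, my_weights):
    """Average a driver's ratings, counting only raters I actually know,
    with each factor weighted by my own preferences."""
    trusted = [r for r in driver_ratings if r["rater"] in my_circle]
    if not trusted:
        return None  # no one I trust has ridden with this driver yet
    return sum(
        sum(my_weights[f] * r[f] for f in my_weights) for r in trusted
    ) / len(trusted)

ratings = [
    {"rater": "Amina", "speed": 5, "friendliness": 3, "safety": 5},
    {"rater": "Kofi",  "speed": 4, "friendliness": 5, "safety": 4},
]
# A cautious rider weights safety heavily; a hurried one would weight speed.
print(relational_score(ratings, {"Amina", "Kofi"},
                       {"speed": 0.2, "friendliness": 0.2, "safety": 0.6}))
# 4.4 on this rider's scale; a different rider gets a different number.
```

The point of the contrast: the score is no longer a property of the driver alone but of the relation between driver and rider, which is just what Greenfield says a unitary metric cannot express.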

MaraMoja also rejects Uber’s infamous surge-pricing model in favor of a fixed price based on journey length. So, all in all, like Uber — but human and ethical.

Thursday, June 25, 2015

a parable

John Martin, Pandemonium (1841)

In Milton's Paradise Lost, almost as soon as the rebel angels crash to the floor of Hell they begin thinking about how to alter their environment. They design and construct the great city of Pandemonium, in the coffeeshops of which they debate theology and philosophy.

Having built out their immediate environment, they look for new opportunities elsewhere, and construct a great bridge between their realm and Earth, so that they may pass back and forth, sharing with the inhabitants of Earth their wisdom. And perhaps such intercourse is beneficial to the devils as well.

Meanwhile, their leader Satan discovers that he can change his shape: through the exercise of a kind of spiritual biotechnology, a cosmetic surgery activated by the will alone, he can take the appearance of a lesser angel. Later he assumes the form of a cormorant; he is found "squat like a toad" at the ear of a woman, whispering dreams to her. Eventually it is the form of a serpent that he assumes. He does not seem to notice that he is always working his way down the Great Chain of Being, from beings of greater dignity and complexity to those of less. But what he does discover — though only because someone points it out — is that when he appears in his own form he is noticeably less beautiful than he had been when, named Lucifer, Son of the Morning, he had drawn near to the throne of God.

With the encouragement and support of his followers, he shares their vision of new possibility with the two human residents of Earth, who are living in a simple garden, working with their hands, and have left their appearance wholly unmodified, not even wearing clothing. Once they have been brought around to Satan’s way of thinking, the first technologies they employ are to make coverings for their bodies — to alter, though in a rudimentary way, their appearance, to make themselves seem rather different than they are.

When Satan returns to Pandemonium, crossing the bridge that had been constructed while he was at work, and announces his successful imparting to the strangers of the values of his community, he expects great applause. But what he hears is the hissing of the snakes his colleagues have been transformed into. From this time forward they will have no hands with which to make, no legs with which to walk, no voices with which to speak the words of possibility and otherness and transformation.

Wednesday, June 24, 2015

on reading books to children


The thing about making the same joke over and over and over again is that after a while it becomes pretty clear to everyone concerned that you're not joking. Did any of you people ever notice that your parents read to you without needing to tell the world how annoying it was? Many of the elementary duties of life are not especially pleasant, so just get over yourself and put a sock in it.

And maybe repetition of such duties is not an enemy of the good life, but an intrinsic part of it. To set yourself straight, read Chesterton, who has precisely the right attitude about this:

The sun rises every morning. I do not rise every morning; but the variation is due not to my activity, but to my inaction. Now, to put the matter in a popular phrase, it might be true that the sun rises regularly because he never gets tired of rising. His routine might be due, not to a lifelessness, but to a rush of life. The thing I mean can be seen, for instance, in children, when they find some game or joke that they specially enjoy. A child kicks his legs rhythmically through excess, not absence, of life. Because children have abounding vitality, because they are in spirit fierce and free, therefore they want things repeated and unchanged. They always say, ‘Do it again’; and the grown-up person does it again until he is nearly dead. For grown-up people are not strong enough to exult in monotony. But perhaps God is strong enough to exult in monotony. It is possible that God says every morning, ‘Do it again’ to the sun; and every evening, ‘Do it again’ to the moon. It may not be automatic necessity that makes all daisies alike; it may be that God makes every daisy separately, but has never got tired of making them. It may be that He has the eternal appetite of infancy; for we have sinned and grown old, and our Father is younger than we.

My son is 22 now. What I wouldn't give to be able to go back to the time when I read to him every day. (As long as I don't have to lose all the really good things about having a grown-up young man for a son.) (I guess what I'm really saying is that I have loved every stage of being a parent and wouldn't willingly forego any of it.)

more on the THM

So I continue to think about this whole technological history of modernity thing, about which I have some announcements.

1) A few days ago I thought Hey, I’m ready to write an essay about this, and within 24 hours thought No. I am not even close to being ready to write about this — if indeed I ever will be. It’s all so big and complex, and I am feeling thoroughly inadequate to the task. So I am going to continue to work through the ideas in a ramshackle and incoherent way here on this blog, for the five people who read it and for my own sanity’s sake.

2) I’m adding a “THM” tag to this post and to the previous ones, and will continue to use that tag for future entries.

3) I will in the next couple of weeks have several posts on stuff I’ve been reading lately that contribute to this project, or maybe I should say “project.” One hint of where I’m headed with at least some of this stuff: next year my colleague Jonathan Tran and I will be team-teaching a graduate course called “Bruno Latour and Theology.” There may be comments on that too, when the time comes.

Tuesday, June 23, 2015

art as industrial lubricant

Holy cow, does Nick Carr pin this one to the wall. Google says, "At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts, including the folks who created Songza, crafts each station song by song so you don’t have to."

Nick replies:

This is the democratization of the Muzak philosophy. Music becomes an input, a factor of production. Listening to music is not itself an “activity” — music isn’t an end in itself — but rather an enhancer of other activities, each of which must be clearly demarcated....  

Once you accept that music is an input, a factor of production, you’ll naturally seek to minimize the cost and effort required to acquire the input. And since music is “context” rather than “core,” to borrow Geoff Moore’s famous categorization of business inputs, simple economics would dictate that you outsource the supply of music rather than invest personal resources — time, money, attention, passion — in supplying it yourself. You should, as Google suggests, look to a “team of music experts” to “craft” your musical inputs, “song by song,” so “you don’t have to.” To choose one’s own songs, or even to develop the personal taste in music required to choose one’s own songs, would be wasted labor, a distraction from the series of essential jobs that give structure and value to your days. 

Art is an industrial lubricant that, by reducing the friction from activities, makes for more productive lives.

If music be the lube of work, play on — and we'll be Getting Things Done.

Sunday, June 21, 2015

on sustainability

Makoko neighborhood, Lagos Lagoon

Ross Douthat writes:

It’s possible to believe that climate change is happening while doubting that it makes “the present world system ... certainly unsustainable,” as the pope suggests. Perhaps we’ll face a series of chronic but manageable problems instead; perhaps “radical change” can, in fact, be persistently postponed.

Indeed, perhaps our immediate future fits neither the dynamist nor the catastrophist framework.

We might have entered a kind of stagnationist position, a sustainable decadence, in which the issues Pope Francis identifies percolate without reaching a world-altering boil.

In that case, the deep critique our civilization deserves will have to be advanced without the threat of imminent destruction. The arguments in “Laudato Si’” will still resonate, but they will have to be structured around a different peril: Not a fear that the particular evils of our age can’t last, but the fear that actually, they can.

I think this is a very powerful response, but one that needs unpacking. The key terms are “sustainable” and “manageable,” and the key questions are “Sustainable for whom?” and “Manageable by whom?”

(Please note that what follows is written under the assumption that the standard predictions are right: that anthropogenic climate change exists and will continue, that temperatures and sea levels will rise, etc. If those predictions are wrong and the climate does not alter significantly, then “the present world system” will continue to function — unless rendered unsustainable for wholly other reasons.)

To write as Ross does here is to take a government’s-eye view of the matter — or perhaps a still higher-level view. One example: Rising sea levels will be neither sustainable nor manageable for poor people whose homes are drowned, and who will have to move inland, perhaps in some cases into refugee camps. But it is unlikely that these people will be able to stage a successful rebellion against the very political order that has left them in poverty. Resources will need to be diverted to manage them; but in the developed world that will probably be possible.

In poorer countries with less extensive political infrastructures, chaos could ensue. But those countries are typically not essential to the functioning of “the present world system,” and indeed, the people who run that system may find the resources of such countries easier to exploit when they become politically incoherent. Thus it’s not hard to imagine, as a long-term consequence of climate change, multinational corporations becoming ever more important and influential — a scenario imagined in some detail by Kim Stanley Robinson in his Mars Trilogy. In such an environment, “the present world system” might actually become more rather than less secure.

In light of these thoughts, it might be worthwhile to look at the whole paragraph in which the Pope deems the current order “unsustainable”:

On many concrete questions, the Church has no reason to offer a definitive opinion; she knows that honest debate must be encouraged among experts, while respecting divergent views. But we need only take a frank look at the facts to see that our common home is falling into serious disrepair. Hope would have us recognize that there is always a way out, that we can always redirect our steps, that we can always do something to solve our problems. Still, we can see signs that things are now reaching a breaking point, due to the rapid pace of change and degradation; these are evident in large-scale natural disasters as well as social and even financial crises, for the world’s problems cannot be analyzed or explained in isolation. There are regions now at high risk and, aside from all doomsday predictions, the present world system is certainly unsustainable from a number of points of view, for we have stopped thinking about the goals of human activity. “If we scan the regions of our planet, we immediately see that humanity has disappointed God’s expectations”.

The key phrase here is “from a number of points of view.” It might be that national governments remain stable, that the worldwide economic order continues in its present form, and yet the whole enterprise genuinely is unsustainable in ecological and moral terms — in terms of what damage to the earth and to human well-being the system inflicts. Devastation to the created order, of which humanity is a part, may prove to be politically sustainable, but it will be devastation nonetheless.

Saturday, June 20, 2015

more thoughts on Laudato Si'

Having made some preliminary comments on Pope Francis's new encyclical, I now want to develop more specific thoughts.

First, I would call attention to Francis's constant reference to the Earth as “our common home” — not a planet or even an environment, but home. All the economic questions he explores later in the encyclical are therefore grounded in the etymology of “economy”: the governance of the oikos, the household. Such domestic language is a powerful means of fighting the abstracting effects of any attempt to “think globally.” Francis seems to be saying that if you want to act globally, you should think locally: think of the earth as your home, one you share with others to whom you are accountable.

Remembering our responsibilities to the other members of our household is not something that we humans are good at, which is why Francis titles his third chapter “The Human Roots of the Ecological Crisis” — a subtle invocation (and rebuke) of Lynn White's famous essay “The Historical Roots of Our Ecologic Crisis”. White argues that Christians have historically used Genesis 1:28 — in which God gives to human beings “dominion” over the rest of creation — as a justification for exploitative abuse of the environment, and are therefore largely to blame for the current “ecologic crisis.” Francis implicitly counters White's claims by noting that thoughtless exploitation of “our common home,” including the other human beings with whom we share that home, is a universal human tendency, and that Christianity offers the means by which this might be corrected.

That means is, of course, Jesus Christ, whose example Francis discusses at the end of Chapter 2 — just before he turns to the task of (implicitly) answering Lynn White. Of Jesus he writes:

Jesus lived in full harmony with creation, and others were amazed: “What sort of man is this, that even the winds and the sea obey him?” (Mt 8:27). His appearance was not that of an ascetic set apart from the world, nor of an enemy to the pleasant things of life. Of himself he said: “The Son of Man came eating and drinking and they say, ‘Look, a glutton and a drunkard!’” (Mt 11:19). He was far removed from philosophies which despised the body, matter and the things of the world. Such unhealthy dualisms, nonetheless, left a mark on certain Christian thinkers in the course of history and disfigured the Gospel. Jesus worked with his hands, in daily contact with the matter created by God, to which he gave form by his craftsmanship. It is striking that most of his life was dedicated to this task in a simple life which awakened no admiration at all: “Is not this the carpenter, the son of Mary?” (Mk 6:3). In this way he sanctified human labour and endowed it with a special significance for our development.

Jesus loves and honors all of Creation: his rightly ordered love — manifested in how he treats other human beings as well as how he treats the rest of Creation — grounds and enables the true and proper dominion he possesses. Not just because he is the one “through whom all things were made” (John 1:3, Colossians 1:16), but also because of this right regard for the things that were made, “even the winds and the sea obey him.” And insofar as Christians have failed to imitate that right regard, they have “disfigured the Gospel.” Therefore the answer to “the ecological crisis” is not to set Christianity aside, but rather to acknowledge the ways we have disfigured the Gospel, and to return to Jesus once again as example as well as Lord.

For Francis, an understanding of who Jesus is and what he has done is intrinsic to what he calls “integral ecology.” In my last post I mentioned that this phrase clearly owes a debt to Jacques Maritain's “integral humanism,” which is driven by a similar logic: it is in Christ and only in Christ that we can become fully human, rightly related to God and our neighbor. Francis merely extends that argument: it is only in Christ that we can become rightly related to God, our neighbor, and “our common home.”

Writing in the New Yorker, Elizabeth Kolbert says that Laudato Si' “spares no one.” This is indeed true, but Kolbert doesn't mention that among those whom this encyclical seeks to convict are those who believe that our home can be rescued from its current misery without our first coming to know the God who has already known and loved us.

Thursday, June 18, 2015

first thoughts on Laudato Si'

1) The encyclical is noteworthy for its dialogical character. The word “dialogue” appears repeatedly, and Francis begins by situating his thoughts in conversation with (a) recent popes, (b) Patriarch Bartholomew, and (c) St. Francis of Assisi. Throughout the encyclical he cites several national conferences of bishops.

2) A key passage comes early (pp. 16-17): “Technology, which, linked to business interests, is presented as the only way of solving these problems, in fact proves incapable of seeing the mysterious network of relations between things and so sometimes solves one problem only to create others.” (My emphasis.) That there is such a mysterious network of relations is central to Franciscan spirituality, and this concept points to a wholly different understanding of “network” than our technocracy offers.

3) “The climate is a common good, belonging to all and meant for all.” It is therefore simply immoral to act in such a way as to generate changes in the climate that affect others — especially those who because of poverty cannot adjust or adapt. “Many of the poor live in areas particularly affected by phenomena related to warming, and their means of subsistence are largely dependent on natural reserves and ecosystemic services such as agriculture, fishing and forestry. They have no other financial activities or resources which can enable them to adapt to climate change or to face natural disasters, and their access to social services and protection is very limited” (p. 20).

4) There are few italicized phrases in the encyclical, but these are the ones I noticed — and they seem to me key to grasping the whole argument:

  • “access to safe drinkable water is a basic and universal human right, since it is essential to human survival and, as such, is a condition for the exercise of other human rights” (p. 23)
  • “they [the poor] are denied the right to a life consistent with their inalienable dignity” (p. 24)
  • “a true ecological approach always becomes a social approach; it must integrate questions of justice in debates on the environment, so as to hear both the cry of the earth and the cry of the poor” (p. 35)
  • “We must continue to be aware that, regarding climate change, there are differentiated responsibilities” (p. 38)
  • Quoting John Paul II: “God gave the earth to the whole human race for the sustenance of all its members, without excluding or favouring anyone” (p. 69)
  • “The basic problem goes even deeper: it is the way that humanity has taken up technology and its development according to an undifferentiated and one-dimensional paradigm” (p. 79)
  • “I suggest that we now consider some elements of an integral ecology, one which clearly respects its human and social dimensions” (p. 103)
  • “the development of a social group presupposes an historical process which takes place within a cultural context and demands the constant and active involvement of local people from within their proper culture” (p. 109)
  • “Interdependence obliges us to think of one world with a common plan” (p. 122)
  • St. Bonaventure “teaches us that each creature bears in itself a specifically Trinitarian structure, so real that it could be readily contemplated if only the human gaze were not so partial, dark and fragile” (p. 174)

5) Most of the early sections of the encyclical are not theological in their rhetoric or their orientation to the problems they address. In those sections, even when Francis is making points that seem to cry out for theological elaboration, he declines to do so. For example:

At the same time we can note the rise of a false or superficial ecology which bolsters complacency and a cheerful recklessness. As often occurs in periods of deep crisis which require bold decisions, we are tempted to think that what is happening is not entirely clear. Superficially, apart from a few obvious signs of pollution and deterioration, things do not look that serious, and the planet could continue as it is for some time. Such evasiveness serves as a licence to carrying on with our present lifestyles and models of production and consumption. This is the way human beings contrive to feed their self-destructive vices: trying not to see them, trying not to acknowledge them, delaying the important decisions and pretending that nothing will happen. (p. 43)

Christians have some distinctive and detailed explanations for why human beings act this way, but Francis saves reflection on those explanations for later. I understand why he does this: he is trying to establish grounds for dialogue. But I fear that these passages will be quoted and used without reference to the theological context provided later in the encyclical.

6) This is an especially beautiful and powerful passage, in which Francis tries to steer between the Scylla of “anthropocentrism” and the Charybdis of “biocentrism”:

This situation has led to a constant schizophrenia, wherein a technocracy which sees no intrinsic value in lesser beings coexists with the other extreme, which sees no special value in human beings. But one cannot prescind from humanity. There can be no renewal of our relationship with nature without a renewal of humanity itself. There can be no ecology without an adequate anthropology. When the human person is considered as simply one being among others, the product of chance or physical determinism, then “our overall sense of responsibility wanes”. A misguided anthropocentrism need not necessarily yield to “biocentrism”, for that would entail adding yet another imbalance, failing to solve present problems and adding new ones. Human beings cannot be expected to feel responsibility for the world unless, at the same time, their unique capacities of knowledge, will, freedom and responsibility are recognized and valued. (p. 88)

7) For those of us who hold to the “seamless garment” or “consistent life ethic,” it’s interesting to see an early quotation from Patriarch Bartholomew: “It is our humble conviction that the divine and the human meet in the slightest detail in the seamless garment of God’s creation, in the last speck of dust of our planet”. Though the phrase “seamless garment” does not appear again, the concept governs much of the encyclical. For instance:

Since everything is interrelated, concern for the protection of nature is also incompatible with the justification of abortion. How can we genuinely teach the importance of concern for other vulnerable beings, however troublesome or inconvenient they may be, if we fail to protect a human embryo, even when its presence is uncomfortable and creates difficulties? “If personal and social sensitivity towards the acceptance of the new life is lost, then other forms of acceptance that are valuable for society also wither away”. (pp. 89-90, quoting Benedict XVI)

The phrase “throwaway culture” appears five times in the encyclical, and Francis clearly means to indicate by that our habit of discarding anything — including other human beings — that does not seem to contribute to our happiness-of-the-moment.

8) The notion of “integral ecology” pays tribute to Jacques Maritain’s notion of “integral humanism”. For Maritain, any true humanism must incorporate the “vertical dimension” of our relationship with God; Francis is clearly saying, with a similar logic, that any valid (any whole and healthy) ecology or model of “creation care” must incorporate our relationships with one another and with God. Thus one cannot think of what’s good for the environment without also thinking of what’s good for human culture. Integral ecology is cultural as well as natural:

It is not a matter of tearing down and building new cities, supposedly more respectful of the environment yet not always more attractive to live in. Rather, there is a need to incorporate the history, culture and architecture of each place, thus preserving its original identity. Ecology, then, also involves protecting the cultural treasures of humanity in the broadest sense. More specifically, it calls for greater attention to local cultures when studying environmental problems, favouring a dialogue between scientific-technical language and the language of the people. Culture is more than what we have inherited from the past; it is also, and above all, a living, dynamic and participatory present reality, which cannot be excluded as we rethink the relationship between human beings and the environment.

9) A book frequently quoted in this encyclical is Romano Guardini’s The End of the Modern World. Pope Francis has long been interested in and influenced by Guardini, who was also a major influence on Benedict XVI. If I had my way, I’d spend the next couple of months preparing to teach a class in which this encyclical — a far richer work than I had expected it to be, and one that I hope will have lasting power — would be read alongside Guardini’s book, with both accompanied by repeated viewings of Mad Max: Fury Road. The class would be called “Who Killed the World?”

Sunday, June 14, 2015

organizing the sensorium


In his extraordinary book The Presence of the Word (1967), Walter Ong wrote,

Growing up, assimilating the wisdom of the past, is in great part learning how to organize the sensorium productively for intellectual purposes. Man’s sensory perceptions are abundant and overwhelming. He cannot attend to them all at once. In great part a given culture teaches him one or another way of productive specialization. It brings him to organize his sensorium by attending to some types of perception more than others, by making an issue of certain ones while relatively neglecting other ones. The sensorium is a fascinating focus for cultural studies. Given sufficient knowledge of the sensorium exploited within a specific culture, one could probably define the culture as a whole in virtually all its aspects.

The idea of organizing the sensorium productively for intellectual purposes is a very powerful one, and links the history of technology with the history of institutions. Consider, for instance, the way that medieval guilds were means of teaching people the use of particular technologies but also of ratifying their abilities to participate in the life of the guild community. Medieval universities worked in much the same way: texts were scarce and had to be cared for, so people were painstakingly initiated into their responsible use. The disputatio was at once a social ceremony and a demonstration of technical mastery — a mastery exercised through the disciplined use of sight, hearing, and speech, an organization of the sensorium embedded in a structure of social organization.

When Martin Luther came along and had the local printer print for his students a clean text of Paul’s letter to the Romans with wide margins and no commentary, he was initiating those students into a different technology and a correspondingly different model of social integration.

In light of these thoughts, the “technological history of modernity” that I have been calling for will also need to be sociological through and through. I’m getting in way over my head here, but I wonder if in trying to think about these technological/sociological connections I need to read John Levi Martin’s Social Structures, which Gabriel Rossman has described as “all about emergence and how fairly minor changes in the nature of social mechanisms can create quite different macro social structures.” And Rossman himself has written about “the diffusion of legitimacy”: how “innovations – concrete products and behaviors – [are] nested within institutions – abstract cognitive schema for evaluating the legitimacy of innovations. In effect, social actors assess the legitimacy of innovations vis-a-vis conformity to institutions such that a sufficiently legitimate innovation may be adopted without direct reference to the behavior of peers.” (Hey Gabriel: Why do you refer to institutions as “abstract cognitive schema” rather than as social organizations with significant physical presences in the world?)
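Rossman's distinction is concrete enough to sketch. Below is a toy model (my own illustration, not anything from his work) contrasting peer-based diffusion, where an actor adopts once enough others have, with legitimacy-based adoption, where each actor checks the innovation against an institutional schema without reference to peers. All the parameters are arbitrary; the point is only that small changes in the mechanism yield quite different macro outcomes.

```python
import random

# A toy contrast between two adoption mechanisms: my own illustration of
# Rossman's distinction, not his model. All parameters are arbitrary.

random.seed(0)
N = 100
peer_thresholds = [random.random() for _ in range(N)]  # share of adopters each actor needs to see
conformity_bars = [random.random() for _ in range(N)]  # each actor's standard of legitimacy
legitimacy = 0.7  # how well the innovation conforms to the institutional schema

# Mechanism 1: peer diffusion. Adoption spreads only through prior adopters.
adopted = [i < 5 for i in range(N)]  # a handful of seed adopters
for _ in range(20):
    share = sum(adopted) / N
    adopted = [a or peer_thresholds[i] < share for i, a in enumerate(adopted)]
print("peer diffusion:", sum(adopted), "adopters")

# Mechanism 2: legitimacy. Each actor checks the institution, not its peers.
adopted2 = [legitimacy > bar for bar in conformity_bars]
print("legitimacy-based:", sum(adopted2), "adopters")
```

Run it a few times with different seeds and thresholds: the peer mechanism either cascades or stalls, while the legitimacy mechanism tracks the institutional schema directly, which is roughly the difference Rossman is after.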

Especially noteworthy in this regard are the connections between emergent behavior in social insects and internet protocols, as though there’s an underlying logic of emergence — of small acts with large consequences — shared by many different animals, including human animals with their digital machines. And these are political as well as biological and technological questions: consider Adam Roberts’s extraordinary novel New Model Army, which imagines how the conjunction of anarchist theory and secure social media tech might produce a new lifeform, what I’ve called a “hivemind singularity.”

Perhaps apparently insignificant, and merely local, adjustments in how people in a given institution strive to “organize the sensorium” can have major consequences down the line. (Not the “butterfly effect” but the “Luther’s print shop effect.”) And larger changes, like the “haptic simplification” of interacting with glass screens often to the exclusion of other forms of tactile exploration? And the ways that those screens increasingly serve as the standard user interface of automated procedures? How can those consequences not be massive?

There’s too damn much that needs to be known about all this, and I know the tiniest fraction of it. But a genuine technological history of modernity will be alert to emergent effects, social structures, and the relation between technical expertise and communal belonging.

Tuesday, June 9, 2015

a clarification

A quick note: in response to my previous post several people have emailed or tweeted to recommend Jacques Ellul or Lewis Mumford or George Grant or Neil Postman. All of those are valuable writers and thinkers, but none of them do anything like what I was asking for in that post. They provide a philosophical or theological critique of technocratic society, but that’s not a technological history of modernity. If you look at the books I recommend in that post, all of them are deeply engaged with the creation, implementation, and consequences of specific technologies — and that’s what I think we need more of, though in a larger frame, covering the whole of modernity from the 16th century to today. A deeply material history — a history of the pressure of made things on human behavior; something like Siegfried Giedion’s Mechanization Takes Command but theologically informed — or at least infused with a stronger sense of human telos than Giedion has; a serious critique of technological modernity that’s not afraid to get grease on its hands.

Monday, June 8, 2015

the technological history of modernity

I’m going to try to piece a few things together here, so hang on for the ride —

I have been reading and enjoying Matthew Crawford’s The World Beyond Your Head, and I’ll have more to say about it here later. I strongly recommend it to you. But today I’m going to talk about something in it I disagree with. On the book’s first page Crawford writes of “profound cultural changes” that have

a certain coherence to them, an arc — one that begins in the Enlightenment, accelerates in the twentieth century, and is perhaps culminating now. Though digital technologies certainly contribute to it, our current crisis of attention is the coming to fruition of a picture of the human being that was offered some centuries ago.

With this idea in mind, Crawford later in the book gives us a chapter called “A Brief History of Freedom” that spells out the philosophical ideas that, he believes, paved the way for the emergence of a culture in which lengthy and patient attentiveness is all but impossible.

Since attention is something I think about a lot — and have written about here and elsewhere — I’m deeply sympathetic to Crawford’s general critique. But I am not persuaded by his history. In fact, I have come to believe — as I have also written here — that the way Crawford tells the history has things backwards, in much the same way that the neo-Thomist interpretation of history gets things backwards. I don't think we have our current attention economy because of Kant, any more than we have Moralistic Therapeutic Deism because of Ockham and Duns Scotus.

To make the kind of argument that Crawford and the neo-Thomists make is to take philosophy too much at its own self-valuation. Philosophy likes to see itself as operating largely independently of culture and society and setting the terms on which people will later think. But I believe that philosophy is far more a product of existing social and economic structures than it is an independent entity. We don't have the modern attention economy because of Kant; rather, we got Kant because of certain features of technological modernity — especially those involving printing, publishing, and international postal delivery — that also have produced our current attention economy, which, I believe, would work just as it does if Kant had never lived. What I call the Oppenheimer Principle — “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you've had your technical success” — has worked far more powerfully to shape our world than any of our master thinkers. Indeed, those thinkers are, in ways we scarcely understand, themselves the product of the Oppenheimer Principle.

So while it is true that, as I said in one of those earlier posts, “those of us who are seriously seeking alternatives to the typical modes of living in late modernity need a much, much better philosophy and theology of technology,” we also need better history — what I think I want to call a technological history of modernity.

To be sure, that already exists in bits and pieces — indeed, in fairly large chunks. Some existing works that might help us re-orient our thinking towards a better account of how we got to Us:


Those of us who — out of theological conviction or out of some other conviction — have some serious doubts about the turn that modernity has taken have been far too neglectful of this material, economic, and technological history. We need to remedy that deficiency. And someone needs to write a really comprehensive and ambitious technological history of modernity. I don't think I’m up to that challenge, but if no one steps up to the plate....

My current book project has convinced me of the importance of these issues. All of the figures I am writing about there understood that they could not think of World War II simply as a conflict between the Allies and the Axis. There were, rather, serious questions to be asked about the emerging character of the Western democratic societies. On some level each of these figures intuited or explicitly argued that if the Allies won the war simply because of their technological superiority — and then, precisely because of that success, allowed their societies to become purely technocratic, ruled by the military-industrial complex — their victory would become largely a hollow one. Each of them sees the creative renewal of some form of Christian humanism as a necessary counterbalance to technocracy.

I agree with them, and think that at the present moment our world needs — desperately — the kind of sympathetic and humane yet strong critique of technocracy they tried to offer. But such a critique can only be valuable if it grows from a deep understanding — an attentive understanding — of both the present moment, in all its complexities, and the present moment’s antecedents, in all their complexities. In the coming months, as I continue to work on my book, I’ll be thinking about how that technological history of modernity might be told, and will share some thoughts here. That will probably mean posting less often but more substantively; we’ll see. The idea is to lay the foundation for future work. Please stay tuned.

Thursday, June 4, 2015

the abolition of sadness

I want to follow up on a recent post, which considered, among other things, the ways that our investment of energy, attention, and money in communications technologies might constrain innovation in other areas. In light of that argument, consider Katie Roiphe’s answer to the question “Which Contemporary Habits Will Be Most Unthinkable 100 Years From Now?”:

Sadness. Drug companies will have developed an over-the-counter, side-effect-free pill (or patch or lotion) that combats the feeling. People will swallow this pill casually, in the same way they take Advil, when they feel the first glimmers of melancholy. It will have no stigma and will be as common and unexamined as the Band‑Aids and Tylenol in every medicine cabinet.

So suppose this happens. What effect will that have on innovation and creativity, in the arts and in humanistic scholarship as well as in the sciences, especially medicine? What do we profit if we abolish sadness without abolishing the things that make us sad?

Tuesday, June 2, 2015

mother (and other) tongues

Languages

This map of languages around the world is messed up in several ways, some of them easily avoidable, some not so much. But the most notable oddities — the complete neglect of African languages, the absence of the Indian subcontinent from the English bubble — are a product of that curious concept “first language.” If you live in Nairobi your first language, in some sense, might be Gĩkũyũ, but you may also speak English or Swahili far, far more often — and maybe more fluently as well. Similarly, for many millions of people in India and Pakistan it just doesn’t make practical sense to think of English as their second or third language. It’s as “first” as Hindi or Urdu. 

The great polymathic scholar George Steiner, in his masterwork After Babel, has written of how deeply people believe in the idea of a first language, a “mother tongue,” and how resistant they can be to the idea that one can be truly multilingual — multilingual all the way down. I’ll leave you with a fascinating passage on this: 

I have no recollection whatever of a first language. So far as I am aware, I possess equal currency in English, French, and German. What I can speak, write, or read of other languages has come later and retains a ‘feel’ of conscious acquisition. But I experience my first three tongues as perfectly equivalent centres of myself. I speak and I write them with indistinguishable ease. Tests made of my ability to perform rapid routine calculations in them have shown no significant variations of speed or accuracy. I dream with equal verbal density and linguistic-symbolic provocation in all three. The only difference is that the idiom of the dream follows, more often than not, on the language I have been using during the day (but I have repeatedly had intense French- or English-language dreams while being in a German-speaking milieu, as well as the reverse). Attempts to locate a ‘first language’ under hypnosis have failed. The banal outcome was that I responded in the language of the hypnotist. In the course of a road accident, while my car was being hurled across oncoming traffic, I apparently shouted a phrase or sentence of some length. My wife does not remember in what language. But even such a shock-test of linguistic primacy may prove nothing. The hypothesis that extreme stress will trigger one’s fundamental or bedrock speech assumes, in the multilingual case, that such a speech exists. The cry might have come, quite simply, in the language I happened to have used the instant before, or in English because that is the language I share with my wife.

Monday, June 1, 2015

more on the "Californian ideology"


A brief follow-up to a recent post ... Here's an interesting article by Samuel Loncar called "The Vibrant Religious Life of Silicon Valley, and Why It’s Killing the Economy." A key passage:

The “religion of technology” is not itself new. The late historian David Noble, in his book by that title, traced its origins in a particular strain of Christianity which saw technology as means of reversing the effects of the Fall. What is new, and perhaps alarming, is that the most influential sector of the economy is awash in this sea of faith, and that its ethos in Silicon Valley is particularly unfriendly to human life as the middle classes know it. The general optimism about divinization in Silicon Valley motivates a widespread (though by no means universal) disregard for, and even hostility toward, material culture: you know, things like bodies (which Silva calls “skin bags”) and jobs which involve them.

The very fact that Silicon Valley has incubated this new religious culture unbeknownst to most of the outside world suggests how insulated it is. On the one hand, five minutes spent listening to the CEO of Google or some other tech giant will show you how differently people in Silicon Valley think from the rest of the country — listen carefully and you realize most of them simply assume there will be massive unemployment in the coming decades — and how unselfconscious most are of their differences. On the other hand, listen to mainstream East Coast journalists and intellectuals, and you would think a kind of ho-hum secularism, completely disinterested in becoming gods, is still the uncontested norm among modern elites.

If religion makes a comeback, but this is the religion that comes back....

More on this later, but for now just one brief note about bodies as "skin bags": in the opening scene of Mad Max: Fury Road, Max is captured and branded and used to provide blood transfusions to an ill War Boy named Nux. Nux calls Max "my blood bag." Hey, it's only a body.

Saturday, May 30, 2015

Tav's Mistake


Neal Stephenson's Seveneves is a typical Neal Stephenson novel: expansive and nearly constantly geeking out over something. If a character in one of Stephenson's SF novels is about to get into a spacesuit, you know that'll take five pages because Stephenson will want to tell you about every single element of the suit's construction. If a spacecraft needs to rendezvous with a comet, and must get from one orbital plane to another, Stephenson will need to explain every decision and the math underlying it, even if that takes fifty pages — or more. If you like that kind of thing, Seveneves will be the kind of thing you like.

I don't want to write a review of the novel here, beyond what I've just said; instead, I want to call attention to one passage. Setting some of the context for it is going to take a moment, though, so bear with me. (If you want more details, here's a good review.)

The novel begins with this sentence: “The moon blew up without warning and for no apparent reason.” After the moon breaks into fragments, and the fragments start bumping into each other and breaking into ever smaller fragments, scientists on earth figure out that at a certain point those fragments will become a vast cloud (the White Sky) and then, a day or two later, will fall in flames to earth — so many, and with such devastating force, that the whole earth will become uninhabitable: all living things will die. This event gets named the Hard Rain, and it will continue for millennia. Humanity has only two years to prepare: the preparations involve sending a few people from all the world's nations up to the International Space Station, which is frantically being expanded to house them. Also sent up is a kind of library of genetic material, in the hope that the diversity of the human race can be replicated at some point in the distant future.

The residents of the ISS become the reality-TV stars for those on earth doomed to die: every Facebook post and tweet scrutinized, every conversation (even the most private) recorded and played back endlessly. Only a handful of these people survive, and as the Hard Rain continues on a devastated earth, their descendants very slowly rebuild civilization — focusing all of their intellectual resources on the vast problems of engineering with which they're faced as a consequence of the deeply unnatural condition of living in space. This means that, thousands of years after the Hard Rain begins, as they are living in an environment of astonishing technological complexity, they don't have much in the way of social media.

In the decades before Zero [the day the moon broke apart], the Old Earthers had focused their intelligence on the small and the soft, not the big and the hard, and built a civilization that was puny and crumbling where physical infrastructure was concerned, but astonishingly sophisticated when it came to networked communications and software. The density with which they’d been able to pack transistors onto chips still had not been matched by any fabrication plant now in existence. Their devices could hold more data than anything you could buy today. Their ability to communicate through all sorts of wireless schemes was only now being matched — and that only in densely populated, affluent places like the Great Chain.

But in the intervening centuries, those early textual and visual and aural records of the survivors had been recovered and turned into The Epic — the space-dwelling humans’ equivalent of the Mahabharata, a kind of constant background to the culture, something known to everyone. And when the expanding human culture divided into two distinct groups, the Red and the Blue, the second of those groups became especially attentive to one of those pioneers, a journalist named Tavistock Prowse. “Blue, for its part, had made a conscious decision not to repeat what was known as Tav’s Mistake.”

Fair or not, Tavistock Prowse would forever be saddled with blame for having allowed his use of high-frequency social media tools to get the better of his higher faculties. The actions that he had taken at the beginning of the White Sky, when he had fired off a scathing blog post about the loss of the Human Genetic Archive, and his highly critical and alarmist coverage of the Ymir expedition, had been analyzed to death by subsequent historians. Tav had not realized, or perhaps hadn’t considered the implications of the fact, that while writing those blog posts he was being watched and recorded from three different camera angles. This had later made it possible for historians to graph his blink rate, track the wanderings of his eyes around the screen of his laptop, look over his shoulder at the windows that had been open on his screen while he was blogging, and draw up pie charts showing how he had divided his time between playing games, texting friends, browsing Spacebook, watching pornography, eating, drinking, and actually writing his blog. The statistics tended not to paint a very flattering picture. The fact that the blog posts in question had (according to further such analyses) played a seminal role in the Break, and the departure of the Swarm, only focused more obloquy upon the poor man.

But — and this is key to Stephenson’s shrewd point — Tav is a pretty average guy, in the context of the social-media world all of us inhabit:

Anyone who bothered to learn the history of the developed world in the years just before Zero understood perfectly well that Tavistock Prowse had been squarely in the middle of the normal range, as far as his social media habits and attention span had been concerned. But nevertheless, Blues called it Tav’s Mistake. They didn’t want to make it again. Any efforts made by modern consumer-goods manufacturers to produce the kinds of devices and apps that had disordered the brain of Tav were met with the same instinctive pushback as Victorian clergy might have directed against the inventor of a masturbation machine.

So the priorities of space-dwelling humanity are established first by sheer necessity: when you’re trying to create and maintain the technologies necessary to keep people alive in space, there’s no time for working on social apps. But it’s in light of that experience that the Spacers grow incredulous at a society that lets its infrastructure deteriorate and its medical research go underfunded in order to devote its resources of energy, attention, technological innovation, and money to Snapchat, YikYak, and Tinder.

Stephenson has been talking about this for a while now. He calls it “Innovation Starvation”:

My life span encompasses the era when the United States of America was capable of launching human beings into space. Some of my earliest memories are of sitting on a braided rug before a hulking black-and-white television, watching the early Gemini missions. In the summer of 2011, at the age of fifty-one — not even old — I watched on a flatscreen as the last space shuttle lifted off the pad. I have followed the dwindling of the space program with sadness, even bitterness. Where's my donut-shaped space station? Where's my ticket to Mars? Until recently, though, I have kept my feelings to myself. Space exploration has always had its detractors. To complain about its demise is to expose oneself to attack from those who have no sympathy that an affluent, middle-aged white American has not lived to see his boyhood fantasies fulfilled.

Still, I worry that our inability to match the achievements of the 1960s space program might be symptomatic of a general failure of our society to get big things done. My parents and grandparents witnessed the creation of the automobile, the airplane, nuclear energy, and the computer, to name only a few. Scientists and engineers who came of age during the first half of the twentieth century could look forward to building things that would solve age-old problems, transform the landscape, build the economy, and provide jobs for the burgeoning middle class that was the basis for our stable democracy.

Now? Not so much.

I think Stephenson is talking about something very, very important here. And I want to suggest that the decision to focus on “the small and the soft” instead of “the big and the hard” creates a self-reinforcing momentum. So I’ll end here by quoting something I wrote about this a few months ago:

Self-soothing by Device. I suspect that few will think that addiction to distractive devices could even possibly be related to a cultural lack of ambition, but I genuinely think it’s significant. Truly difficult scientific and technological challenges are almost always surmounted by obsessive people — people who are grabbed by a question that won’t let them go. Such an experience is not comfortable, not pleasant; but it is essential to the perseverance without which no Big Question is ever answered. To judge by the autobiographical accounts of scientific and technological geniuses, there is a real sense in which those Questions force themselves on the people who stand a chance of answering them. But if it is always trivially easy to set the question aside — thanks to a device that you carry with you everywhere you go — can the Question make itself sufficiently present to you that answering it becomes something essential to your well-being? I doubt it.

Tuesday, May 19, 2015

Pynchon and the "Californian Ideology"

In a recent post I wrote,

The hidden relations between these two worlds — Sixties counterculture and today’s Silicon Valley business world — is, I believe, one of the major themes of Thomas Pynchon’s fiction and the chief theme of his late diptych, Inherent Vice and Bleeding Edge. If you want to understand the moral world we’re living in, you could do a lot worse than to read and reflect on those two novels.

Then yesterday I read this great post by Audrey Watters on what she calls the “Silicon Valley narrative” — a phrase about which she’s becoming ambivalent, wondering whether it might profitably be replaced by “Californian ideology.” That phrase, it turns out, comes from a 1995 essay by Richard Barbrook and Andy Cameron. I knew about this essay, had known about it for years, but had completely forgotten about it until reminded by Watters. Here’s the meat of the introduction:

At the end of the twentieth century, the long predicted convergence of the media, computing and telecommunications into hypermedia is finally happening. Once again, capitalism’s relentless drive to diversify and intensify the creative powers of human labour is on the verge of qualitatively transforming the way in which we work, play and live together. By integrating different technologies around common protocols, something is being created which is more than the sum of its parts. When the ability to produce and receive unlimited amounts of information in any form is combined with the reach of the global telephone networks, existing forms of work and leisure can be fundamentally transformed. New industries will be born and current stock market favourites will be swept away. At such moments of profound social change, anyone who can offer a simple explanation of what is happening will be listened to with great interest. At this crucial juncture, a loose alliance of writers, hackers, capitalists and artists from the West Coast of the USA have succeeded in defining a heterogeneous orthodoxy for the coming information age: the Californian Ideology.

This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley. Promoted in magazines, books, TV programmes, websites, newsgroups and Net conferences, the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. Not surprisingly, this optimistic vision of the future has been enthusiastically embraced by computer nerds, slacker students, innovative capitalists, social activists, trendy academics, futurist bureaucrats and opportunistic politicians across the USA. As usual, Europeans have not been slow in copying the latest fad from America. While a recent EU Commission report recommends following the Californian free market model for building the information superhighway, cutting-edge artists and academics eagerly imitate the post human philosophers of the West Coast’s Extropian cult. With no obvious rivals, the triumph of the Californian Ideology appears to be complete.

Putting this together with Watters’s post and with my essay on the late Pynchon… wow, does all this give me ideas. Perhaps Pynchon is the premier interpreter of the Californian ideology — especially when you take into account some of his earlier books as well, especially Vineland — someone who understands both its immense appeal and its difficulty in promoting genuine human flourishing. Much to think about and, I hope, to report on here, later.

Monday, May 18, 2015

ideas and their consequences

I want to spend some time here expanding on a point I made in my previous post, because I think it’s relevant to many, many disputes about historical causation. In that post I argued that people don’t get the impulse to alter their (or our) biological conformation from reading Richard Rorty or Judith Butler or any other theorist within the general orbit of the humanities, whatever a model of Theory prominent among literary scholars, in Continental philosophy, and in some interpretations of ancient Greek theoria might suggest. Rather, technological capability is its own ideology with its own momentum, and people who practice that ideology may sometimes be inclined to use Theory to provide ex post facto justifications for what they would have done even if Theory didn’t exist at all.

I think there is a great tendency among academics to think that cutting-edge theoretical reflection is ... well, is cutting some edges somewhere. But it seems to me that Theory is typically a belated thing. I’ve argued before that some of the greatest achievements of 20th-century literary criticism are in fact rather late entries in the Modernist movement: “We academics, who love to think of ourselves as being on the cutting-edge of thought, are typically running about half-a-century behind the novelists and poets.” And we run even further behind the scientists and technologists, who alter our material world in ways that generate the Lebenswelt within which humanistic Theory arises.

This failure of understanding — this systematic undervaluing of the materiality of culture and overvaluing of what thinkers do in their studies — is what produces vast cathedrals of error like the one I have called the neo-Thomist interpretation of history. When Brad Gregory and Thomas Pfau, following Etienne Gilson and Jacques Maritain and Richard Weaver, argue that most of the modern world (especially the parts they don't like) emerges from disputes among a tiny handful of philosophers and theologians at the University of Paris in the fourteenth century, they are making an argument that ought to seem self-evidently absurd. W. H. Auden used to say that the social and political history of Europe would be exactly the same if Dante, Shakespeare, and Mozart had never lived, and that seems to me not only true in those particular cases but also a good general rule for evaluating the influence of writers, artists, and philosophers. I see absolutely no reason to think that the so-called nominalists — actually a varied crew — had any impact whatsoever on the culture that emerged after their deaths. When you ask proponents of this model of history to explain how the causal chain works, how we got from a set of arcane, recondite philosophical and theological disputes to the political and economic restructuring of Western society, it’s impossible to get an answer. They seem to think that nominalism works like an airborne virus, gradually and invisibly but fatally infecting a populace.

It seems to me that Martin Luther’s ability to get a local printer to make an edition of Paul’s letter to the Romans stripped of commentary and set in wide margins for student annotation was infinitely more important for the rise of modernity than anything that William of Ockham and Duns Scotus ever wrote. If nominalist philosophy has played any role in this history at all — and I doubt even that — it has been to provide (see above) ex post facto justification for behavior generated not by philosophical change but by technological developments and economic practices.

Whenever I say this kind of thing people reply But ideas have consequences! And indeed they do. But not all ideas are equally consequential; nor do all ideas have the same kinds of consequences. Dante and Shakespeare and Mozart and Ockham and Scotus have indeed made a difference; but not the difference that those who advocate the neo-Thomist interpretation of history think they made. Moreover, and still more important, scientific ideas are ideas too; as are technological ideas; as are economic ideas. (It’s for good reason that Robert Heilbroner called his famous history of the great economists The Worldly Philosophers.)

If I’m right about all this — and here, as in the posts of mine I’ve linked to here, I have only been able to sketch out ideas that need much fuller development and much better support — then those of us who are seriously seeking alternatives to the typical modes of living in late modernity need a much, much better philosophy and theology of technology. Which is sort of why this blog exists ... but at some point, in relation to all the vital topics I’ve been exploring here, I’m going to have to go big or go home.

prosthetics, child-rearing, and social construction

There’s much to think and talk about in this report by Rose Eveleth on prosthetics, which makes me think about all the cool work my friend Sara Hendren is doing. But I’m going to set most of that fascinating material aside for now, and zero in on one small passage from Eveleth’s article:

More and more amputees, engineers, and prospective cyborgs are rejecting the idea that the “average” human body is a necessary blueprint for their devices. “We have this strong picture of us as human beings with two legs, two hands, and one head in the middle,” says Stefan Greiner, the founder of Cyborgs eV, a Berlin-based group of body hackers. “But there’s actually no reason that the human body has to look like as it has looked like for thousands of years.”

Well, that depends on what you mean by “reason,” I think. We should probably keep in mind that having “two legs, two hands [or arms], and one head in the middle” is not something unique to human beings, nor something that has been around for merely “thousands” of years. Bilateral symmetry — indeed, morphological symmetry in all its forms — is something pretty widely distributed throughout the evolutionary record. And there are very good adaptive “reasons” for that.

I’m not saying anything here about whether people should or should not pursue prosthetic reconstructions of their bodies. That’s not my subject. I just want to note the implication of Greiner’s statement — an implication that he might reject if it were spelled out as a proposition, but that is there to be inferred: that bilateral symmetry in human bodies is a kind of cultural choice, something that we happen to have been doing “for thousands of years,” rather than something deeply ingrained in a vast evolutionary record.

You see a similar but more explicit logic in the way the philosopher Adam Swift talks about child-rearing practices: “It’s true that in the societies in which we live, biological origins do tend to form an important part of people’s identities, but that is largely a social and cultural construction. So you could imagine societies in which the parent-child relationship could go really well even without there being this biological link.” A person could say that the phenomenon of offspring being raised by their parents “is largely a social and cultural construction” only if he is grossly, astonishingly ignorant of biology — or, more likely, has somehow managed to forget everything he knows about biology because he has grown accustomed to thinking in the language of an exceptionally simplistic and naïve form of social constructionism.

N.B.: I am not arguing for or against changing child-rearing practices. I am exploring how and why people simply forget that human beings are animals, are biological organisms on a planet with a multitude of other biological organisms with which they share many structural and behavioral features because they also share a long common history. (I might also say that they share a creaturely status by virtue of a common Maker, but that’s not a necessary hypothesis at the moment.) In my judgment, such forgetting does not happen because people have been steeped in social constructionist arguments; those are, rather, just tools ready to hand. There is a deeper and more powerful and (I think) more pernicious ideology at work, which has two components.

Component one: that we are living in an administrative regime built on technocratic rationality whose Prime Directive is, unlike the one in the Star Trek universe, one of empowerment rather than restraint. I call it the Oppenheimer Principle, because when the physicist Robert Oppenheimer was having his security clearance re-examined during the McCarthy era, he commented, in response to a question about his motives, “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you've had your technical success. That is the way it was with the atomic bomb.” Social constructionism does not generate this Prime Directive, but it can occasionally be used — in, as I have said, a naïve and simplistic form — to provide ex post facto justifications for following that principle. We change bodies and restructure child-rearing practices not because all such phenomena are socially constructed but because we can — because it’s “technically sweet.”

My use of the word “we” in that last sentence leads to component two of the ideology under scrutiny here: Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have ... or fondly imagine they have.

Saturday, May 16, 2015

Station Eleven and global cooling



I recently read Emily St. John Mandel’s Station Eleven, which didn’t quite overwhelm me the way it has overwhelmed many others — though I liked it. It’s good, but it could have been great. The post-apocalyptic world is beautifully and convincingly rendered: I kept thinking, Yes: this is indeed what we would value, should all be lost. But the force of the book is compromised, I think, by its chief structural conceit, which is that all the major characters in the novel’s present tense of civilizational ruin are linked in some way to an actor named Arthur Leander who died just before the Georgia Flu wiped out 99.9% of the human race. This conceit leads Mandel to flash back repeatedly to our own world and moment, and every time that happened I thought Dammit. I just didn’t care about Arthur Leander; I didn't want to read fairly conventional realistic-novel stuff. I wanted to rush through all that to get back to the future Mandel imagines so powerfully.

All that said, I have one small thought, totally irrelevant to my feelings about the book as a whole, that keeps returning to my mind. In one of the book’s first scenes, a troupe of musicians and actors (the Traveling Symphony) is walking along an old road somewhere in Michigan, and it’s very very hot, over a hundred degrees. This is twenty years after civilization died, which makes me wonder: Would the world by then be any cooler? If all of our culture’s heat sources ceased functioning today — no more air conditioners emitting hot air, no more internal combustion engines, no more factories blowing out smoke — how long would it take before there was a measurable cooling of the world’s climate?
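For a rough sense of scale (back-of-envelope figures of my own, nothing from the novel): the heat our machines give off directly is tiny next to the greenhouse forcing from the carbon already in the atmosphere, and that carbon would not vanish the day the machines stopped. A minimal sketch, assuming round numbers:

    # Back-of-envelope comparison: direct waste heat from human activity
    # versus anthropogenic greenhouse forcing. Both figures are rough
    # assumptions, good only to order of magnitude.
    EARTH_SURFACE_M2 = 5.1e14      # Earth's total surface area, square meters
    WASTE_HEAT_W = 18e12           # assumed ~18 TW of global energy use, nearly all ending up as heat
    GREENHOUSE_FORCING_W_M2 = 2.0  # assumed ~2 W/m^2 of greenhouse forcing from accumulated CO2

    waste_heat_flux = WASTE_HEAT_W / EARTH_SURFACE_M2
    print(f"direct waste heat:  {waste_heat_flux:.3f} W/m^2")         # about 0.035 W/m^2
    print(f"greenhouse forcing: {GREENHOUSE_FORCING_W_M2:.1f} W/m^2") # dozens of times larger

If those numbers are even roughly right, silencing every engine and air conditioner would remove only a percent or two of the total warming influence; any real cooling would wait on the slow drawdown of atmospheric CO2, a matter of centuries rather than decades. Which suggests that Mandel's sweltering Michigan, twenty years on, is not implausible.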

Monday, May 11, 2015

rewiring the reading organ

Here's Gary Shteyngart on Saul Bellow:

The first time I tackled Ravelstein, back in 2000, this American mind was as open to long-form fiction as any other and I wolfed the novel down in one Saturday between helpings of oxygen and water and little else. Today I find that Bellow’s comment, ‘It is never an easy task to take the mental measure of your readers,’ is more apt than ever. As I try to read the first pages of Ravelstein, my iPhone pings and squawks with increasing distress. The delicate intellectual thread gets lost. Macaulay. Ping! Antony and Cleopatra. Zing! Keynes. Marimba! And I’m on just pages 5 and 6 of the novel. How is a contemporary person supposed to read 201 pages? It requires nothing less than performing brain surgery on oneself. Rewiring the organ so that the neurons revisit the haunts they once knew, hanging out with Macaulay and Keynes, much as they did in 2000, before encounters with both were reduced to brief digital run-ins on some highbrow content-provider’s blog, back when knowledge was actually something to be enjoyed instead of simply being ingested in small career-sustaining bursts.

Shteyngart is sort of channeling Nick Carr here. Several years ago Carr wrote:

Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

Of course, some people have always been this way. In my book The Pleasures of Reading in an Age of Distraction, I claim that John Self, the protagonist of Martin Amis’s early novel Money, is our patron saint. Self tries reading Animal Farm in order to please a woman who bought it for him:

Reading takes a long time, though, don’t you find? It takes such a long time to get from, say, page twenty-one to page thirty. I mean, first you’ve got page twenty-three, then page twenty-five, then page twenty-seven, then page twenty-nine, not to mention the even numbers. Then page thirty. Then you’ve got page thirty-one and page thirty-three — there’s no end to it. Luckily Animal Farm isn’t that long a novel. But novels . . . they’re all long, aren’t they. I mean, they’re all so long. After a while I thought of ringing down and having Felix bring me up some beers. I resisted the temptation, but that took a long time too. Then I rang down and had Felix bring me up some beers. I went on reading.

Nothing against the Shteyngart piece, but it’s not really telling us anything new. People keep reminding themselves that this is The Way We Live Now, but they just keep on living that way. Eventually they’ll either live some other way or start telling different stories, I guess.