Text Patterns - by Alan Jacobs

Monday, April 6, 2015

Morozov on Carr

Evgeny Morozov is probably not really “Evgeny Morozov,” but he plays him on the internet and has been doing so for years. It’s a simple role — you tell everyone else writing about technology that they’re wrong — and I suspect that it gets tiring after a while, though Morozov himself has been remarkably consistent in the vigor he brings to the part. A few years ago he joked on Twitter, “Funding my next book tour entirely via Kickstarter. For $10, I promise not to tweet at you. For $1000, I won’t review your book.” Well, I say “joked,” but ...

In his recent review of Nicholas Carr’s book The Glass Cage — a book I reviewed very positively here — Morozov takes a turn which will enable him to perpetuate and extend his all-critique-all-the-time approach indefinitely. You can see what’s coming when he chastises Carr for being insufficiently attentive to philosophical traditions other than phenomenology. If, gentle reader, upon hearing this you wonder why a book on automation would be obliged to attend to any philosophical tradition, bear with me as Morozov moves toward his peroration:

Unsurprisingly, if one starts by assuming that every problem stems from the dominance of bad ideas about technology rather than from unjust, flawed, and exploitative modes of social organization, then every proposed solution will feature a heavy dose of better ideas. They might be embodied in better, more humane gadgets and apps, but the mode of intervention is still primarily ideational. The rallying cry of the technology critic — and I confess to shouting it more than once — is: “If only consumers and companies knew better!” One can tinker with consumers and companies, but the market itself is holy and not to be contested. This is the unstated assumption behind most popular technology criticism written today.

And:

Even if Nicholas Carr’s project succeeds — i.e., even if he does convince users that all that growing alienation is the result of their false beliefs in automation and even if users, in turn, convince technology companies to produce new types of products — it’s not obvious why this should be counted as a success. It’s certainly not going to be a victory for progressive politics.

And:

At best, Carr’s project might succeed in producing a different Google. But its lack of ambition is itself a testament to the sad state of politics today. It’s primarily in the marketplace of technology providers — not in the political realm — that we seek solutions to our problems. A more humane Google is not necessarily a good thing — at least, not as long as the project of humanizing it distracts us from the more fundamental political tasks at hand. Technology critics, however, do not care. Their job is to write about Google.

So on this account, if you make the mistake of writing a book about our reliance on technologies of automation and the costs and benefits to human personhood of that reliance, instead of writing about “unjust, flawed, and exploitative modes of social organization”; if your book does not strive to be “a victory for progressive politics”; if your book merely pushes for “a different Google” rather than ... I don't know, probably the dismantling of global capitalism; if your book, in short, is so lamentably without “ambition”; well, then, there’s only one thing to say.

I guess everyone other than Michael Hardt and Antonio Negri, Thomas Piketty, and maybe David Graeber has been wasting their (and our) time. God help the next person who writes about Bach without railing against the music industry’s role as an ideological state apparatus, or who writes a love story without protesting the commodification of sex under late capitalism. I don't think Morozov will be happy until every writer sounds like a belated member of the Frankfurt School.

But the thing is, Carr’s book could actually be defended on political grounds, should someone choose to do so. The book is primarily concerned with balancing the gains in automated efficiency and safety with the costs to human flourishing, and human flourishing is what politics is all about. People who have become so fully habituated to an automated environment that they simply can’t function without it will scarcely be in a position to offer serious resistance to our political-economic regime. Carr could be said to be laying part of the foundation for such resistance, by getting his readers to begin to think about what a less automated and more active, decisive life could look like.

But is it really necessary that every book be evaluated by these criteria?

Tuesday, March 4, 2014

the self that computers know

Ed Finn:

The idea that a computer might know you better than you know yourself may sound preposterous, but take stock of your life for a moment. How many years of credit card transactions, emails, Facebook likes, and digital photographs are sitting on some company’s servers right now, feeding algorithms about your preferences and habits? What would your first move be if you were in a new city and lost your smartphone? I think mine would be to borrow someone else’s smartphone and then get Google to help me rewire the missing circuits of my digital self.  

My point is that this is not about inconvenience — increasingly, it’s about a more profound kind of identity outsourcing....  

In history, in business, in love, and in life, the person (or machine) who tells the story holds the power. We need to keep learning how to read and write in these new languages, to start really seeing our own shadow selves and recognizing their power over us. Maybe we can even get them on our side.

A few years ago I quoted Jaron Lanier on the Turing Test:

But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?

Ed Finn is inadvertently illustrating Lanier’s point. What does a computer think my “identity” is, my “self” is? Why, credit card transactions and Facebook likes, of course. So Finn agrees with the computer. He for one welcomes our new cloud-based overlords.

Monday, November 18, 2013

Carr on automation

If you haven't done so, you should read Nick Carr’s new essay in the Atlantic on the costs of automation. I’ve been mulling it over and am not sure quite what I think.

After describing two air crashes that happened in large part because pilots accustomed to automated flying were unprepared to take proper control of their planes during emergencies, Carr comes to his key point:

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.

And late in the essay he writes,

In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.

Carr isn’t arguing here that the automating of tasks is always, or even usually, bad, but rather that the default assumption of engineers — and then, by extension, most of the rest of us — is that when we can automate we should automate, in order to eliminate that pesky thing called “human error.”

Carr’s argument for reclaiming a larger sphere of action for ourselves, for taking back some of the responsibilities we have offloaded to machines, seems to be twofold:

1) It’s safer. If we continue to teach people to do the work that we typically delegate to machines, and do what we can to keep those people in practice, then when the machines go wrong we’ll have a pretty reliable fail-safe mechanism: us.

2) It contributes to human flourishing. When we understand and can work within our physical environments, we have better lives. Especially in his account of Inuit communities that have abandoned traditional knowledge of their geographical surroundings in favor of GPS devices, Carr seems to be sketching out — he can’t do more in an essay of this length — an account of the deep value of “knowledge about reality” that Albert Borgmann develops at length in his great book Holding on to Reality.

But I could imagine people making some not-obviously-wrong counterarguments — for instance, that the best way to ensure safety, especially in potentially highly dangerous situations like air travel, is not to keep human beings in training but rather to improve our machines. Maybe the problem in that first anecdote Carr tells is setting up the software so that in certain kinds of situations responsibility is kicked back to human pilots; maybe machines are just better at flying planes than people are, and our focus should be on making them better still. It’s a matter of properly calculating risks and rewards.

Carr’s second point seems to me more compelling but also more complicated. Consider this: if the Inuit lose something when they use GPS instead of traditional and highly specific knowledge of their environment, what would I lose if I had a self-driving car take me to work instead of driving myself? I’ve just moved to Waco, Texas, and I’m still trying to figure out the best route to take to work each day. In trying out different routes, I’m learning a good bit about the town, which is nice — but what if I had a Google self-driving car and could just tell it the address and let it decide how to get there (perhaps varying its own route based on traffic information)? Would I learn less about my environment? Maybe I would learn more, if instead of answering email on the way to work I looked out the window and paid attention to the neighborhoods I passed through. (Of course, in that case I would learn still more by riding a bike or walking.) Or what if I spent the whole trip in contemplative prayer, and that helped me to be a better teacher and colleague in the day ahead? I would be pursuing a very different kind of flourishing than that which comes from knowing my physical environment, but I could make a pretty strong case for its value.

I guess what I’m saying is this: I don't know how to evaluate the loss of “knowledge about reality” that comes from automation unless I also know what I am going to be doing with the freedom that automation grants me. This is the primary reason why I’m still mulling over Carr’s essay. In any case, it’s very much worth reading.