You’re at the wheel, tired. You close your eyes, drift from your lane. This time you are lucky. You awaken, scared. If you are smart, you won’t drive again when you are about to fall asleep.
Well ... people don't always have a choice about this kind of thing. I mean, sometimes people drive when they’re about to fall asleep because they’ve been working a really long time and driving is the only way they can get home. But never mind. Proceed.
Through your mistakes, you learn. But other drivers won’t learn from your mistakes. They have to make the same mistakes by themselves — risking other people’s lives.
This is true. Also, when I learned to walk, to read, to hit a forehand, to drive a manual-transmission car, no one else but me learned from my mistakes. This seems to be how learning works, in general. However, some of the people who taught me these things explained them to me in ways that helped me to avoid mistakes; and often they were drawing on their own experience. People may even have told me to load up on caffeine before driving late at night. This kind of thing happens a lot among humans — the sharing of knowledge and experience.
Not so the self-driving car. When it makes a mistake, all the other cars learn from it, courtesy of the people programming them. The first time a self-driving car encountered a couch on the highway, it didn’t know what to do and the human safety driver had to take over. But just a few days later, the software of all cars was adjusted to handle such a situation. The difference? All self-driving cars learn from this mistake, not just one. Including future, “unborn” cars.
Okay, so the cars learn ... but I guess the people in the cars don't learn anything.
When it comes to artificial intelligence (AI), computers learn faster than people.
I don't understand what “when it comes to” means in this sentence, but “Some computers learn some things faster than some people” would be closer to a true statement. Let’s stick with self-driving cars for a moment: you and I have no trouble discerning and avoiding a pothole, but Google’s cars can’t do that at all. You and I can tell when a policeman on the side of the road is signaling for us to slow down or stop, and can tell whether that’s a big rock in the road or just a piece of cardboard, but Google’s cars are clueless.
The Gutenberg Bible is a beautiful early example of a technology that helped humans distribute information from brain to brain much more efficiently. AI in machines like the self-driving car is the Gutenberg Bible, on steroids.
“On steroids”?
The learning speed of AI is immense, and not just for self-driving cars. Similar revolutions are happening in fields as diverse as medical diagnostics, investing, and online information access.
I wonder what simple, everyday tasks those systems are unable to perform.
Because machines can learn faster than people, it would seem only a matter of time before we are outranked by them.
“Outranked”?
Today, about 75 percent of the United States workforce is employed in offices — and most of this work will be taken away by AI systems. A single lawyer or accountant or secretary will soon be 100 times as effective with a good AI system, which means we’ll need fewer lawyers, accountants, and secretaries.
What do you mean by “effective”?
It’s the digital equivalent of the farmers who replaced 100 field hands with a tractor and plow. Those who thrive will be the ones who can make artificial intelligence give them superhuman capabilities.
“Make artificial intelligence give them superhuman capabilities”? How?
But if people become so very effective on the job, you need fewer of them, which means many more people will be left behind.
“Left behind” in what way? Left behind to die on the side of the road? Or what?
That places a lot of pressure on us to keep up, to get lifelong training for the skills necessary to play a role.
“Lifelong training”? Perhaps via those MOOCs that have been working so well? And what does “play a role” mean? The role of making artificial intelligence give me superhuman capabilities?
The ironic thing is that with the effectiveness of these coming technologies we could all work one or two hours a day and still retain today’s standard of living.
How? No, seriously, how would that play out? How do I, in my job, get to “one or two hours a day”? How would my doctor do it? How about a plumber? I’m not asking for a detailed roadmap of the future, but just sketch out a path, dude. Otherwise I might think you’re just talking through your artificially intelligent hat. Also, do you know what “ironic” means?
But when there are fewer jobs — in some places the chances of landing a position at Walmart are smaller than the chances of gaining admission to Harvard —
That’s called lying with statistics, but never mind, keep going.
— one way to stay employed is to work even harder. So we see people working more, not less.
If by “people,” you mean “Americans,” then that is probably true — but these things have been highly variable throughout history. And anyway, how does “people working more” fit with your picture of the coming future?
Get ready for unprecedented times.
An evergreen remark, that one is.
We need to prepare for a world in which fewer and fewer people can make meaningful contributions.
Meaningful contributions to what?
Only a small group will command technology and command AI.
What do you mean by “command”? Does a really good plumber “command technology”? If not, why not? How important is AI in comparison to other technologies, like, for instance, farming?
What this will mean for all of us, I don’t know.
Finally, an honest and useful comment. Thrun doesn't know anything about what he was asked to comment on, but that didn't stop him from extruding a good deal of incoherent vapidity, nor did it stop an editor at Pacific Standard from presenting it to the world.