
Jobs are for machines, life is for people

On Dec. 2, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.

Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, you may have heard it without fully understanding it. The language is something called deep learning. And the whispered word was a computer’s use of it to defeat one of the world’s top players in a game called Go, a board game so complex that it can be likened to playing 10 chess matches simultaneously on the same table.

This may sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in parlor games that humans invented to fill their idle hours. But this feat is about far more than bragging rights. This was considered a “holy grail” level of achievement, and it’s a clear signal that advances in technology are now so exponential that milestones we once thought far away will start arriving rapidly. What’s more, humans are entirely unprepared. These exponential advances, most notably in forms of artificial intelligence, will prove daunting for as long as we continue to insist upon employment as our primary source of income. The White House, in a stunning report to Congress this week, put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose his job to a machine. Even workers making as much as $40 an hour face odds of 31 percent.

We’re building a world where a universal basic income may be the only rational, fair way for society to function — and that’s not a future we should fear.

FIRST, A WORD on how we got here. All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Routine work began to stagnate around 1990, because routine tasks are exactly what machines handle best.

Of course, routine work once formed the basis of the American middle class. It’s routine, manual work that Henry Ford paid people middle-class wages to perform, and it’s routine cognitive work that once filled American office buildings. That world is dwindling, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought that they pay next to nothing, and jobs that require so much thought that the salaries are exorbitant.

Think of our economy as a four-engine plane that can stay aloft with only two engines working. The two routine engines have already cut out, leaving nonroutine cognitive and nonroutine manual work to keep us airborne. But what happens when those last two begin to sputter? That’s the threat the advancing fields of robotics and AI pose to the final two engines of work because, for the first time, we are successfully teaching machines to learn.

Machines are getting smarter because we’re getting better at building them. And we’re getting better at it, in part, because we are smarter about the ways in which our own brains function.

What’s in our skulls is essentially a mass of interconnected cells. Some of these connections are short, and some are long; some cells are connected to only one other, and some are connected to many. Electrical signals pass through these connections at varying rates, triggering subsequent neural firings in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex.

Deep neural networks are kind of like pared-down virtual brains. They provide an avenue to machine learning that has made incredible leaps once thought to be much further down the road. How? It’s not just the obvious growing capability of our computers and our expanding knowledge of neuroscience; it’s also the vastly growing expanse of our collective data.
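To make the analogy concrete, here is a minimal sketch, in Python with NumPy, of a signal cascading through a small deep network. The layer sizes, random untrained weights, and ReLU activation are illustrative assumptions for this sketch, not a description of any particular system mentioned in this piece.

```python
import numpy as np

def relu(x):
    # A neuron "fires" only when its weighted input is positive.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three layers of random, untrained connection weights. Each matrix maps one
# layer's activations to the next, like bundles of synapses of varying strength.
weights = [
    rng.normal(size=(4, 8)),
    rng.normal(size=(8, 8)),
    rng.normal(size=(8, 3)),
]

def forward(signal):
    # The signal cascades from layer to layer, like falling dominoes.
    for w in weights:
        signal = relu(signal @ w)
    return signal

print(forward(rng.normal(size=4)))  # three output activations
```

Training is what turns these random weights into useful ones, and that is where the data comes in.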

Big data isn’t just some buzzword. We’re creating so much data that a 2013 report by SINTEF estimated that 90 percent of all data in the world had been created in just the prior two years. And this incredible rate of data creation is doubling every 18 months thanks to the Internet, where last year we uploaded 300 hours of video to YouTube and sent 350,000 tweets every minute.

Everything we do is generating data, and lots of data is exactly what machines need in order to learn to learn. Imagine programming a computer to recognize a chair. Early incarnations of the program would be far better at determining what isn’t a chair than what is.

Humans learn the difference as children, when chairs are identified for us by name. If children point at a table and say “chair,” they’re corrected with “table.” This is called reinforcement learning. The label “chair” gets connected to every chair, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.

The unprecedented power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do without giving them explicit instructions. Instead of describing “chairness” to a computer, we can just plug it into the Internet and feed it millions of pictures of chairs for a general idea. Next, we test it with even more images. When the machine is wrong, it’s corrected, further improving its “chairness” detection.
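Here is a minimal sketch of that correction loop, in plain Python with NumPy. The tiny feature vectors, the labels, and the simple one-layer model are stand-ins invented for illustration; a real chair detector would be a deep network trained on actual images.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for pictures: each row is a vector of image features, and the
# label says whether that "picture" shows a chair (1) or not (0).
features = rng.normal(size=(200, 16))
hidden_rule = rng.normal(size=16)                # the pattern to be discovered
labels = (features @ hidden_rule > 0).astype(float)

weights = np.zeros(16)  # the machine starts with no idea of "chairness"

def predict(x, w):
    # Squash the weighted evidence into a probability that "this is a chair".
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# The correction loop: every wrong guess nudges the weights, strengthening
# some connections and weakening others, so the same mistake grows less likely.
for _ in range(50):
    for x, y in zip(features, labels):
        p = predict(x, weights)
        weights += 0.1 * (y - p) * x

accuracy = np.mean((predict(features, weights) > 0.5) == labels)
print(f"chair detector accuracy after training: {accuracy:.0%}")
```

Each pass through the loop is the machine-scale version of a child saying “chair,” being told “table,” and adjusting.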

Repetition of this process results in a computer that knows what a chair is when it sees it, often as well as a human can. Unlike us, however, it can then sort through millions of images within a matter of seconds. And when one machine learns something, it can pass on that knowledge to an entire network of connected machines — instantly.
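A minimal sketch of that instant hand-off, again in Python with NumPy (the weight values and file name are invented for illustration): once one machine has learned, its skill is just data to copy, not an apprenticeship to repeat.

```python
import numpy as np

# Machine A has finished learning: its "skill" is nothing more than an array
# of connection weights.
learned_weights = np.array([0.8, -1.2, 0.5, 2.0])
np.save("chair_detector.npy", learned_weights)  # hypothetical shared file

# Machine B (or a million machines) loads those same weights and is instantly
# as capable as Machine A, with no training of its own.
weights_b = np.load("chair_detector.npy")
assert np.array_equal(learned_weights, weights_b)
```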

One powerful example of this learning process comes from the electric carmaker Tesla. Google spent six years accumulating 1.7 million miles of driving data with its prototype self-driving cars. Tesla, on the other hand, simply sent out a software update, instantly teaching its entire fleet of cars how to drive themselves with a new “autopilot” ability. The networked fleet then began racking up the equivalent of Google’s total mileage every week. Every single Tesla is now effectively teaching all other Teslas the “chairness” of driving.

Extend the Tesla example to the Internet of Things, where any interaction with a connected object has the potential to teach something new to every other connected object, and the immense scaling of networked machine learning becomes almost unimaginable.

IN A FREQUENTLY cited 2013 paper, Oxford University researchers estimated that about half of all existing jobs could be automated by 2033. Meanwhile, self-driving vehicles, again thanks to machine learning, could drastically affect entire economies by eliminating millions of jobs within a short span of time. New jobs are no longer being created faster than technology destroys them. A report by the World Economic Forum estimates that despite the creation of millions of new jobs over the next four years, there will likely be a net loss of 5 million.

All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at Singularity University at the end of 2015, prominent data scientist Jeremy Howard asked, “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, “If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”

The combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner, using a dataset of 175 million chess positions and attaining International Master status in just 72 hours by repeatedly playing itself.

In 2015, an AI even passed a visual Turing test by learning to learn: shown a single unknown character in a fictional alphabet, it could instantly reproduce that character indistinguishably from a human given the same task. These are all major milestones in AI.

Nonetheless, just months before Google announced AlphaGo’s victory, experts asked how long it would take a computer to defeat a prominent Go player were still estimating about a decade. That was considered a fair guess because Go is a game with more possible positions than there are atoms in the known universe, which rules out any brute-force approach of scanning every possible move to determine the next best one. But deep neural networks got around that barrier in the same way our own minds do: by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, which analyzed millions of professional games and played itself millions more times. For the game of Go, the enemy wasn’t a month’s march from the castle; it was already inside the keep, feet up on the table, eating the king’s lunch.
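As a minimal sketch of that shift in approach (Python with NumPy; the board representation, the candidate_moves helper, and the stand-in “value network” are hypothetical simplifications, nothing like AlphaGo’s actual architecture): rather than searching every continuation, score the position after each legal move with a learned evaluation and play the one that feels best.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a trained value network: fixed random weights over a flattened
# board. AlphaGo's real networks are deep models trained on millions of games;
# this shows only the shape of the idea.
value_weights = rng.normal(size=19 * 19)

def estimate_value(board):
    # "How good does this position feel?" reduced to one learned number.
    return float(board.flatten() @ value_weights)

def candidate_moves(board):
    # Hypothetical helper: treat every empty point as a legal move,
    # ignoring Go's real rules (ko, suicide) for brevity.
    return list(zip(*np.where(board == 0)))

def choose_move(board, player=1):
    # No brute-force tree over every continuation: evaluate the position
    # after each candidate move and play the one that "feels" best.
    best_move, best_value = None, -np.inf
    for row, col in candidate_moves(board):
        trial = board.copy()
        trial[row, col] = player
        value = estimate_value(trial)
        if value > best_value:
            best_move, best_value = (row, col), value
    return best_move

empty_board = np.zeros((19, 19), dtype=int)
print(choose_move(empty_board))
```

AlphaGo pairs estimates like these with a guided search and with policies refined by self-play, but the core trick is the same: substitute learned judgment for exhaustive enumeration.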

The Go lesson shows us that nothing humans do as a job is safe anymore. From making hamburgers to administering anesthesia, machines will be able to perform these tasks successfully, and at lower cost than humans.

AMELIA IS many things. But she’ll never take a sick day, join a union, or waste time on Facebook on the job. Created by IPsoft over the past 16 years, the AI system learned how to perform the work of call center employees. She can learn in seconds what takes humans months to master, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company trial, she successfully handled one of every 10 calls in the first week, and by the end of the second month, she could resolve six in 10. Deploy her worldwide, and 250 million people can start looking for a new job.

Viv, an AI coming soon from the creators of Siri, will be our own personal assistant. She’ll perform tasks online for us and even function as a Facebook News Feed on steroids, suggesting the media she knows we’ll like best. With Viv doing all this for us, we’ll see far fewer ads, and that means the entire advertising industry, the industry the entire Internet is built upon, stands to be hugely disrupted.

A world with Amelia and Viv — and the countless other AI counterparts coming online soon — is going to force serious societal reconsiderations. Is it fair to ask any human to compete against a potentially flawless machine in the next cubicle? If machines are performing most of our jobs and not getting paid, where does that money go instead? And what does that unpaid money no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide?

We must seriously start talking about decoupling income from work. Adopting a universal basic income, aside from immunizing against the negative effects of automation, also decreases the risks inherent in entrepreneurship and shrinks the bureaucracies otherwise necessary to boost incomes. It’s for these reasons that it has cross-partisan support, and it is even now in the beginning stages of implementation in countries like Switzerland, Finland, and the Netherlands.

Artificial intelligence pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, also warned about the immediate impacts of AI on society in a recent interview with Futurism: “AI is already having a big impact on our economies. . . . My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”

Even Andrew Ng, Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, said during an onstage interview at this year’s Deep Learning Summit that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”

When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when the very livelihoods of millions are at stake?

No nation is yet ready for the changes ahead. High rates of labor force nonparticipation lead to social instability, as does a lack of consumers within consumer economies. It turns out that humans are good at designing things but not so great at picturing the world their technology will create. What’s the big lesson to learn, in a century when machines can learn? Maybe it is that jobs are for machines, and life is for people.

Scott Santens