Google on Artificial-Intelligence Panic: Get a Grip
DeepMind burst into the limelight last year when it published a paper in Nature showing how a computer could be programmed to teach itself to play Atari games better than most humans. After years of promise and disappointment in the once-obscure field of AI, DeepMind's breakthrough was that the machine did not have to be taught to play each game; it could carry over what it had learned from previous games to new ones.
It is this breakthrough, which DeepMind co-founder Mustafa Suleyman calls one of the most significant advances in artificial intelligence in a long time, that has rekindled anxiety about the potential risks of AI. Over the past year, figures such as astrophysicist Stephen Hawking, Microsoft's Bill Gates, and Tesla's Elon Musk, an early investor in DeepMind, have voiced concern over AI's potential to harm humanity.
“On existential risk, our perspective is that it’s become a real distraction from the core ethics and safety issues, and it’s completely overshadowed the debate,” Suleyman said. “The way we think about AI is that it’s going to be a hugely powerful tool that we control and that we direct, whose capabilities we limit, just as you do with any other tool that we have in the world around us, whether they’re washing machines or tractors. We’re building them to empower humanity and not to destroy us.”
Google DeepMind now employs about 140 researchers from around the world at its lab in a new building in King's Cross, London. Machine learning is being used across Google, in areas such as image search, robotics, biotech and Google X, the company's highly experimental lab. For instance, Google last month unveiled a feature in its Photos product that lets users search their photos with text queries such as "beds," "children" or "holiday," even though they never applied those labels to the photos themselves.
It is these kinds of advances, and the potential to solve some of humanity's biggest problems, such as food insecurity, global warming and income inequality, that are being overshadowed by "hype" around AI's existential threat, Suleyman said. "The idea that we should be spending these moments now talking about consciousness and robot rights is really quite preposterous," he said.
But Suleyman is taking the issues behind the hype seriously. DeepMind made the establishment of an ethics and safety board a condition of its acquisition by Google, and the deal also barred any AI-related work done by DeepMind from being used for military or intelligence purposes.
But more than a year after the acquisition, Google has yet to make any information about the ethics board public, despite repeated requests from academics and the media. By contrast, when Google last year created a panel of experts to advise it on the European "right to be forgotten" debate, the names of that advisory council's members were made public and its meetings were open. That panel, made up of academics and Internet experts, was nonetheless led by Google Chairman Eric Schmidt and Chief Legal Officer David Drummond.
Suleyman wouldn't comment on the makeup of the AI ethics board, how its members were chosen, how it will operate or what powers it will have. He said Google was building a team of academics, policy researchers, economists, philosophers and lawyers to tackle the ethics issues, but that only three or four people were currently focused on them. The company was looking to hire experts in policy, ethics and law, he said. "We will make it public in due course," Suleyman said. A Google spokeswoman on Monday also declined to make any official comment on the ethics board.
Asked why Google was keeping the composition of the AI ethics board a secret despite calls for transparency and caution, Suleyman said, “That’s what I said to Larry [Page, Google’s co-founder]. I completely agree. Fundamentally we remain a corporation and I think that’s a question for everyone to think about. We’re very aware that this is extremely complex and we have no intention of doing this in a vacuum or alone.”
Amir Mizroch