The chief executive of Alphabet calls for regulation of artificial intelligence as his own company delves deeper into developing new technologies.
Sundar Pichai, CEO of Google and Alphabet, is convinced that AI must be regulated to prevent the potential negative consequences of technologies such as deepfakes and facial recognition, he said in an op-ed for the Financial Times on Monday.
“There is no question in my mind that artificial intelligence needs to be regulated,” Pichai wrote. “It is too important not to. The only question is how to approach it.”
But the Google boss already appears to have some answers to this question ready in his own mind. His suggestions include international alignment between the US and the EU, agreement on “core values,” the use of open-source tools (such as those Google is already developing) to test adherence to written principles, and building broader regulatory frameworks on top of existing regulation such as Europe’s GDPR.
The timing of the editorial coincides with a big push from Google to reveal some of the results of its own work in AI and bring tools it has developed out into the world. We’re only a few weeks into the new decade, and Google has already announced a number of breakthroughs, including a tool that spots breast cancer missed by human eyes.
As the company pushes ahead with its own research and development into AI, it’s unsurprising to see Pichai bringing the debate around ethics and regulation into the spotlight. For Google, this is not a conversation that can be saved for tomorrow when its AI tools are being built and implemented today.
But while Pichai is clear that his own company needs to take a “principled approach to applying AI,” he also wants to help others by offering Google’s “expertise, experience and tools.” The risks he sees from the inside likely extend far and wide. “We need to be clear-eyed about what could go wrong,” he said.