Google CEO Sundar Pichai said last week that concerns about harmful uses of artificial intelligence are "very legitimate".
In a Washington Post interview, Pichai said that AI tools would need ethical safeguards built in and would require companies to think deeply about how the technology could be abused.
"I think tech has to realize it just can't build it and then fix it," said Pichai, fresh from his testimony to House lawmakers. "I think that doesn't work."
Tech giants must ensure that artificial intelligence does not harm humanity, Pichai noted.
The tech executive, who runs a company that uses AI in many of its products, including its powerful search engine, said he is optimistic about the technology's long-term benefits. But his assessment of AI's potential downsides parallels warnings from lawmakers and technologists about AI's ability to entrench authoritarian regimes, strengthen mass surveillance and spread misinformation, among other possibilities.
Google's work on Project Maven, a military AI program, drew protests from its employees and led the tech giant to announce that it would not continue the work when the contract expires in 2019.
Pichai said in the interview that governments around the world are still trying to understand the effects of AI and the potential need for regulation.
"Sometimes I worry that people underestimate the scale of change that is possible in the medium and long term, and I think the questions are actually quite complex," he told the Post.

Other technology companies, such as Microsoft, have embraced the regulation of AI, both by the companies that build the technology and by the governments that oversee its use.