As the world's biggest militaries embrace artificial intelligence, Google says it will no longer be involved.
In a set of principles laid out to guide its development of artificial intelligence (AI), Google announced that it will not use the technology for weapons or to "cause or directly facilitate injury to people".
Chief executive Sundar Pichai, in a blog post outlining the company's artificial intelligence policies, noted that even though Google won't use AI for weapons, "we will continue our work with governments and the military in many other areas" including cybersecurity, training, and search and rescue.
The news comes as Google faces pressure from employees and others over a contract with the US military, which the California tech giant said last week it would not renew.
Google committed to seven principles to guide its development of AI applications, and it laid out four specific areas for which it will not develop AI.
In addition to weaponry, Google said it will not design or deploy AI for:
• Technologies that cause or are likely to cause harm.
• Technologies that gather or use information for surveillance violating internationally accepted norms.
• Technologies whose purpose contravenes widely accepted principles of international law and human rights.
That means steering clear of "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", Mr Pichai said.
By Neo Sesinye