Google breaks promise not to use AI in weapons creation

Google recently updated its AI principles, the concrete standards intended to guide how the web giant develops AI. Initially there were seven principles, but they have been reduced to three core tenets.

These are titled:

  • Bold Innovation
  • Responsible Development and Deployment
  • Collaborative Progress, Together

Gone is any promise not to do harm, a removal that has drawn ethical concern from human rights watchdogs and advocates. According to The Conversation, the removed statements included promises such as:

  • Not pursuing technologies or weapons that cause or are likely to cause overall harm
  • Not pursuing technologies that gather or use information for surveillance
  • Not pursuing technologies that infringe upon international law and human rights

Google has justified the change by pointing to the increasingly complicated geopolitical situation, citing national security as a key area of AI development.

Criticism of weaponized AI

Human Rights Watch points out that while Google states it will “stay consistent with widely accepted principles of international law and human rights”, the wording is vague and gives no specific examples of how it will do so. This is concerning, as the change leaves considerable room for applications that infringe on human rights.

Coincidentally, the Doomsday Clock was set to 89 seconds to midnight not long before this Google policy update. The clock measures how close humanity is to destroying the world with its own dangerous technologies, as judged by a board of scientists and other experts in fields such as climate science and nuclear technology. The latest update cited an array of factors, one of which was AI.

On AI’s potential in wartime, the Doomsday Clock statement reads:

“Systems that incorporate artificial intelligence in military targeting have been used in Ukraine and the Middle East, and several countries are moving to integrate artificial intelligence into their militaries. Such efforts raise questions about the extent to which machines will be allowed to make military decisions—even decisions that could kill on a vast scale, including those related to the use of nuclear weapons.”

An ongoing debate

The general perception of AI has been volatile since its recent boom. Much of the debate over whether it is a positive or negative force has centered on generative AI, particularly its environmental impact and the copyright issues surrounding the data large language models are trained on. While AI has genuine potential to improve many areas of everyday life, including cancer detection, Google’s new stance on AI and weapons is sure to add further controversy to the ongoing discussion.
