Google Revises AI Ethics Policy, Drops Pledge on Weapons and Surveillance

Google has quietly dropped its longstanding promise to avoid artificial intelligence applications in weapons and surveillance in an updated version of its ethics policy.

Previously, the California-based tech giant’s “AI Principles” declared that it would not develop AI technologies that could “cause or are likely to cause overall harm.” That included a pledge not to use AI for weapons or for surveillance that violates “internationally accepted norms.” But in an update the company announced on Tuesday, that language is gone, replaced by a commitment to comply with “widely accepted principles of international law and human rights.”

“We believe that democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” wrote Demis Hassabis, head of Google DeepMind, and James Manyika, senior vice president of research labs, in a blog post. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Image: Googleplex HQ (cropped). The Pancake of Heaven!, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0), via Wikimedia Commons

While the new policy shifts the emphasis, it leaves Google’s position on AI in military and surveillance contexts uncertain. The company declined to provide further clarification.

The Evolution of Google’s AI Ethics

Google first published its AI principles in 2018, after employee backlash over its involvement in the U.S. Department of Defense’s Project Maven. The project aimed to use AI in military operations, especially in identifying drone strike targets. The controversy led to employee resignations and a petition signed by thousands demanding Google withdraw from the initiative.

After sustained internal protest, Google decided not to renew its contract with the Pentagon and later declined to compete for a $10 billion cloud computing contract with the U.S. Department of Defense, citing a conflict with its AI principles.

The latest shift in Google’s AI ethics policy coincides with significant political changes in the U.S. Only a week after taking office on January 20, President Donald Trump rescinded an executive order by his predecessor, Joe Biden, which had required AI developers to share safety test results with the government before launching new technologies.

Google CEO Sundar Pichai joined a cluster of prominent tech leaders, including Amazon’s Jeff Bezos and Meta’s Mark Zuckerberg, in attending Donald Trump’s inauguration. The timing of the policy change has done little to quiet speculation that Google is aligning itself with a shift in U.S. government priorities on AI development.

Possible Consequences

The revision suggests that Google may be taking a more permissive stance on AI applications, opening the door to future collaboration with government agencies and defense organizations. Yet even as the company continues to stress the importance of ethics, abandoning its explicit commitment not to support AI-powered weapons and surveillance tools marks a notable policy shift.

The move also reflects broader industry trends, as tech companies increasingly navigate the complex intersection of AI innovation, ethics, and national security. As AI continues to evolve, the debate persists over how best to balance ethical commitments with business opportunities, both within the industry and beyond.

Whether Google’s new principles will translate into concrete changes in its AI development strategy remains to be seen. However, the removal of explicit restrictions on military and surveillance applications raises questions about the company’s long-term vision for AI and its role in global security and governance.
