Google has quietly removed a key passage from its AI ethics policy, eliminating its earlier pledge not to use the technology for weapons development or mass surveillance.
This change signals a significant shift in the tech giant’s stance on the ethical boundaries of artificial intelligence.
Previously, Google’s AI principles, accessible through the Internet Archive’s Wayback Machine, clearly stated that the company would not pursue AI applications for weapons or other technology designed to cause harm.
Another explicitly prohibited area was technology used for surveillance in violation of internationally accepted norms.
These restrictions no longer appear in the updated version of the company’s principles.
The acceleration of AI development, particularly since OpenAI launched ChatGPT in 2022, has left legislation and regulatory frameworks struggling to keep up.
Amid the rapidly evolving AI landscape, Google’s recent update appears to relax its own ethical guardrails.
In a blog post on Tuesday, James Manyika, Google’s senior vice president for research, labs, technology and society, and Demis Hassabis, the head of Google DeepMind, acknowledged that evolving global AI policies have shaped the company’s perspective on the potential and risks of AI.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” they wrote.
They further stressed the importance of collaboration between companies, governments, and organizations that share those values.
“We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security,” the blog post continued.
Google first introduced its AI principles in 2018, years before AI became a cornerstone of daily life.
At that time, the company’s focus on ethical guidelines was so strong that it dropped out of a $10 billion bid for a Pentagon cloud computing contract, explaining that it “couldn’t be assured that it would align with our AI Principles.”
The move came after more than 4,000 employees signed a petition demanding that Google adopt a clear policy against developing warfare technology.
Around a dozen employees resigned in protest. This latest update marks a stark departure from those original commitments.