At Google, a central team is dedicated to ethical reviews of new AI and advanced technologies before launch, working with internal domain experts in machine-learning fairness, security, privacy, human rights, the social sciences, and, for cultural context, Google’s employee resource groups.
AI Governance reviews and operations
We assess proposals for new AI research and applications for alignment with our Principles. As advanced technologies emerge and evolve, we’ll continue to refine our process.
Overview
Any team can request AI Principles advice. Reviewers also consider an ongoing pipeline of new AI research papers, product ideas, and other projects.
- Reviewers analyze the scale and scope of a technology’s potential benefits and harms.
- Reviewers recommend technical evaluations, such as checking ML models for unfair bias (a sketch of one such check follows this list).
- Reviewers decide whether or not to pursue the AI application under review (e.g., Cloud AI Hub and text-to-speech).
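To make the "technical evaluations" step above concrete, the sketch below computes a demographic parity difference: the gap in a model's positive-prediction rates across demographic groups, one common signal of unfair bias. This is a generic, minimal illustration; the function, toy data, and group labels are hypothetical and do not represent Google's internal review tooling.

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction (selection) rates across groups.

    A value near 0 means the model selects all groups at similar rates;
    larger values flag a potential fairness issue worth deeper review.
    """
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy data: binary model predictions and a sensitive attribute per example.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_difference(y_pred, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

In practice a check like this would be run over many slices of the evaluation data and combined with other metrics, since a single aggregate gap can hide disparities within subgroups.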
Implementing Our AI Principles
How artificial intelligence is built and deployed will significantly affect society. Learn more about how we are applying our AI Principles across Google research and products.
- Responsible Development of Bard
- Responsible Development of SGE
- Responsible Development of Lookout
- Responsible generative AI: 3 emerging practices
- Responsible AI: Looking back at 2022, and to the future
- Google Research, 2022 & beyond: Responsible AI
- An update on our work in responsible innovation
- Dynamic World
- How AI Principles helped guide the development of a Fitbit ML feature
- A Dataset for Evaluating Gender Bias in ML Translation Models
- Our approach to facial recognition
- An update on our progress in responsible AI innovation
- Portrait Light