AI algorithms and datasets can reflect, reinforce, or reduce unfair biases
We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
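One way such biases are surfaced in practice (a minimal sketch, not a method prescribed by these principles; the predictions, group labels, and data below are hypothetical) is to compare a model's positive-prediction rates across groups, sometimes called the demographic parity gap:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two groups (0.0 means perfectly equal rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical predictions (1 = favorable outcome) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A large gap does not by itself prove unfairness, which is part of why distinguishing fair from unfair biases is not simple; it is one signal to investigate.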
We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm
We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

Our AI technologies will be subject to appropriate human direction and control
We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.
We will incorporate our privacy principles in the development and use of our AI technologies
We will give the opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration
AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to advance AI development.
We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.
Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications
As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:
- Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
- Nature and uniqueness: whether we are making available technology that is unique or more generally available
- Scale: whether the use of this technology will have significant impact
- Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions
In addition to the above objectives, we will not design or deploy AI in the following application areas:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
As our experience in this space deepens, this list may evolve.
Building on our AI Principles, we have developed recommended practices for developers and researchers to use when designing AI systems. This includes using a human-centered design approach to address challenges throughout the AI responsibility lifecycle: understanding unique limitations of datasets and models through research; building fairness, interpretability, privacy and safety into the systems; conducting ongoing assessments and testing; and sharing helpful information, tools and educational resources.
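The "understanding unique limitations of datasets" step above can be sketched as a simple representation audit (a minimal illustration only; the attribute name, records, and 10% floor below are hypothetical, and real audits involve far more than counting):

```python
from collections import Counter

def representation_report(records, attribute):
    """Count how often each value of a sensitive attribute appears,
    flagging groups below a (hypothetical) 10% representation floor."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        value: {
            "count": n,
            "share": n / total,
            "underrepresented": n / total < 0.10,
        }
        for value, n in counts.items()
    }

# Hypothetical training records skewed toward one region.
data = [{"region": "north"}] * 19 + [{"region": "south"}] * 1
report = representation_report(data, "region")
# report["south"] has share 0.05 and is flagged as underrepresented.
```

Flagged groups would then feed into the later lifecycle steps, such as ongoing assessments and testing.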