We’re designing AI with communities that are often overlooked so that what we build works for everyone. First, we’re working hard to ensure our teams can collaborate, innovate, and prioritize fairness for all of our users throughout the design process. Second, where we see gaps, we’re developing deep partnerships to expand and diversify our datasets. And finally, we’re sharing what we learn so that it’s easier for anyone building AI to truly be helpful to everyone.
12-year-old Brendan uses Magnifier, a Pixel feature designed with the Royal National Institute of Blind People and the National Federation of the Blind.
To build AI that benefits everyone, the AI research and developer teams designing our technology need to reflect those who use it and feel comfortable doing their best work. That's why we're committed to prioritizing belonging across our workplace, so that everyone has the opportunity to contribute throughout the research and development process.
The Inclusion Champion program at Google aims to bring diverse perspectives to the product development process. It involves more than two thousand Googlers from various backgrounds actively participating in research, product feedback, and adversarial testing. Their feedback has significantly influenced the development of over 200 products and features: identifying biases, improving accessibility, and helping to shape research with external testers.
We’re continuously working to expand and enhance the program across all of our Employee Resource Group communities, with the aim of ensuring no one is overlooked as we develop models and products.
The Google Pixel team worked with Google’s Central Accessibility Team and a group of blind and low-vision Googlers to build Guided Frame, a feature that uses audio cues, high-contrast animations, and haptic feedback to help people who are blind or have low vision take selfies and group photos. Their feedback helped the team ensure they were solving for the real needs of the real people they were aiming to serve.
A Google team focused on responsible innovation visited the Fort Peck Tribes in Poplar, Montana for bidirectional relationship building and immersive learning. By understanding how the tribes’ key values can be applied to defining and designing societally beneficial technology, the team was able to create a more inclusive foundation for Google products and research.
Research and developer teams can learn how to operationalize our responsible practices and policies via a frequently updated AI Principles Hub, featuring current product policies and guidance, along with self-service content and training. Usage of this hub has more than doubled since 2023.
Just as there is no single "correct" model for all machine learning or AI tasks, there is no "correct" technique that ensures fairness in every situation or outcome. In practice, our AI researchers and developers use a variety of approaches to work towards fairness in our results, especially when working in the emerging area of generative AI.
Using our recommended practices, here are some examples of how our teams and partners approached fairness in building the Monk Skin Tone Scale, a more equitable scale for evaluating skin tone representation.
To develop a more representative skin tone scale, Dr. Ellis Monk — an Associate Professor of Sociology at Harvard University — leveraged his extensive research on skin tone and colorism in the US and Brazil, consultations with experts in social psychology and social categorization, and feedback from members of overlooked communities. Dr. Monk’s research resulted in the Monk Skin Tone (MST) Scale — a more inclusive 10-tone scale explicitly designed to represent a broader range of communities. The MST Scale provides a broader spectrum of skin tones that can be used to evaluate datasets and machine learning models for better representation.
The Google Skin Tone Team worked with TONL to curate the Monk Skin Tone Examples (MST-E) dataset, which includes examples of 19 people whose skin tones span the 10-point Monk Skin Tone (MST) scale. The dataset contains 1,515 images and 31 videos, which capture people in various poses and lighting conditions, as well as with and without accessories like masks or glasses. Because the ways that people classify skin tones can be subjective, Dr. Monk annotated the images of the people featured in the dataset himself. This dataset is designed to help practitioners teach human annotators how to produce consistent skin tone annotations across various conditions, like high and low lighting, which should in turn make AI-driven products work better for people of all skin tones.
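As a minimal sketch of how annotator consistency might be measured against expert reference labels, the hypothetical example below compares a trainee annotator's 10-point MST labels to reference annotations using mean absolute error. The image names, labels, and threshold are all illustrative assumptions, not the real MST-E data or workflow.

```python
# Hypothetical sketch: checking how consistently a human annotator
# assigns Monk Skin Tone (MST) values (1-10) relative to expert
# reference annotations. All data here is illustrative.
from statistics import mean

# Reference MST labels per image, as annotated by a single expert.
reference = {"img_01": 3, "img_02": 7, "img_03": 9, "img_04": 5}

# A trainee annotator's labels for the same images under varied lighting.
annotator = {"img_01": 4, "img_02": 7, "img_03": 8, "img_04": 5}

def mean_absolute_error(ref, labels):
    """Average absolute tone difference between reference and annotator."""
    return mean(abs(ref[k] - labels[k]) for k in ref)

mae = mean_absolute_error(reference, annotator)
print(f"Mean absolute tone error: {mae:.2f}")  # lower = more consistent
```

A real workflow would aggregate this across many annotators and lighting conditions, flagging annotators whose error exceeds an agreed tolerance for retraining.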
In order to validate the Monk Skin Tone (MST) Scale for ML fairness purposes, the Google Research team launched a study to understand how well participants across diverse communities felt their own skin tone was represented. Study participants found the MST Scale to be more inclusive than the commonly used Fitzpatrick Scale and better at representing their skin tone. ML fairness evaluations like this help prevent common human biases from being inadvertently reproduced by ML algorithms.
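One common way such a scale is put to work in ML fairness evaluations is to slice a model's accuracy by skin tone group and report the gap. The sketch below is a hedged, toy illustration of that idea; the samples, labels, and bucket boundaries are assumptions for demonstration, not Google's evaluation methodology.

```python
# Hypothetical sketch: slicing a model's accuracy by Monk Skin Tone
# (MST) group to surface performance gaps. Data is illustrative;
# tone values follow the 10-point MST scale.
from collections import defaultdict

# (mst_tone, ground_truth, model_prediction) for a toy evaluation set.
samples = [
    (1, "face", "face"), (2, "face", "face"),
    (5, "face", "face"), (5, "face", "no_face"),
    (9, "face", "no_face"), (10, "face", "face"),
]

def tone_bucket(tone):
    """Coarse MST grouping (an illustrative choice) for aggregate reporting."""
    if tone <= 3:
        return "tones 1-3"
    return "tones 4-7" if tone <= 7 else "tones 8-10"

hits, totals = defaultdict(int), defaultdict(int)
for tone, truth, pred in samples:
    bucket = tone_bucket(tone)
    totals[bucket] += 1
    hits[bucket] += (truth == pred)

accuracy = {b: hits[b] / totals[b] for b in totals}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy)
print(f"Accuracy gap across tone buckets: {gap:.2f}")
```

A large gap between the best- and worst-served tone buckets signals that the model underperforms for some skin tones and needs more representative data or targeted fixes.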
Achieving skin tone inclusion is complex and requires a collaborative effort. Our research team continues to work with Dr. Monk on refining the scale further. Users, ML fairness experts, and developers are all encouraged to share feedback on how we can improve the scale and develop our models responsibly, in line with Google’s AI Principles. Maintaining open lines of feedback is critical to ensuring that models are developed responsibly on an ongoing basis.
We expand the datasets used to train our models by working with experts and researchers with deep expertise and cultural context of the communities our products are designed for.
Our Language Inclusion initiative is an ambitious commitment to build an AI model that supports the world’s most spoken languages, breaking down barriers to help people connect and better understand the world around them. We're partnering with expert linguists and native speakers around the world to source representative speech data, including researchers and organizations across Africa and India, as well as working alongside local governments, NGOs, and academic institutions to ensure coverage across dialects and languages.
Google and Howard University's Project Elevate Black Voices partnership is a first-of-its-kind collaboration to build a high-quality African-American English (AAE) speech dataset. The project will allow Howard University to share the dataset with those looking to improve speech technology while establishing a framework for responsible data collection, ensuring the data benefits Black communities. Howard University will retain ownership of the dataset and licensing and serve as stewards for its responsible use.
Google has been retraining some of our earlier machine learning models with more inclusive datasets by partnering with the stock photography company, TONL, to source thousands of images of people from historically overlooked backgrounds. The aim was to expand the library to include photography of models across the gender spectrum, models with darker skin tones, and a diverse array of models with disabilities. The project has expanded to include work with Chronicon and RAMPD (Recording Artists and Music Professionals with Disabilities), to further source custom images centering individuals with chronic conditions and disabilities.
“At Google, our goal is to build AI that works for everyone – boldly, responsibly, and together. We know we won’t always get things right at first, but we’re committed to learning from our users, partners, and our own research, so we can build helpful products, share what we learn, and ensure everyone can benefit.”
JAMES MANYIKA | HE/HIM
This technology is advancing rapidly and it's imperative that we share it in ways that bring everyone along and avoid causing harm. This is a collective responsibility, and one we take seriously. We know we’re at an exciting inflection point in our journey and we're committed to learning, addressing missteps, and sharing so that we can all move forward in a way that benefits everyone.
How we’re helping everyone reach their full potential at work, in our products, and in society.
Tools to help developers design with everyone so that their products work for all.
Resources to help developers build technology that ensures people with disabilities can access the world their way.