Our approach to facial recognition
Face-related technologies can be useful for people and society, and it's important these technologies are developed thoughtfully and responsibly.
We’ve seen how useful the spectrum of face-related technologies can be for people and for society overall. They can make products safer and more secure; for example, face authentication can ensure that only the right person gets access to sensitive information meant just for them. They can also be used for tremendous social good: there are nonprofits using face recognition to fight the trafficking of minors.
But it’s important to develop these technologies the right way.
We share many of the widely-discussed concerns over the misuse of face recognition.
As we’ve said in our AI Principles and in our Privacy and Security Principles, it’s crucial that these technologies are developed and used responsibly. When it comes to face-related technology:
It needs to be fair, so it doesn’t reinforce or amplify existing biases, especially where this might impact underrepresented groups.
It should not be used in surveillance that violates internationally accepted norms.
And it needs to protect people’s privacy, providing the right level of transparency and control.
That’s why we’ve been so cautious about deploying face recognition in our products, or as services for others to use. We’ve done the work to provide technical recommendations on privacy, fairness, and more that others in the community can use and build on. In the process we’ve learned to watch out for sweeping generalizations or simplistic solutions. For example, the particular technologies matter a lot.
Face detection is not the same as face recognition: detection simply determines whether any face appears in an image, not whose face it is. Likewise, face clustering can determine which groups of faces look similar, without determining whose face is whose. The way these technologies are deployed also matters. Using them for authentication (to confirm that a person is who they claim to be) is not the same as using them for mass identification (to pick individuals out of a database of candidates, without necessarily obtaining their explicit consent), and there are different considerations for each of these contexts.
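To make the authentication-versus-identification distinction concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than a description of any production system: the face embeddings, the `verify` and `identify` helpers, and the 0.8 similarity threshold are all hypothetical, and real systems use learned embeddings with carefully calibrated thresholds.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (hypothetical features)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """1:1 authentication: does the probe match the single enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """1:N identification: search a whole database for the best match above threshold."""
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None if no template clears the threshold
```

The structural difference is what drives the distinct considerations: verification answers one yes/no question about one enrolled person, while identification searches across many people, so its error modes and consent implications are different.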
As we’ve developed advanced technologies, we’ve built a rigorous decision-making process to ensure that existing and future deployments align with our principles. You can read more about how we structure these discussions and how we evaluate new products and services against our principles before launch.
In thinking across the face-related products and applications we’re developing, we’ve identified five key dimensions for consideration: (1) intended use; (2) notice, consent, and control; (3) data collection, storage, and sharing; (4) model performance and design; and (5) user interface. We’ve also worked out questions to think through in each of these dimensions. For example, no system will get a perfect answer every time, so what level of quality (in precision, recall, latency, or another aspect) should be required before initial launch for a given application? A security feature to unlock your phone using face recognition should have a higher quality threshold than an art selfie app that matches people to art portraits. In the same vein, we know that no system will perform exactly the same for every person. What’s an acceptable distribution of performance across people? And how many different people are needed to test a given application before it’s launched?
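Questions like these only become actionable once they’re measured. As a hedged sketch of what such measurement might look like, the Python below computes precision and recall for binary match decisions and breaks them down by subgroup; the function names, the data, and the subgroup labels are hypothetical, and a real evaluation would use much larger, carefully sampled test sets.

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision and recall for binary match/non-match decisions."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return float(precision), float(recall)

def per_group_metrics(y_true, y_pred, groups):
    """Break the same metrics down by subgroup to surface uneven performance."""
    return {
        g: precision_recall(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }

# Hypothetical evaluation data: true identity matches, system decisions,
# and a purely illustrative subgroup label for each test example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])
print(per_group_metrics(y_true, y_pred, groups))
```

A launch review could then ask whether the gap between subgroups is acceptable for the intended use, rather than relying on a single aggregate number.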
While it is not reasonable to prescribe universal requirements for criteria like accuracy or fairness (different applications and use cases will require different thresholds, and technology and societal norms and expectations are always evolving), there are many considerations that can help teams set clear objectives ahead of any given launch. These include comparing the proposed feature against the performance of the best existing products or technologies, running user studies to understand and measure against expectations, thinking through the impact of false positives and false negatives, and comparing against human levels of accuracy and variation.
It’s important to note that no one company, country, or community has all the answers; on the contrary, it’s crucial for policy stakeholders worldwide to engage in these conversations.
In addition to careful development of our own products, we also support the development of solutions-focused regulatory frameworks that recognize the nuances and implications of these advanced technologies beyond one industry or point of view, and that encourage innovation so products can become more useful and improve privacy, fairness, and security.
We work to ensure that new technologies incorporate considerations of user privacy and, where possible, enhance it. As just one example, in 2016 we invented Federated Learning, a new way to do machine learning (that is, having software learn and improve based on examples) on a device like a smartphone. Sensitive data stays on the device, while the software still adapts and gets more useful for everyone with use.
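To give a flavor of how that works, here is a toy sketch of the federated averaging idea, assuming a simple linear model trained with NumPy. It is a minimal illustration of the pattern (local training on each device, with only model updates sent back for weighted averaging), not Google’s actual implementation; the learning rate, epoch count, and model are all illustrative.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's training pass on its own data; the raw data never leaves it."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient on local data
        w -= lr * grad
    return w

def federated_averaging(weights, clients):
    """Server step: combine each device's updated model, weighted by its data size."""
    total = sum(len(y) for _, y in clients)
    return sum(local_update(weights, X, y) * (len(y) / total) for X, y in clients)

# One federated round over two devices' private datasets (synthetic here).
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
w = federated_averaging(np.zeros(3), clients)
```

The key property is that only the updated model weights, not the underlying examples, ever cross the network.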
We think this careful, solutions-focused approach is the right one, and we’ve gotten good support from key external stakeholders. We’ve spoken with a diverse array of policymakers, academics, and civil society groups around the world who’ve given us useful perspectives and input on this topic.
We’re going to keep being thoughtful on these issues, ensuring that the technology we develop is helpful to individuals and beneficial to society.