
Software engineering and programming languages

Software engineering and programming language researchers at Google study all aspects of the software development process, from the engineers who make software to the languages and tools that they use.

About the team

We are a collection of teams from across the company who study the problems software engineers face and invent new technologies to solve them. We take a variety of approaches, including empirical methods, interviews, surveys, innovative tools, formal models, predictive machine-learning models, data science, experiments, and mixed-methods research. Because our engineers work in the largest code repository in the world, our solutions must work at scale, across a global team of engineers and more than 2 billion lines of code.

We aim to make an impact internally on Google engineers and externally on the larger ecosystem of software engineers around the world.

Team focus summaries

Developer Tools

Google provides its engineers with cutting-edge developer tools that operate on a codebase with billions of lines of code. These tools are designed to give engineers a consistent view of the codebase so they can navigate and edit any project. We research and create new, unique developer tools that let us reap the benefits of such a large codebase while retaining fast development velocity.

Developer Inclusion and Diversity

We aim to understand diversity and inclusion challenges facing software developers and evaluate interventions that move the needle on creating an inclusive and equitable culture for all.

Developer Productivity

We use both qualitative and quantitative methods to study how to make engineers more productive. Google uses the results of these studies to improve both our internal developer tools and processes and our external offerings for developers on GCP and Android.

Program Analysis and Refactoring

We build static and dynamic analysis tools that find and prevent serious bugs from manifesting in both Google’s and third-party code. We also leverage this large-scale analysis infrastructure to refactor Google’s code at scale.
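To make the flavor of such checks concrete, here is a minimal, self-contained sketch (not Google's actual tooling, which targets languages like Java and C++) that uses Python's ast module to flag mutable default arguments, a classic bug pattern:

```python
import ast

# A toy static analysis check: flag mutable default arguments,
# a well-known source of surprising behavior in Python.
# (Illustrative only; not Google's analysis infrastructure.)

class MutableDefaultChecker(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_FunctionDef(self, node):
        for default in node.args.defaults:
            # Literal lists, dicts, and sets are shared across calls.
            if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                self.findings.append(
                    f"line {default.lineno}: mutable default in '{node.name}'")
        self.generic_visit(node)

def check(source: str) -> list[str]:
    checker = MutableDefaultChecker()
    checker.visit(ast.parse(source))
    return checker.findings

if __name__ == "__main__":
    print(check("def add(item, items=[]):\n    items.append(item)\n"))
```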

Machine Learning for Code

We apply deep learning to Google’s large, well-curated codebase to automatically write code and repair bugs.

Programming Language Design and Implementation

We design, evaluate, and implement new features for popular programming languages like Java, C++, and Go, working through each language's standards process.

Automated Software Testing and Continuous Integration

We design, implement, and evaluate tools and frameworks that automate the testing process and integrate tests with the Google-wide continuous integration infrastructure.

Featured publications

Enabling the Study of Software Development Behavior with Cross-Tool Logs
Ben Holtz
Edward K. Smith
Andrea Marie Knight Dolan
Elizabeth Kammer
Jillian Dicker
Lan Cheng
IEEE Software, Special Issue on Behavioral Science of Software Engineering (2020)
Understanding developers’ day-to-day behavior can help answer important research questions, but capturing that behavior at scale can be challenging, particularly when developers use many tools in concert to accomplish their tasks. In this paper, we describe our experience creating a system that integrates log data from dozens of development tools at Google, including tools that developers use to email, schedule meetings, ask and answer technical questions, find code, build and test, and review code. The contribution of this article is a technical description of the system, a validation of it, and a demonstration of its usefulness.
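The paper describes the system's architecture rather than a public API. As a rough sketch of the central idea, the code below maps records from two hypothetical tool logs into one shared event schema so activity can be analyzed across tools; all field and tool names are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Sketch of the core idea behind cross-tool logging: map each tool's
# native log record into one shared event schema so developer activity
# can be analyzed across tools. Tool names and fields are hypothetical.

@dataclass
class DevEvent:
    timestamp: datetime
    developer: str    # pseudonymous ID, not a raw username
    tool: str         # e.g. "code_review", "build"
    action: str       # tool-specific action, normalized to a small vocabulary

def from_review_log(record: dict) -> DevEvent:
    return DevEvent(
        timestamp=datetime.fromtimestamp(record["ts"], tz=timezone.utc),
        developer=record["reviewer_id"],
        tool="code_review",
        action="comment" if record["kind"] == "COMMENT" else "approve",
    )

def from_build_log(record: dict) -> DevEvent:
    return DevEvent(
        timestamp=datetime.fromtimestamp(record["start_ts"], tz=timezone.utc),
        developer=record["user_id"],
        tool="build",
        action="build_pass" if record["ok"] else "build_fail",
    )

# A merged, time-ordered stream supports questions no single tool's log
# can answer, e.g. "what do developers do right after a failed build?"
events = sorted(
    [from_build_log({"start_ts": 1e9, "user_id": "u1", "ok": False}),
     from_review_log({"ts": 1e9 + 60, "reviewer_id": "u1", "kind": "COMMENT"})],
    key=lambda e: e.timestamp,
)
```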
What Predicts Software Developers’ Productivity?
David C. Shepherd
Michael Phillips
Andrea Knight Dolan
Edward K. Smith
IEEE Transactions on Software Engineering (2019)
Organizations have a variety of options to help their software developers become their most productive selves, from modifying office layouts, to investing in better tools, to cleaning up the source code. But which options will have the biggest impact? Drawing from the literature in software engineering and industrial/organizational psychology to identify factors that correlate with productivity, we designed a survey that asked 622 developers across 3 companies about these productivity factors and about self-rated productivity. Our results suggest that the factors that most strongly correlate with self-rated productivity were non-technical factors, such as job enthusiasm, peer support for new ideas, and receiving useful feedback about job performance. Compared to other knowledge workers, our results also suggest that software developers’ self-rated productivity is more strongly related to task variety and ability to work remotely.
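To illustrate the style of analysis (not the paper's actual data or code), here is a small sketch that correlates invented Likert-scale factor ratings with self-rated productivity:

```python
from statistics import correlation  # Python 3.10+

# Toy illustration of the paper's style of analysis: correlate each
# survey factor with self-rated productivity. The data here is invented;
# the study used 622 real responses and validated scales.
responses = {
    "job_enthusiasm": [5, 4, 2, 5, 3, 4],
    "peer_support":   [4, 4, 1, 5, 2, 4],
    "office_noise":   [2, 3, 4, 1, 5, 2],
}
self_rated_productivity = [5, 4, 2, 5, 2, 4]

for factor, ratings in responses.items():
    r = correlation(ratings, self_rated_productivity)  # Pearson's r
    print(f"{factor}: r = {r:+.2f}")
```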
FUDGE: Fuzz Driver Generation at Scale
Yaohui Chen
Markus Kusano
Caroline Lemieux
Wei Wang
Proceedings of the 27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2019)
At Google we have found tens of thousands of security and robustness bugs by fuzzing C and C++ libraries. To fuzz a library, a fuzzer requires a fuzz driver—which exercises some library code—to which it can pass inputs. Unfortunately, writing fuzz drivers remains a primarily manual exercise, a major hindrance to the widespread adoption of fuzzing. In this paper, we address this major hindrance by introducing the Fudge system for automated fuzz driver generation. Fudge automatically generates fuzz driver candidates for libraries based on existing client code. We have used Fudge to generate thousands of new drivers for a wide variety of libraries. Each generated driver includes a synthesized C/C++ program and a corresponding build script, and is automatically analyzed for quality. Developers have integrated over 200 of these generated drivers into continuous fuzzing services and have committed to address reported security bugs. Further, several of these fuzz drivers have been upstreamed to open source projects and integrated into the OSS-Fuzz fuzzing infrastructure. Running these fuzz drivers has resulted in over 150 bug fixes, including the elimination of numerous exploitable security vulnerabilities.
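FUDGE synthesizes C/C++ drivers, which are not reproduced here. To show what a fuzz driver looks like in compact form, below is a minimal driver written against Atheris, Google's open-source Python fuzzing engine; parse_header is a made-up stand-in for the library code under test:

```python
import sys
import atheris  # pip install atheris; Google's Python fuzzing engine

# Minimal fuzz driver in the same spirit as the C/C++ drivers FUDGE
# generates: a small entry point that feeds fuzzer-chosen bytes into
# library code. parse_header is a stand-in for the library under test.

def parse_header(data: bytes) -> None:
    if len(data) >= 4 and data[:2] == b"HD":
        length = int.from_bytes(data[2:4], "big")
        _ = data[4:4 + length]  # a real parser might over-read here

def TestOneInput(data: bytes) -> None:
    parse_header(data)  # crashes/exceptions are reported as findings

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```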
DeepDelta: Learning to Repair Compilation Errors
Ali Mesbah
Andrew Rice
Nick Glorioso
Eddie Aftandilian
Proceedings of the 27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2019)
Programmers spend a substantial amount of time manually repairing code that does not compile. We observe that the repairs for any particular error class typically follow a pattern and are highly mechanical. We propose a novel approach that automatically learns these patterns with a deep neural network and suggests program repairs for the most costly classes of build-time compilation failures. We describe how we collect all build errors and the human-authored, in-progress code changes that cause those failing builds to transition to successful builds at Google. We generate an AST diff from the textual code changes and transform it into a domain-specific language called Delta that encodes the change that must be made to make the code compile. We then feed the compiler diagnostic information (as source) and the Delta changes that resolved the diagnostic (as target) into a Neural Machine Translation network for training. For the two most prevalent and costly classes of Java compilation errors, namely missing symbols and mismatched method signatures, our system, DeepDelta, generates the correct repair changes for 19,314 out of 38,788 (50%) of unseen compilation errors. The correct changes are in the top three suggested fixes 86% of the time on average.
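The Delta DSL itself is not spelled out here, so the sketch below only illustrates the shape of a (diagnostic, repair) training pair; the token format and names are invented for illustration:

```python
# Hedged sketch of DeepDelta-style training data: each example pairs a
# compiler diagnostic (source) with an AST-level change that fixed it
# (target). The exact token encodings below are invented for illustration.

def make_training_pair(diagnostic: str, file_path: str) -> tuple[str, str]:
    source = f"JAVAC_ERROR cannot_find_symbol {diagnostic} PATH {file_path}"
    # Hypothetical Delta-style tokens: insert an import declaration node.
    target = "INSERT ImportDeclaration VALUE com.example.util.Lists"
    return source, target

src, tgt = make_training_pair("symbol: class Lists", "a/b/Service.java")
# Pairs like (src, tgt) would be fed to a neural machine translation
# model, which learns to propose `tgt` given `src`.
```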
State of Mutation Testing at Google
Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP 2018)
Mutation testing assesses test suite efficacy by inserting small faults into programs and measuring the ability of the test suite to detect them. It is widely considered the strongest test criterion in terms of finding the most faults and it subsumes a number of other coverage criteria. Traditional mutation analysis is computationally prohibitive, which hinders its adoption as an industry standard. In order to alleviate the computational issues, we present a diff-based probabilistic approach to mutation analysis that drastically reduces the number of mutants by omitting lines of code without statement coverage and lines that are determined to be uninteresting; we dub these arid lines. Furthermore, by reducing the number of mutants and carefully selecting only the most interesting ones we make it easier for humans to understand and evaluate the result of mutation analysis. We propose a heuristic for judging whether a node is arid or not, conditioned on the programming language. We focus on a code-review based approach and consider the effects of surfacing mutation results on developer attention. The described system is used by 6,000 engineers at Google on all code changes they author or review, affecting in total more than 14,000 code authors as part of the mandatory code review process. The system processes about 30% of all diffs across Google that have statement coverage calculated.
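As a simplified illustration of the selection step (the real heuristics are per-language and far more refined), the sketch below picks mutation sites from a diff, skipping uncovered and arid lines:

```python
import random

# Sketch of diff-based mutant selection as described in the abstract:
# mutate only covered, non-arid lines touched by the diff, and surface
# at most a few mutants per change. The heuristics here are simplified.

ARID_MARKERS = ("log.", "LOG.", "print(", "assert ")  # toy arid heuristic

def is_arid(line: str) -> bool:
    """A crude stand-in for the paper's per-language arid-node heuristic."""
    return any(marker in line for marker in ARID_MARKERS)

def select_mutation_sites(diff_lines, covered_line_numbers, limit=3):
    sites = [
        lineno
        for lineno, text in diff_lines         # lines changed in the diff
        if lineno in covered_line_numbers      # must have statement coverage
        and not is_arid(text)                  # skip uninteresting lines
    ]
    return random.sample(sites, min(limit, len(sites)))

diff = [(10, "total += price"), (11, "log.info('added')"), (12, "count -= 1")]
print(select_mutation_sites(diff, covered_line_numbers={10, 11, 12}))
```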
Code review is a powerful technique to ensure high quality software and spread knowledge of best coding practices between engineers. Unfortunately, code reviewers may have biases about authors of the code they are reviewing, which can lead to inequitable experiences and outcomes. In this paper, we describe a field experiment with anonymous author code review, where we withheld author identity information during 5217 code reviews from 300 professional software engineers at one company. Our results suggest that during anonymous author code review, reviewers can frequently guess authors’ identities; that it reduces focus on reviewer-author power dynamics; and that the practice poses a barrier to offline, high-bandwidth conversations. Based on our findings, we recommend that those who choose to implement anonymous author code review should reveal the time zone of the author by default, have a break-the-glass option for revealing author identity, and reveal author identity directly after the review.
Who Broke the Build? Automatically Identifying Changes That Induce Test Failures In Continuous Integration at Google Scale
Proceedings of the 39th International Conference on Software Engineering: Software Engineering in Practice Track, IEEE Press, Buenos Aires, Argentina (2017), pp. 113-122
Quickly identifying and fixing code changes that introduce regressions is critical to maintaining momentum in software development, especially in very large scale software repositories with rapid development cycles, such as at Google. Identifying and fixing such regressions is one of the most expensive, tedious, and time consuming tasks in the software development life-cycle. Therefore, there is a high demand for automated techniques that can help developers identify such changes while minimizing manual human intervention. Various techniques have recently been proposed to identify such code changes. However, these techniques have shortcomings that make them unsuitable for rapid development cycles like Google's. In this paper, we propose a novel algorithm to identify code changes that introduce regressions, and discuss case studies performed at Google on 140 projects. Based on our case studies, our algorithm automatically identifies the change that introduced the regression in the top-5 among thousands of candidates 82% of the time, and provides considerable savings on the manual work developers need to perform.
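The paper's algorithm is not reproduced here; a standard building block for this kind of culprit finding is bisection over the window of suspect changes, sketched below under the simplifying assumption of a deterministic test:

```python
# Sketch of a classic building block for culprit finding: binary search
# over the window of changes between the last passing and first failing
# run. Assumes a deterministic test; the paper's algorithm additionally
# copes with flakiness and thousands of candidate changes.

def find_culprit(changes, test_passes_at):
    """changes: chronologically ordered IDs; test_passes_at(i) runs the
    test as of changes[i]. Returns the first change where it fails."""
    lo, hi = 0, len(changes) - 1   # invariant: the test fails at hi
    while lo < hi:
        mid = (lo + hi) // 2
        if test_passes_at(mid):
            lo = mid + 1           # breakage is after mid
        else:
            hi = mid               # mid already fails
    return changes[lo]

changes = ["cl/101", "cl/102", "cl/103", "cl/104"]
print(find_culprit(changes, test_passes_at=lambda i: i < 2))  # -> cl/103
```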
Lessons from Building Static Analysis Tools at Google
Edward Aftandilian
Alex Eagle
Liam Miller-Cushon
Communications of the ACM (CACM), vol. 61 Issue 4 (2018), pp. 58-66
In this article, we describe how we have applied the lessons from Google’s previous experience with FindBugs Java analysis, as well as lessons from the academic literature, to build a successful static analysis infrastructure that is used daily by the majority of engineers at Google. Our tooling detects thousands of issues per day that are fixed by engineers, by their own choice, before the problematic code is checked into the codebase.
Scalable Build Service System with Smart Scheduling Service
Ahmed Mustafa Ali Gad
Daniel Lucas Rall
Vijay Sagar Gullapalli
Xin Huang
International Symposium on Software Testing and Analysis (ISSTA 2020)
Build automation is critical for developers to check if their code compiles, passes all tests, and is able to deploy to the server. Many companies adopt Continuous Integration (CI) services to make sure that the code changes from multiple developers can be safely merged at the head of the project. Internally, CI triggers software builds to make sure that the new code change does not break the compilation or the tests. For any large company which has a monolithic code repository and thousands of developers, it is hard to make sure that all code changes are safe to submit in a timely manner, because each code change may involve multiple builds, and there could be millions of builds to run each day to guarantee developers’ daily productivity. Company C is one of those large companies that need a scalable build service to support developers’ work. More than 100,000 code changes are submitted to our repository on average each day, including changes from both human users and automated tools. More than 15 million builds are executed on average each day. In this experience paper, we first give an overview of our scalable build service architecture. Then, we discuss in more detail how we make build scheduling decisions. Finally, we discuss some experience in the scalability of the build service system and the performance of the build scheduling service.
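As a toy illustration of a scheduling decision (the weights, fields, and policy below are invented, not the paper's), consider ordering pending builds with a priority heap:

```python
import heapq

# Toy sketch of a build scheduling decision: order pending builds by a
# priority that favors human-authored changes and short expected builds.
# The fields and policy are invented; the paper describes the real system.

def priority(build: dict) -> tuple:
    human = 0 if build["author_is_human"] else 1    # humans first
    return (human, build["expected_seconds"])       # then shortest first

def schedule(pending: list[dict], workers: int) -> list[str]:
    heap = [(priority(b), b["id"]) for b in pending]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(workers, len(heap)))]

pending = [
    {"id": "b1", "author_is_human": False, "expected_seconds": 30},
    {"id": "b2", "author_is_human": True,  "expected_seconds": 600},
    {"id": "b3", "author_is_human": True,  "expected_seconds": 45},
]
print(schedule(pending, workers=2))  # -> ['b3', 'b2']
```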
Command line interfaces (CLIs) remain a popular tool among developers and system administrators. Since CLIs are text-based interfaces, they are sometimes considered accessible alternatives to predominantly visual developer tools like IDEs. However, there is no systematic evaluation of accessibility of CLIs in the literature. In this paper, we describe two studies with 12 developers on their experience of using CLIs with screen readers. Our findings show that CLIs have their own set of accessibility issues, the most important being that CLIs are unstructured text interfaces. Based on our findings, we provide a set of recommendations for improving accessibility of command line interfaces.
