
Celal Ziftci

Celal Ziftci is a Software Engineer at Google. He received his BS from Bilkent University in Turkey, working on Natural Language Processing. He received his MS from University of Illinois Urbana-Champaign (UIUC), working on Computer Vision and Machine Learning. Finally, he received his PhD from University of California San Diego (UCSD), working on software testing and maintenance. His research interests are software testing, software analytics, program analysis, and applications of data mining and machine learning in these fields.
Authored Publications
    Improving Design Reviews at Google
    Ben Greenberg
    International Conference on Automated Software Engineering, IEEE/ACM (2023)
    Design review is an important initial phase of the software development life-cycle where stakeholders gain and discuss early insights into the design’s viability, discover potentially costly mistakes, and identify inconsistencies and inadequacies. For improved development velocity, it is important that design owners get their designs approved as quickly as possible. In this paper, we discuss how engineering design reviews are typically conducted at Google, and propose a novel, structured, automated solution to improve design review velocity. Based on data collected on 141,652 approved documents authored by 41,030 users over four years, we show that our proposed solution decreases median time-to-approval by 25%, and provides further gains when used consistently. We also provide qualitative data to demonstrate our solution’s success, discuss factors that impact design review latency, propose strategies to tackle them, and share lessons learned from the usage of our solution.
    De-Flake Your Tests: Automatically Locating Root Causes of Flaky Tests in Code At Google
    Diego Cavalcanti
    International Conference on Software Maintenance and Evolution (ICSME), IEEE (2020)
    Regression testing is a critical part of software development and maintenance. It ensures that modifications to existing software do not break existing behavior and functionality. One of the key assumptions about regression tests is that their results are deterministic: when executed without any modifications with the same configuration, either they always fail or they always pass. In practice, however, there exist tests that are non-deterministic, called flaky tests. Flaky tests cause the results of test runs to be unreliable, and they disrupt the software development workflow. In this paper, we present a novel technique to automatically identify the locations of the root causes of flaky tests on the code level to help developers debug and fix them. We study the technique on flaky tests across 428 projects at Google. Based on our case studies, the technique helps identify the location of the root causes of flakiness with 82% accuracy. Furthermore, our studies show that integration into the appropriate developer workflows, simplicity of debugging aids, and fully automated fixes are crucial and preferred components for adoption and usability of flakiness debugging and fixing tools.
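The abstract's core observation, that a test is flaky when identical runs produce both passing and failing results, can be illustrated with a minimal sketch. This is not the paper's root-cause localization technique; it only shows the detection criterion. `run_test`, `unstable_test`, and `stable_test` are hypothetical stand-ins for real test invocations.

```python
"""Minimal sketch: detecting a flaky test by repeated execution.

Hedged illustration only — the paper goes further and localizes
the root cause of flakiness in code, which this does not attempt.
"""
import random


def is_flaky(run_test, attempts=10):
    """A test is flaky if repeated runs under identical conditions
    produce both passing (True) and failing (False) results."""
    results = {run_test() for _ in range(attempts)}
    return len(results) > 1  # saw both outcomes


# A deliberately non-deterministic "test" for illustration.
def unstable_test():
    return random.random() > 0.3  # passes roughly 70% of the time


def stable_test():
    return True
```

Note that rerunning can only provide statistical evidence: a flaky test may happen to pass (or fail) every time within a bounded number of attempts, which is one reason automated root-cause localization is valuable.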
    Who Broke the Build? Automatically Identifying Changes That Induce Test Failures In Continuous Integration at Google Scale
    Proceedings of the 39th International Conference on Software Engineering: Software Engineering in Practice Track, IEEE Press, Buenos Aires, Argentina (2017), pp. 113-122
    Quickly identifying and fixing code changes that introduce regressions is critical to keep the momentum on software development, especially in very large scale software repositories with rapid development cycles, such as at Google. Identifying and fixing such regressions is one of the most expensive, tedious, and time-consuming tasks in the software development life-cycle. Therefore, there is a high demand for automated techniques that can help developers identify such changes while minimizing manual human intervention. Various techniques have recently been proposed to identify such code changes. However, these techniques have shortcomings that make them unsuitable for rapid development cycles such as Google's. In this paper, we propose a novel algorithm to identify code changes that introduce regressions, and discuss case studies performed at Google on 140 projects. Based on our case studies, our algorithm automatically identifies the change that introduced the regression in the top-5 among thousands of candidates 82% of the time, and provides considerable savings on the manual work developers need to perform.
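The classic baseline for this problem is bisection over a linear change history, in the spirit of `git bisect`. The sketch below is not the paper's algorithm (which must cope with flaky signals and Google-scale histories); it is a minimal illustration under strong assumptions: a deterministic test that passes before the culprit and fails at and after it, with the last change known to fail. `test_passes_at` is a hypothetical callback, e.g. a CI build-and-test invocation.

```python
"""Minimal bisection sketch for locating a culprit change.

Assumes a linear history and a deterministic test: passes for all
changes before the culprit, fails for the culprit and everything
after. Not the paper's algorithm, which handles flakiness and scale.
"""


def find_culprit(changes, test_passes_at):
    """Binary-search for the first change index where the test fails.

    `test_passes_at(i)` returns True iff the build at changes[i]
    passes. Requires that changes[-1] fails.
    """
    lo, hi = 0, len(changes) - 1  # invariant: first failure lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if test_passes_at(mid):
            lo = mid + 1  # mid passes, so the culprit is after mid
        else:
            hi = mid      # mid fails, so the culprit is at or before mid
    return changes[lo]
```

Bisection needs only O(log n) test executions over n candidate changes, which is why it is the natural starting point; the difficulty at scale comes from non-deterministic tests and interleaved, concurrent changes that break the clean pass/fail boundary assumed here.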