John C. Platt

John Platt is best known for his work in machine learning: the SMO algorithm for support vector machines and calibrating the output of models. He was an early adopter of convolutional neural networks in the 1990s. However, John has worked in many different fields: data systems, computational geometry, object recognition, media UIs, analog computation, handwriting recognition, and applied math. He has discovered two asteroids and won a Technical Academy Award in 2006 for his work in computer graphics. John currently leads the Applied Science branch of Google Research, which works at the intersection of computer science and the physical and biological sciences. His latest goal is to help solve climate change. Previously, he was Deputy Director of the Microsoft Research Redmond lab and Director of Research at Synaptics.
Authored Publications
    A scalable system to measure contrail formation on a per-flight basis
    Erica Brand
    Sebastian Eastham
    Carl Elkin
    Thomas Dean
    Zebediah Engberg
    Ulrike Hager
    Joe Ng
    Dinesh Sanekommu
    Tharun Sankar
    Marc Shapiro
    Environmental Research Communications (2024)
    Abstract: In this work we describe a scalable, automated system to determine from satellite data whether a given flight has made a persistent contrail. The system works by comparing flight segments to contrails detected by a computer vision algorithm running on images from the GOES-16 Advanced Baseline Imager. We develop a 'flight matching' algorithm and use it to label each flight segment as a 'match' or 'non-match'. We perform this analysis on 1.6 million flight segments and compare these labels to existing contrail prediction methods based on weather forecast data. The result is an analysis of which flights make persistent contrails that is several orders of magnitude larger than any previous work. We find that current contrail prediction models fail to correctly predict whether we will match a contrail in many cases.
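The following is a minimal, hypothetical sketch of the labeling idea described above: each flight segment is marked 'match' or 'non-match' by testing spatio-temporal proximity to contrail detections. The data structures, thresholds, and simple distance test are illustrative assumptions; the paper's actual flight-matching algorithm (which must also handle wind advection and detection geometry) is not reproduced here.

```python
# Illustrative sketch only, not the paper's algorithm: label a flight segment
# 'match' or 'non-match' by checking whether any detected contrail is close
# enough in space and time. Records and thresholds are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class FlightSegment:
    lat: float   # degrees
    lon: float   # degrees
    time: float  # seconds since epoch

@dataclass
class Contrail:
    lat: float
    lon: float
    time: float  # observation time of the satellite image

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def label_segment(segment, contrails, max_km=20.0, max_dt_s=3600.0):
    """Return 'match' if any contrail detection lies within the space/time window."""
    for c in contrails:
        if abs(c.time - segment.time) <= max_dt_s and \
           haversine_km(segment.lat, segment.lon, c.lat, c.lon) <= max_km:
            return "match"
    return "non-match"
```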
    Abstract: Practical quantum computing will require error rates that are well below what is achievable with physical qubits. Quantum error correction [1, 2] offers a path to algorithmically-relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low in order for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per round floor set by a single high-energy event (1.6 × 10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, and illuminate the path to reaching the logical error rates required for computation.
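As a rough illustration of how a per-cycle logical error rate relates to error probability accumulated over many cycles, the sketch below assumes a simple geometric fidelity-decay model, F(n) = (1 - 2ε)ⁿ. This is a back-of-the-envelope convention, not the paper's analysis pipeline; the per-cycle numbers are taken directly from the abstract.

```python
# Back-of-the-envelope sketch (not the paper's analysis): under an assumed model in
# which logical fidelity decays geometrically, F(n) = (1 - 2*eps)**n, convert a
# per-cycle logical error eps into an error probability after n cycles.
def error_after_n_cycles(eps_per_cycle: float, n: int) -> float:
    fidelity = (1.0 - 2.0 * eps_per_cycle) ** n
    return 0.5 * (1.0 - fidelity)

for label, eps in [("distance-5", 0.02914), ("distance-3 ensemble", 0.03028)]:
    print(f"{label}: ~{error_after_n_cycles(eps, 25):.1%} logical error over 25 cycles")
```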
    Abstract: The majority of IPCC scenarios call for active CO2 removal (CDR) to remain below 2 °C of warming. On geological timescales, ocean uptake regulates atmospheric CO2 concentration, with two homeostats driving sequestration: dissolution of deep ocean calcite deposits and terrestrial weathering of silicate rocks, acting on 1 ka to 100 ka timescales. Many current ocean-based CDR proposals effectively act to accelerate the latter. Here we present a method which relies purely on the redistribution and dilution of acidity from a thin layer of the surface ocean to a thicker layer of deep ocean, with the aim of accelerating the former carbonate homeostasis. This downward transport could be seen as analogous to the action of the natural biological carbon pump. The method offers advantages over other ocean CDR methods and direct air capture (DAC) approaches: the conveyance of mass is minimized (acidity is pumped in situ to depth), and expensive mining, grinding, and distribution of alkaline material is eliminated. No dilute substance needs to be concentrated, avoiding the Sherwood's Rule costs typically encountered in DAC. Finally, no terrestrial material is added to the ocean, avoiding significant alteration of seawater ion concentrations and issues with heavy-metal toxicity encountered in mineral-based alkalinity schemes. The artificial transport of acidity accelerates the natural deep ocean invasion and subsequent compensation by calcium carbonate. It is estimated that the total compensation capacity of the ocean is on the order of 1500 GtC. We show through simulation that pumping of ocean acidity could remove up to 150 GtC from the atmosphere by 2100 without excessive increase of local ocean pH. For an acidity release below 2000 m, the relaxation half time of CO2 return to the atmosphere was found to be ~2500 years (~1000 years without accounting for carbonate dissolution), with ~85% retained for at least 300 years. The uptake efficiency and residence time were found to vary with the location of acidity pumping, and optimal areas were calculated. Requiring only local resources (ocean water and energy), this method could be uniquely suited to utilizing otherwise-stranded open-ocean energy sources at scale. We examine technological pathways that could be used to implement it and present a brief techno-economic estimate of $130-250/tCO2 at current prices and as low as $86/tCO2 under modest learning-curve assumptions.
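For scale, the quoted removal potential and cost range imply the following rough total-program arithmetic. This is my own back-of-the-envelope calculation using only figures from the abstract plus the standard C-to-CO2 mass ratio of 44/12; it is not taken from the paper's techno-economic model.

```python
# Rough arithmetic sketch using numbers quoted in the abstract; not from the paper.
# Converts the 150 GtC removal figure to GtCO2 (mass ratio 44/12) and multiplies by
# the quoted $/tCO2 values to bound the total program cost.
GT_C_REMOVED = 150.0              # GtC removed from the atmosphere by 2100
C_TO_CO2 = 44.0 / 12.0            # molecular-weight ratio CO2 / C
gt_co2 = GT_C_REMOVED * C_TO_CO2  # ~550 GtCO2

for usd_per_t in (86, 130, 250):  # $/tCO2 figures quoted in the abstract
    total_trillion_usd = gt_co2 * 1e9 * usd_per_t / 1e12
    print(f"${usd_per_t}/tCO2 -> ~${total_trillion_usd:.0f} trillion total")
```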
    Abstract: Contrails (condensation trails) are the ice clouds that trail behind aircraft as they fly through cold and moist regions of the atmosphere. Avoiding these regions could potentially be an inexpensive way to reduce over half of aviation's impact on global warming. Development and evaluation of these avoidance strategies greatly benefits from the ability to detect contrails on satellite imagery. Since little to no public data is available to develop such contrail detectors, we construct and release a dataset of several thousand Landsat-8 scenes with pixel-level annotations of contrails. The dataset will continue to grow, but currently contains 3431 scenes (of which 47% have at least one contrail) representing 800+ person-hours of labeling time.
    Abstract: We determined the time-dependent geometry, including high-frequency oscillations, of the plasma density in TAE's C-2W experiment. This was done as a joint Bayesian reconstruction from a 14-chord FIR interferometer in the midplane, 32 Mirnov probes at the periphery, and 8 shine-through detectors at the targets of the neutral beams. For each point in time we recovered, with credibility intervals: the radial density profile of the plasma; the bulk plasma displacement; and the amplitudes, frequencies, and phases of the azimuthal modes n=1 to n=4. Also reconstructed were the radial profiles of the deformations associated with each of the azimuthal modes. Bayesian posterior sampling was done via Hamiltonian Monte Carlo with custom preconditioning. This gave us a comprehensive uncertainty quantification of the reconstructed values, including correlations and some understanding of multimodal posteriors. This method was applied to thousands of experimental shots on C-2W, producing a rich data set for analysis of plasma performance.
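The sampler named in the abstract is Hamiltonian Monte Carlo with custom preconditioning. The snippet below is a generic, self-contained illustration of a single preconditioned HMC transition (a diagonal mass matrix acting as the preconditioner); it is not the C-2W reconstruction code, and the log-posterior and its gradient are assumed to be supplied by the caller.

```python
# Generic illustration of Hamiltonian Monte Carlo with a diagonal preconditioner
# (mass matrix). This is NOT the C-2W reconstruction code, only a sketch of the
# sampling technique the abstract refers to.
import numpy as np

def hmc_step(theta, log_post, grad_log_post, step_size=0.01, n_leapfrog=20,
             mass_diag=None, rng=np.random.default_rng()):
    """One HMC transition targeting log_post; mass_diag acts as a preconditioner."""
    if mass_diag is None:
        mass_diag = np.ones_like(theta)
    p = rng.normal(size=theta.shape) * np.sqrt(mass_diag)  # sample momentum
    theta_new, p_new = theta.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step_size * grad_log_post(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p_new / mass_diag
        p_new += step_size * grad_log_post(theta_new)
    theta_new += step_size * p_new / mass_diag
    p_new += 0.5 * step_size * grad_log_post(theta_new)

    # Metropolis accept/reject corrects the integration error.
    current_h = -log_post(theta) + 0.5 * np.sum(p ** 2 / mass_diag)
    proposed_h = -log_post(theta_new) + 0.5 * np.sum(p_new ** 2 / mass_diag)
    if rng.random() < np.exp(current_h - proposed_h):
        return theta_new
    return theta
```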
    Abstract: TAE Technologies, Inc. (TAE) is pursuing an alternative approach to magnetically confined fusion, which relies on field-reversed configuration (FRC) plasmas composed of mostly energetic and well-confined particles by means of a state-of-the-art tunable-energy neutral-beam (NB) injector system. TAE's current experimental device, C-2W (also called "Norman"), is the world's largest compact-toroid device and has made significant progress in FRC performance, producing record-breaking, high-temperature (electron temperature, Te > 500 eV; total electron and ion temperature, Ttot > 3 keV) advanced beam-driven FRC plasmas, dominated by injected fast particles and sustained in steady state for up to 30 ms, which is limited by the NB pulse duration. C-2W produces significantly better FRC performance than the preceding C-2U experiment, in part due to Google's machine-learning framework for experimental optimization, which has contributed to the discovery of a new operational regime where novel settings for the formation sections yield consistently reproducible, hot, and stable plasmas. An active plasma control system has been developed and utilized in C-2W to produce consistent FRC performance as well as reliable machine operations using magnets, electrodes, gas injection, and tunable NBs. The active control system has demonstrated stabilization of the FRC axial instability. Overall FRC performance is well correlated with the NB and edge-biasing systems, where higher total plasma energy is obtained by increasing both NB injection power and the applied voltage on the biasing electrodes. C-2W divertors have demonstrated good electron heat confinement on open field lines using strong magnetic mirror fields as well as expansion of the magnetic field in the divertors (expansion ratio > 30); an electron energy lost per ion of ~6-8 is achieved, which is close to the ideal theoretical minimum.
    Abstract: TAE Technologies' research is devoted to producing high-temperature, stable, long-lived field-reversed configuration (FRC) plasmas by neutral-beam injection (NBI) and edge biasing/control. The newly constructed C-2W experimental device (also called "Norman") is the world's largest compact-toroid (CT) device, which has several key upgrades from the preceding C-2U device, such as higher input power and longer pulse duration of the NBI system as well as the installation of inner divertors with upgraded electrode-biasing systems. Initial C-2W experiments have successfully demonstrated robust FRC formation and translation into the confinement vessel through the newly installed inner divertor with an adequate guide magnetic field. They also produced dramatically improved initial FRC states with higher plasma temperatures (Te ~250+ eV; total electron and ion temperature >1.5 keV, based on pressure balance) and more trapped flux (up to ~15 mWb, based on the rigid-rotor model) inside the FRC immediately after the merger of the two collided CTs in the confinement section. As for effective edge control of FRC stabilization, a number of edge-biasing schemes have been tried via open field lines, in which concentric electrodes located in both inner and outer divertors as well as end-on plasma guns are electrically biased independently. As a result of effective outer-divertor electrode biasing alone, the FRC plasma is well stabilized and the diamagnetism duration has reached up to ~9 ms, which is equivalent to the C-2U plasma duration. Magnetic-field flaring/expansion in both inner and outer divertors plays an important role in creating thermal insulation on open field lines to reduce the electron loss rate, which leads to improvement of the edge and core FRC confinement properties. An experimental campaign with inner-divertor magnetic-field flaring has just commenced, and early results indicate that the electron temperature of the merged FRC stays relatively high and increases for a short period of time, presumably due to NBI and ExB heating.
    Abstract: Fusion plasma reconstruction work done at Google in partnership with TAE is presented.
    Quantum Supremacy using a Programmable Superconducting Processor
    Frank Arute
    Kunal Arya
    Rami Barends
    Rupak Biswas
    Fernando Brandao
    David Buell
    Yu Chen
    Jimmy Chen
    Ben Chiaro
    Roberto Collins
    William Courtney
    Andrew Dunsworth
    Edward Farhi
    Brooks Foxen
    Austin Fowler
    Rob Graff
    Keith Guerin
    Steve Habegger
    Michael Hartmann
    Alan Ho
    Trent Huang
    Travis Humble
    Sergei Isakov
    Kostyantyn Kechedzhi
    Sergey Knysh
    Alexander Korotkov
    Fedor Kostritsa
    Dave Landhuis
    Mike Lindmark
    Dmitry Lyakh
    Salvatore Mandrà
    Anthony Megrant
    Xiao Mi
    Kristel Michielsen
    Masoud Mohseni
    Josh Mutus
    Charles Neill
    Eric Ostby
    Andre Petukhov
    Eleanor G. Rieffel
    Vadim Smelyanskiy
    Kevin Jeffery Sung
    Matt Trevithick
    Amit Vainsencher
    Benjamin Villalonga
    Z. Jamie Yao
    Ping Yeh
    John Martinis
    Nature, vol. 574 (2019), pp. 505-510
    Abstract: The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
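The verification against classical simulation mentioned above is typically summarized with a cross-entropy benchmarking fidelity. The sketch below shows one common form, the linear XEB estimator; the mapping from sampled bitstrings to ideal simulated probabilities is assumed to be available from a classical simulator, and this is an illustrative convention rather than the paper's exact analysis code.

```python
# Schematic of a linear cross-entropy benchmarking (XEB) fidelity estimate used to
# compare measured bitstrings against classical simulation. 'ideal_probs' maps each
# sampled bitstring to its simulated probability (assumed precomputed here).
import numpy as np

def linear_xeb_fidelity(sampled_bitstrings, ideal_probs, n_qubits):
    """F_XEB = 2^n * <P(x_i)> - 1, averaged over the measured bitstrings x_i."""
    p = np.array([ideal_probs[b] for b in sampled_bitstrings])
    return (2 ** n_qubits) * p.mean() - 1.0
```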
    Achievement of Sustained Net Plasma Heating in a Fusion Experiment with the Optometrist Algorithm
    E. Trask
    M. Binderbauer
    H. Gota
    R. Mendoza
    P.F. Riley
    Scientific Reports, vol. 7 (2017), pp. 6425
    Abstract: Many fields of basic and applied science require efficiently exploring complex systems with high dimensionality. An example of such a challenge is optimising the performance of plasma fusion experiments. The highly nonlinear and temporally varying interaction between the plasma, its environment, and external controls presents considerable complexity in these experiments. A further difficulty arises from the fact that there is no single objective metric that fully captures both plasma quality and equipment constraints. To efficiently optimise the system, we develop the Optometrist Algorithm, a stochastic perturbation method combined with human choice. Analogous to getting an eyeglass prescription, the Optometrist Algorithm confronts a human operator with two alternative experimental settings and associated outcomes. The human operator then chooses which experiment produces subjectively better results. This innovative technique led to the discovery of an unexpected record confinement regime with positive net heating power in a field-reversed configuration plasma, characterised by a >50% reduction in the energy loss rate and a concomitant increase in ion temperature and total plasma energy.
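A conceptual sketch of that loop is below: stochastically perturb the incumbent settings, run the experiment, and let a human operator choose between the incumbent and the candidate. The run_experiment and human_prefers_new callables are hypothetical placeholders standing in for machine operation and operator judgment; the step size and multiplicative perturbation are my own assumptions, not details from the paper.

```python
# Conceptual sketch of the Optometrist Algorithm described in the abstract: propose
# a random perturbation of the current settings, run it, and keep whichever of the
# two outcomes the human operator prefers. Callables here are hypothetical stand-ins.
import numpy as np

def optometrist_search(initial_settings, run_experiment, human_prefers_new,
                       n_iterations=100, step_scale=0.05,
                       rng=np.random.default_rng()):
    settings = np.asarray(initial_settings, dtype=float)
    baseline = run_experiment(settings)
    for _ in range(n_iterations):
        # Stochastic perturbation of the incumbent settings.
        candidate = settings * (1.0 + step_scale * rng.standard_normal(settings.shape))
        outcome = run_experiment(candidate)
        # The human operator compares the two outcomes and keeps the preferred one.
        if human_prefers_new(baseline, outcome):
            settings, baseline = candidate, outcome
    return settings
```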
    Abstract: Low-carbon electricity technologies are often evaluated by their Levelized Cost of Energy (LCOE). However, LCOE cannot model the impact of one electricity source on the value of others. In previous work, System LCOE was proposed to estimate the costs of integrating an intermittent source into a grid consisting of multiple dispatchable electricity sources. Using a new DOSCOE (Dispatch-optimized system cost of electricity) model, we generalize System LCOE. DOSCOE can handle any mixture of dispatchable and non-dispatchable sources. It can analyze systems which contain storage, have legacy infrastructure, or have imposed policies. DOSCOE thus updates System LCOE to be applicable to more realistic electricity grid models. DOSCOE uses a linear program to find the capacity and generation mix which yields minimum LCOE. Running this linear program multiple times yields System LCOE curves. DOSCOE shows that cost-effectively removing the last 10-20% of fossil fuels requires a moderate price on carbon and either low-cost nuclear power or carbon capture and sequestration. Alternatively, a hypothetical zero-carbon source needs to have a net present cost of less than $2200/kW to displace existing fossil-fuel plants.
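To make the linear-program formulation concrete, here is a toy capacity-plus-dispatch LP in the same spirit: one dispatchable and one intermittent source are sized and dispatched to meet hourly demand at minimum cost. All numbers, source names, and constraints are illustrative assumptions, not DOSCOE's actual model, cost data, or constraint set.

```python
# Toy least-cost capacity/dispatch LP in the spirit of DOSCOE (not the actual DOSCOE
# model or data): choose capacities and hourly generation for a dispatchable source
# and an intermittent source so that demand is met at minimum cost.
import numpy as np
from scipy.optimize import linprog

T = 4                                              # hours in the toy horizon
demand = np.array([70.0, 90.0, 100.0, 80.0])       # MW, hypothetical
solar_profile = np.array([0.0, 0.6, 1.0, 0.3])     # hourly availability fraction
capex = {"gas": 50.0, "solar": 30.0}               # $/MW over the horizon, hypothetical
varcost = {"gas": 20.0, "solar": 0.0}              # $/MWh, hypothetical

# Variable layout: [cap_gas, cap_solar, gen_gas[0..T-1], gen_solar[0..T-1]]
n = 2 + 2 * T
c = np.concatenate(([capex["gas"], capex["solar"]],
                    np.full(T, varcost["gas"]), np.full(T, varcost["solar"])))

A_ub, b_ub = [], []
for t in range(T):
    # Gas generation cannot exceed gas capacity.
    row = np.zeros(n); row[2 + t] = 1.0; row[0] = -1.0
    A_ub.append(row); b_ub.append(0.0)
    # Solar generation cannot exceed capacity times hourly availability.
    row = np.zeros(n); row[2 + T + t] = 1.0; row[1] = -solar_profile[t]
    A_ub.append(row); b_ub.append(0.0)
    # Total generation must meet demand (written as <= by negating both sides).
    row = np.zeros(n); row[2 + t] = -1.0; row[2 + T + t] = -1.0
    A_ub.append(row); b_ub.append(-demand[t])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * n)
print("capacities (gas, solar):", res.x[:2], " total cost:", res.fun)
```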
    Estimating the Support of a High-Dimensional Distribution
    Bernhard Schoelkopf
    John Shawe-Taylor
    Alex J. Smola
    Robert C. Williamson
    Neural Computation, vol. 13 (2001), pp. 1443-1471
    Support Vector Method for Novelty Detection
    Bernhard Schoelkopf
    Robert C. Williamson
    Alex J. Smola
    John Shawe-Taylor
    NIPS (1999), pp. 582-588