Performance tournaments with crowdsourced judges

Abstract

A performance slam is a competition among a fixed set of performances in which pairs of performances are judged by audience participants. When performances are recorded on electronic media, performance slams become amenable to audiences that watch online and judge asynchronously (“crowdsourced”). In order to better entertain the audience, we want to show the better performances (“exploitation”). In order to identify the good videos, we want to glean at least some information about all videos (“exploration”).

Our approach has three elements: (1) We take our preference model from Bradley and Terry (1952). (2) We estimate its parameters by rewriting the likelihood gradient as a fixed-point equation, one which mimics the estimate of Mantel and Haenszel (1959). (3) Each pair of performances is chosen sequentially, always to minimize the weighted variance of (the logarithms of) the Bradley-Terry parameter estimates. Our preferred weights are the log-rank weights proposed by Savage (1956).
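To illustrate element (2), here is a minimal sketch of estimating Bradley-Terry parameters by fixed-point iteration. The paper's specific Mantel-Haenszel-style update is not reproduced here; instead this uses the classic Zermelo/MM fixed-point update, a standard stand-in with the same structure (an iterated closed-form update derived from the likelihood gradient). The function name and the win-matrix encoding are assumptions for illustration.

```python
import numpy as np

def bradley_terry_fixed_point(wins, n_iter=200, tol=1e-10):
    """Estimate Bradley-Terry strengths p from a pairwise win-count matrix.

    wins[i, j] = number of times performance i beat performance j.
    Uses the classic Zermelo/MM fixed-point update
        p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j),
    where W_i is i's total wins and n_ij the number of i-vs-j matchups.
    This is an illustrative stand-in for the paper's
    Mantel-Haenszel-style fixed point, not its exact update.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    comparisons = wins + wins.T      # n_ij: total matchups between i and j
    total_wins = wins.sum(axis=1)    # W_i: total wins of performance i
    p = np.ones(n)
    for _ in range(n_iter):
        denom = comparisons / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p_new = total_wins / denom.sum(axis=1)
        p_new /= p_new.sum()  # BT parameters are identified only up to scale
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p
```

Under the Bradley-Terry model, the fitted probability that performance i beats performance j is then p[i] / (p[i] + p[j]).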