Feed-forward deep convolutional neural networks (DCNNs) are currently state-of-the-art for object classification tasks such as ImageNet, and they are also quantitatively accurate models of the primate brain's visual encoding algorithm. Yet the primate brain has two dominant architectural features not shared with DCNNs: local recurrence and feedback from downstream cortical areas to upstream areas. Here we explore the role of feedback both in object classification performance on the ImageNet dataset and in improving the correspondence of DCNNs to the primate brain's visual encoding algorithm. We report two major findings. First, feedback improves classification performance on ImageNet on a per-parameter basis, provided the recurrent structure is chosen carefully. Second, feedback improves the correspondence of DCNNs to the ventral pathway of primate visual cortex, both increasing the overall variance explained and better capturing the time course of neural activations.
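The two architectural features discussed above can be illustrated with a time-unrolled update rule: a layer's activation is driven by its feed-forward input, its own previous state (local recurrence), and a signal from a downstream layer (feedback). The sketch below is a minimal NumPy illustration of that idea, not the paper's actual architecture; the weight names (`w_ff`, `w_rec`, `w_fb`) and the single-channel convolution are simplifying assumptions.

```python
import numpy as np

def conv2d(x, w):
    # Naive "same"-padded 2D convolution (single channel), for illustration only.
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def unroll(x, w_ff, w_rec, w_fb, top_down, n_steps=4):
    # h <- ReLU(feed-forward drive + local recurrence + top-down feedback),
    # iterated over discrete time steps. Weight names are hypothetical.
    h = relu(conv2d(x, w_ff))  # initial feed-forward pass
    for _ in range(n_steps):
        h = relu(conv2d(x, w_ff) + conv2d(h, w_rec) + conv2d(top_down, w_fb))
    return h
```

Unrolling in time like this lets a recurrent layer reuse its weights across steps, which is why feedback can add representational power on a per-parameter basis relative to simply stacking more feed-forward layers.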