In this paper we study the impact of sharing memory resources on five Google datacenter applications: a web search engine, bigtable, content analyzer, image stitching, and protocol buffer. While prior work has found neither positive nor negative effects from cache sharing across the PARSEC benchmark suite, we find that across these datacenter applications there is both a sizable benefit from sharing resources properly and a potential degradation from sharing them improperly. In this paper, we first present a study of the importance of thread-to-core mappings for applications in the datacenter, as threads can be mapped to share or to not share caches and bus bandwidth. Second, we investigate the impact of co-locating threads from multiple applications with diverse memory behavior and discover that the best mapping for a given application changes depending on its co-runner. Third, we investigate the application characteristics that impact performance in the various thread-to-core mapping scenarios.
Finally, we present both a heuristics-based and an adaptive approach to arrive at good thread-to-core decisions in the datacenter. We observe performance swings of up to 25% for web search and 40% for other key applications, simply based on how application threads are mapped to cores. By employing our adaptive thread-to-core mapper, the performance of the datacenter applications presented in this work improved by up to 22% over status quo thread-to-core mapping and performed within 3% of optimal.
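The adaptive approach described above can be illustrated with a minimal sketch. This is not the paper's actual mechanism; it assumes a hypothetical `measure` callback that returns a performance score (e.g., aggregate IPC) for a candidate thread-to-core mapping, and it simply samples each candidate assignment and keeps the best:

```python
# Hypothetical sketch of an adaptive thread-to-core mapper.
# Assumptions (not from the paper): `measure` is a user-supplied callback
# that runs the workload briefly under a mapping and returns a score.
from itertools import permutations


def best_mapping(threads, cores, measure):
    """Try each thread-to-core assignment and keep the highest-scoring one.

    threads: list of thread identifiers
    cores:   list of core identifiers (at least as many as threads)
    measure: callable(mapping dict) -> score, higher is better
    """
    best, best_score = None, float("-inf")
    for perm in permutations(cores, len(threads)):
        mapping = dict(zip(threads, perm))
        score = measure(mapping)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score


# Toy stand-in for a real measurement: two data-sharing threads score
# best when placed on cores 0 and 1, assumed here to share a last-level
# cache; any other placement scores lower.
def toy_measure(mapping):
    return 1.0 if sorted(mapping.values()) == [0, 1] else 0.5


mapping, score = best_mapping(["t0", "t1"], [0, 1, 2, 3], toy_measure)
```

In a real system the exhaustive search would be replaced by sampling a few representative mappings (e.g., cache-sharing vs. cache-spread) during short profiling intervals, since the number of permutations grows quickly with core count.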