Rina Panigrahy
I am broadly interested in theoretical and applied algorithms in areas such as similarity search, sketching and streaming, cuckoo hashing, learning and prediction, large graph analysis, and network algorithms. Much of my research has been motivated by problems arising in engineering applications, such as provable guarantees for deep learning, distributed caching for content delivery networks (work that led to the founding of Akamai Technologies), risk-reward trade-offs for stock prediction, space-efficient hashing in networking equipment, low-power Ternary CAMs, space-efficient similarity search, and fast distance estimation in social networks.
Authored Publications
    Can deep learning solve multiple, very different tasks simultaneously? We investigate how the representations of the underlying tasks affect the ability of a single neural network to learn them jointly. We present theoretical and empirical findings that a single neural network is capable of simultaneously learning multiple tasks from a combined data set, for a variety of methods for representing tasks, for example when the distinct tasks are represented by well-separated clusters or decision trees over some task-code attributes. Indeed, more strongly, we present a novel analysis showing that families of simple programming-like constructs for the task codings are learnable by two-layer neural networks with standard training. We study more generally how the complexity of learning such combined tasks grows with the complexity of the task codes; we find that learning many tasks can be provably hard, even though the individual tasks are easy to learn. We provide empirical support for the usefulness of the learning bounds by training networks on clusters, decision trees, and SQL-style aggregation.
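A rough illustration of the combined-task setup described in this abstract: the sketch below is not from the paper; it assumes a synthetic data set in which a single task-code bit selects between two toy tasks, and trains one two-layer ReLU network on the union with plain gradient descent. All names, sizes, and tasks are illustrative assumptions.

```python
# A minimal sketch (illustrative, not the paper's setup): one two-layer network
# trained on a combined data set where a task-code bit selects one of two tasks.
import numpy as np

rng = np.random.default_rng(0)

def make_combined_dataset(n=4000, d=10):
    """Each example: d feature coordinates plus one task-code bit.
    Task 0: label = sign of the first coordinate.
    Task 1: label = sign of the second coordinate."""
    X = rng.normal(size=(n, d))
    code = rng.integers(0, 2, size=n)
    y = np.where(code == 0, X[:, 0] > 0, X[:, 1] > 0).astype(float)
    return np.hstack([X, code[:, None]]), y

X, y = make_combined_dataset()
n, d_in = X.shape
h = 64  # hidden width

# Two-layer ReLU network with a sigmoid output, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(d_in, h))
b1 = np.zeros(h)
w2 = rng.normal(scale=0.5, size=h)
b2 = 0.0
lr = 0.5

for step in range(500):
    z = X @ W1 + b1
    a = np.maximum(z, 0.0)                       # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(a @ w2 + b2)))     # sigmoid output
    g = (p - y) / n                              # gradient of logistic loss wrt logits
    grad_w2 = a.T @ g
    grad_b2 = g.sum()
    da = np.outer(g, w2) * (z > 0)
    grad_W1 = X.T @ da
    grad_b1 = da.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"joint training accuracy on both tasks: {acc:.3f}")
```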
    Convergence Results for Neural Networks via Electrodynamics
    Ali Rahimi
    Sushant Sachdeva
    Qiuyi Zhang
    Innovations in Theoretical Computer Science Conference (ITCS), 2018
    We study whether a depth-two neural network can learn another depth-two network using gradient descent. Assuming a linear output node, we show that the question of whether gradient descent converges to the target function is equivalent to the following question in electrodynamics: given k fixed protons in R^d and k electrons, each moving under the attractive force from the protons and the repulsive force from the remaining electrons, will every electron be matched up with a proton at equilibrium, up to a permutation? Under the standard electrical force, this follows from the classic Earnshaw's theorem. In our setting, the force is determined by the activation function and the input distribution. Building on this equivalence, we prove the existence of an activation function such that gradient descent learns at least one of the hidden nodes in the target network. Iterating, we show that gradient descent can be used to learn the entire network one node at a time.
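The electrodynamics analogy can be simulated directly. The sketch below is an illustrative toy, not the paper's construction: it uses a softened version of the standard electrical (1/r^2) force, moves k electrons under attraction to k fixed protons and mutual repulsion, and then reports how close each electron ends up to a proton. The charge positions and constants are assumptions.

```python
# Toy simulation of the electron/proton equilibrium question, using a softened
# standard electrical force (not the activation-dependent force from the paper).
import numpy as np

rng = np.random.default_rng(1)
k, d = 4, 2
eps = 1e-2                                   # softening to keep forces bounded
protons = rng.normal(size=(k, d))            # fixed attractors (target hidden units)
electrons = rng.normal(size=(k, d))          # moving particles (learned hidden units)

def net_force(i, electrons):
    """Attraction of electron i to every proton, repulsion from the other electrons."""
    f = np.zeros(d)
    for p in protons:
        r = p - electrons[i]
        f += r / (r @ r + eps) ** 1.5        # softened 1/r^2 attraction
    for j in range(k):
        if j != i:
            r = electrons[i] - electrons[j]
            f += r / (r @ r + eps) ** 1.5    # softened 1/r^2 repulsion
    return f

step = 1e-2
for _ in range(20000):
    forces = np.array([net_force(i, electrons) for i in range(k)])
    electrons = electrons + step * forces

# How close did each electron end up to its nearest proton?
dists = np.linalg.norm(electrons[:, None, :] - protons[None, :, :], axis=-1)
print("electron-to-nearest-proton distances:", dists.min(axis=1).round(3))
```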
    Partitioning Orders in Online Shopping Services
    Debmalya Panigrahi
    Conf. on Information and Knowledge Management (CIKM) (2017)
    The rapid growth of the sharing economy has led to the widespread use of newer and richer models of online shopping and delivery services. The race to deliver fast has transformed such services into complex networks of shoppers, stores, and consumers. Needless to say, the efficiency of store order management is critical to the business. Motivated by this setting, we consider the following problem: given a set of online shopping orders, each consisting of a few items, how should the orders best be partitioned among a given number of pickers? Owing to logistical constraints, the orders are typically unsplittable in the partition. This partitioning, which must take into account the physical location of the items in the store, has to optimize the utilization and the amount of work done by the shoppers in the store. Formulating this as a combinatorial optimization problem, we propose a family of simple and efficient algorithms that admit natural constraints arising in this setting. In addition to showing provable guarantees for the algorithms, we also demonstrate their efficiency in practice on real-world data from Google Express [1], outperforming natural baselines.
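As a point of reference for the kind of partitioning being described, here is a simple greedy baseline (not the paper's algorithm): each unsplittable order is assigned, largest first, to the picker with the least work so far, where "work" is crudely approximated by item count and item locations in the store are ignored. The function names and data are illustrative.

```python
# Greedy baseline for partitioning unsplittable orders among pickers:
# assign orders largest-first to the currently least-loaded picker.
import heapq

def partition_orders(orders, num_pickers):
    """orders: list of orders, each a list of items; returns a list of orders per picker."""
    assignment = [[] for _ in range(num_pickers)]
    # Min-heap of (current load, picker index).
    heap = [(0, p) for p in range(num_pickers)]
    heapq.heapify(heap)
    for size, idx in sorted(((len(o), i) for i, o in enumerate(orders)), reverse=True):
        load, picker = heapq.heappop(heap)
        assignment[picker].append(orders[idx])
        heapq.heappush(heap, (load + size, picker))
    return assignment

orders = [["milk", "eggs"], ["bread"], ["apples", "rice", "tea"], ["soap"]]
print(partition_orders(orders, num_pickers=2))
```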
    We consider the problem of approximating a given matrix by a low-rank matrix so as to minimize the entrywise ℓp-approximation error, for any p ≥ 1; the case p = 2 is the classical SVD problem. We obtain the first provably good approximation algorithms for this version of low-rank approximation that work for every value of p ≥ 1, including p = ∞. Our algorithms are simple, easy to implement, work well in practice, and illustrate interesting tradeoffs between the approximation quality, the running time, and the rank of the approximating matrix.
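To make the objective concrete, the sketch below (an illustration only, not the paper's algorithm) measures the entrywise ℓp error of a rank-k approximation obtained from the truncated SVD; the SVD is optimal for p = 2, while for p = 1 or p = ∞ other rank-k matrices can do better, which is the gap the paper's algorithms address. The matrix and parameters are assumptions.

```python
# Entrywise l_p error of an SVD-based rank-k approximation, for several p.
import numpy as np

def lp_error(A, B, p):
    """Entrywise l_p error between matrices A and B (p may be np.inf)."""
    D = np.abs(A - B)
    return D.max() if np.isinf(p) else (D ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(2)
# Gaussian matrix with a few large outlier entries, where p = 1 behaves differently from p = 2.
A = rng.normal(size=(50, 30)) + 5.0 * (rng.random((50, 30)) < 0.02)

k = 5
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # optimal rank-k approximation for p = 2

for p in (1, 2, np.inf):
    print(f"p = {p}: entrywise error of the SVD-based rank-{k} approximation = {lp_error(A, A_k, p):.2f}")
```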
    Learning polynomials with neural networks
    Alexandr Andoni
    Gregory Valiant
    Li Zhang
    ICML (2014)
    Learning sparse polynomials
    Alexandr Andoni
    Gregory Valiant
    Li Zhang
    SODA (2014)
    Understanding cyclic trends in social choices
    Anish Das Sarma
    Sreenivas Gollapudi
    Li Zhang
    Proceedings of the Fifth ACM International Conference on Web Search and Data Mining (WSDM), ACM, New York, NY, USA (2012), pp. 593-602