Jump to Content
Avinatan Hassidim

Avinatan Hassidim

Authored Publications
Google Publications
Other Publications
Sort By
  • Title
  • Title, descending
  • Year
  • Year, descending
    Preview abstract Computing efficient traffic signal plans is often based on the amount of traffic in an intersection, its distribution over the various intersection movements and hours as well as on performance metrics such as traffic delay. In their simple and typical form plans are fixed in the same hour over weekdays. This allows low operation costs without the necessity for traffic detection and monitoring tools. A critical factor on the potential efficiency of such plans is the similarity of traffic patterns over the days along each of the intersection movements. We refer to such similarity as the traffic stability of the intersection and define simple metrics to measure it based on traffic volume and traffic delay. In this paper, we propose an automatic probe data based method, for city-wide estimation of traffic stability. We discuss how such measures can be used for signal planning such as in selecting plan resolution or as an indication as which intersections can benefit from dynamic but expensive traffic detection tools. We also identify events of major changes in traffic characteristics of an intersection. We demonstrate the framework by using real traffic statistics to study the traffic stability in the city of Haifa along its 162 intersections. We study the impact of the time of day on the stability, detect major changes in traffic and find intersections with high and low stability. View details
    Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns
    Ariel Goldstein
    Avigail Grinstein-Dabush
    Haocheng Wang
    Zhuoqiao Hong
    Bobbi Aubrey
    Samuel A. Nastase
    Zaid Zada
    Eric Ham
    Harshvardhan Gazula
    Eliav Buchnik
    Werner Doyle
    Sasha Devore
    Patricia Dugan
    Roi Reichart
    Daniel Friedman
    Orrin Devinsky
    Adeen Flinker
    Uri Hasson
    Nature Communications (2024)
    Preview abstract Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we densely record the neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. We demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns using stringent zero-shot mapping. The common geometric patterns allow us to predict the brain embedding of a given left-out word in IFG based solely on its geometrical relationship to other nonoverlapping words in the podcast. Furthermore, we show that contextual embeddings better capture the geometry of IFG embeddings than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain. View details
    Caravan - A global community dataset for large-sample hydrology
    Frederik Kratzert
    Nans Addor
    Tyler Erickson
    Martin Gauch
    Lukas Gudmundsson
    Daniel Klotz
    Sella Nevo
    Guy Shalev
    Scientific Data, vol. 10 (2023), pp. 61
    Preview abstract High-quality datasets are essential to support hydrological science and modeling. Several CAMELS (Catchment Attributes and Meteorology for Large-sample Studies) datasets exist for specific countries or regions, however these datasets lack standardization, which makes global studies difficult. This paper introduces a dataset called Caravan (a series of CAMELS) that standardizes and aggregates seven existing large-sample hydrology datasets. Caravan includes meteorological forcing data, streamflow data, and static catchment attributes (e.g., geophysical, sociological, climatological) for 6830 catchments. Most importantly, Caravan is both a dataset and open-source software that allows members of the hydrology community to extend the dataset to new locations by extracting forcing data and catchment attributes in the cloud. Our vision is for Caravan to democratize the creation and use of globally-standardized large-sample hydrology datasets. Caravan is a truly global open-source community resource. View details
    AI Increases Global Access to Reliable Flood Forecasts
    Asher Metzger
    Dana Weitzner
    Frederik Kratzert
    Guy Shalev
    Martin Gauch
    Sella Nevo
    Shlomo Shenzis
    Tadele Yednkachw Tekalign
    Vusumuzi Dube
    arXiv (2023)
    Preview abstract Floods are one of the most common natural disasters, with a disproportionate impact in developing countries that often lack dense streamflow gauge networks. Accurate and timely warnings are critical for mitigating flood risks, but hydrological simulation models typically must be calibrated to long data records in each watershed. Here we show that AI-based forecasting achieves reliability in predicting extreme riverine events in ungauged watersheds at up to a 5-day lead time that is similar to or better than the reliability of nowcasts (0-day lead time) from a current state of the art global modeling system (the Copernicus Emergency Management Service Global Flood Awareness System). Additionally, we achieve accuracies over 5-year return period events that are similar to or better than current accuracies over 1-year return period events. This means that AI can provide flood warnings earlier and over larger and more impactful events in ungauged basins. The model developed in this paper was incorporated into an operational early warning system that produces publicly available (free and open) forecasts in real time in over 80 countries. This work highlights a need for increasing the availability of hydrological data to continue to improve global access to reliable flood warnings. View details
    A Neural Encoder for Earthquake Rate Forecasting
    Oleg Zlydenko
    Brendan Meade
    Alexandra Sharon Molchanov
    Sella Nevo
    Yohai bar Sinai
    Scientific Reports (2023)
    Preview abstract Forecasting the timing of earthquakes is a long-standing challenge. Moreover, it is still debated how to formulate this problem in a useful manner, or to compare the predictive power of different models. Here, we develop a versatile neural encoder of earthquake catalogs, and apply it to the fundamental problem of earthquake rate prediction, in the spatio-temporal point process framework. The epidemic type aftershock sequence model (ETAS) effectively learns a small number of parameters to constrain assumed functional forms for the space and time relationships of earthquake sequences (e.g., Omori-Utsu law). Here we introduce learned spatial and temporal embeddings for point process earthquake forecast models that capture complex correlation structures. We demonstrate the generality of this neural representation as compared with ETAS model using train-test data splits and how it enables the incorporation of additional geophysical information. In rate prediction tasks, the generalized model shows > 4% improvement in information gain per earthquake and the simultaneous learning of anisotropic spatial structures analogous to fault traces. The trained network can be also used to perform short-term prediction tasks, showing similar improvement while providing a 1,000-fold reduction in run-time. View details
    Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback
    Paul Roit
    Johan Ferret
    Geoffrey Cideron
    Matthieu Geist
    Sertan Girgin
    Léonard Hussenot
    Nikola Momchev
    Piotr Stanczyk
    Nino Vieillard
    Olivier Pietquin
    Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics (2023), 6252–6272
    Preview abstract Despite the seeming success of contemporary grounded text generation systems, they often tend to generate factually inconsistent text with respect to their input. This phenomenon is emphasized in tasks like summarization, in which the generated summaries should be corroborated by their source article. In this work we leverage recent progress on textual entailment models to directly address this problem for abstractive summarization systems. We use reinforcement learning with reference-free, textual-entailment rewards to optimize for factual consistency and explore the ensuing trade-offs, as improved consistency may come at the cost of less informative or more extractive summaries. Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience and conciseness of the generated summaries. View details
    TRUE: Re-evaluating Factual Consistency Evaluation
    Or Honovich
    Hagai Taitelbaum
    Vered Cohen
    Thomas Scialom
    NAACL 2022, The Association for Computational Linguistics (2022)
    Preview abstract Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in silo for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leave the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive study of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend those methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better methods. View details
    Eddystone-EID: Secure and Private Infrastructural Protocol for BLE Beacons
    Liron David
    Alon Ziv
    IEEE Transactions on Information Forensics and Security (2022)
    Preview abstract Beacons are small devices which are playing an important role in the Internet of Things (IoT), connecting “things” without IP connection to the Internet via Bluetooth Low Energy (BLE) communication. In this paper we present the first private end-to-end encryption protocol called the Eddystone-Ephemeral-ID (Eddystone-EID) protocol. This protocol enables connectivity from any beacon to its remote owner, while supporting beacon’s privacy and security, and essentially preserving the beacon’s low power consumption. We describe the Eddystone-EID development goals, discuss the design decisions, show the cryptographic solution, and analyse its privacy, security, and performance. Finally, we present three secure IoT applications built on Eddystone-EID, demonstrating its utility as a security and privacy infrastructure in the IoT domain. Further, Eddystone-EID is a prototypical example of security design for an asymmetric system in which on one side there are small power-deficient elements (the beacons) and on the other side there is a powerful computing engine (a cloud). The crux of the design strategy is based on: (1) transferring work from the beacon to the cloud, and then (2) building a trade-off between cloud online work against cloud offline work, in order to enable fast real-time reaction of the cloud. These two principles seem to be generic and can be used for other problems in the IoT domain. View details
    Preview abstract Clinical notes often contain vital information not observed in other structured data, but their unstructured nature can lead to critical patient-related information being lost. To make sure this valuable information is utilized for patient care, algorithms that summarize notes into a problem list are often proposed. Focusing on identifying medically-relevant entities in the free-form text, these solutions are often detached from a canonical ontology and do not allow downstream use of the detected text-spans. As a solution, we present here a system for generating a canonical problem list from medical notes, consisting of two major stages. At the first stage, annotation, we use a transformer model to detect all clinical conditions which are mentioned in a single note. These clinical conditions are then grounded to a predefined ontology, and are linked to spans in the text. At the second stage, summarization, we aggregate over the set of clinical conditions detected on all of the patient's note, and produce a concise patient summary that organizes their important conditions. View details
    Preview abstract “Exposure Notification (EN) Systems” which have been envisioned by a number of academic and industry groups, are useful in aiding health authorities worldwide to fight the COVID-19 pandemic spread via contact tracing. Among these systems, many rely on the BLE based Google-Apple Exposure Notification (GAEN) API (for iPhones and Android systems). We assert that it is now the time to investigate how to deal with scale issues, assuming the next pandemic/ variant will be more extensive. To this end, we present two modular enhancements to scale up the GAEN API by improving performance and suggesting a better performance-privacy tradeoff. Our modifications have the advantage of affecting only the GAEN API modules and do not require any change to the systems built on top of it, therefore it can be easily adopted upon emerging needs. The techniques we suggest in this paper (called “dice and splice” and “forest from the PRF-tree”) are general and applicable to scenarios of searching values within anonymous pseudo-randomly generated sequences. View details
    Preview abstract Physicians record their detailed thought-processes about diagnoses and treatments as unstructured text in a section of a clinical note called the assessment and plan. This information is more clinically rich than structured billing codes assigned for an encounter but harder to reliably extract given the complexity of clinical language and documentation habits. We describe and release a dataset containing annotations of 579 admission and progress notes from the publicly available and de-identified MIMIC-III ICU dataset with over 30,000 labels identifying active problems, their assessment, and the category of associated action items (e.g. medication, lab test). We also propose deep-learning based models that approach human performance, with a F1 score of 0.88. We found that by employing weak supervision and domain specific data-augmentation, we could improve generalization across departments and reduce the number of human labeled notes without sacrificing performance. View details
    Preview abstract AI models have shown promise in performing many medical imaging tasks. However, our ability to explain what signals these models learn from the training data is severely lacking. Explanations are needed in order to increase the trust of doctors in AI-based models, especially in domains where AI prediction capabilities surpass those of humans. Moreover, such explanations could enable novel scientific discovery by uncovering signals in the data that aren’t yet known to experts. In this paper, we present a method for automatic visual explanations that can help achieve these goals by generating hypotheses of what visual signals in the images are correlated with the task. We propose the following 4 steps: (i) Train a classifier to perform a given task to assess whether the imagery indeed contains signals relevant to the task; (ii) Train a StyleGAN-based image generator with an architecture that enables guidance by the classifier (“StylEx”); (iii) Automatically detect and extract the top visual attributes that the classifier is sensitive to. Each of these attributes can then be independently modified for a set of images to generate counterfactual visualizations of those attributes (i.e. what that image would look like with the attribute increased or decreased); (iv) Present the discovered attributes and corresponding counterfactual visualizations to a multidisciplinary panel of experts to formulate hypotheses for the underlying mechanisms with consideration to social and structural determinants of health (e.g. whether the attributes correspond to known patho-physiological or socio-cultural phenomena, or could be novel discoveries) and stimulate future research. To demonstrate the broad applicability of our approach, we demonstrate results on eight prediction tasks across three medical imaging modalities – retinal fundus photographs, external eye photographs, and chest radiographs. We showcase examples where many of the automatically-learned attributes clearly capture clinically known features (e.g., types of cataract, enlarged heart), and demonstrate automatically-learned confounders that arise from factors beyond physiological mechanisms (e.g., chest X-ray underexposure is correlated with the classifier predicting abnormality, and eye makeup is correlated with the classifier predicting low hemoglobin levels). We further show that our method reveals a number of physiologically plausible novel attributes for future investigation (e.g., differences in the fundus associated with self-reported sex, which were previously unknown). While our approach is not able to discern causal pathways, the ability to generate hypotheses from the attribute visualizations has the potential to enable researchers to better understand, improve their assessment, and extract new knowledge from AI-based models. Importantly, we highlight that attributes generated by our framework can capture phenomena beyond physiology or pathophysiology, reflecting the real world nature of healthcare delivery and socio-cultural factors, and hence multidisciplinary perspectives are critical in these investigations. Finally, we release code to enable researchers to train their own StylEx models and analyze their predictive tasks of interest, and use the methodology presented in this paper for responsible interpretation of the revealed attributes. View details
    Flood forecasting with machine learning models in an operational framework
    Asher Metzger
    Chen Barshai
    Dana Weitzner
    Frederik Kratzert
    Gregory Begelman
    Guy Shalev
    Hila Noga
    Moriah Royz
    Niv Giladi
    Ronnie Maor
    Sella Nevo
    Yotam Gigi
    HESS (2022)
    Preview abstract Google’s operational flood forecasting system was developed to provide accurate real-time flood warnings to agencies and the public, with a focus on riverine floods in large, gauged rivers. It became operational in 2018 and has since expanded geographically. This forecasting system consists of four subsystems: data validation, stage forecasting, inundation modeling, and alert distribution. Machine learning is used for two of the subsystems. Stage forecasting is modeled with the Long Short-Term Memory (LSTM) networks and the Linear models. Flood inundation is computed with the Thresholding and the Manifold models, where the former computes inundation extent and the latter computes both inundation extent and depth. The Manifold model, presented here for the first time, provides a machine-learning alternative to hydraulic modeling of flood inundation. When evaluated on historical data, all models achieve sufficiently high-performance metrics for operational use. The LSTM showed higher skills than the Linear model, while the Thresholding and Manifold models achieved similar performance metrics for modeling inundation extent. During the 2021 monsoon season, the flood warning system was operational in India and Bangladesh, covering flood-prone regions around rivers with a total area of 287,000 km2, home to more than 350M people. More than 100M flood alerts were sent to affected populations, to relevant authorities, and to emergency organizations. Current and future work on the system includes extending coverage to additional flood-prone locations, as well as improving modeling capabilities and accuracy. View details
    Preview abstract A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary. We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy. This connection allows us to design new adversarially robust streaming algorithms that outperform the current state-of-the-art constructions for many interesting regimes of parameters. View details
    Shared computational principles for language processing in humans and deep language models
    Ariel Goldstein
    Zaid Zada
    Eliav Buchnik
    Amy Price
    Bobbi Aubrey
    Samuel A. Nastase
    Harshvardhan Gazula
    Gina Choe
    Aditi Rao
    Catherine Kim
    Colton Casto
    Lora Fanda
    Werner Doyle
    Daniel Friedman
    Patricia Dugan
    Lucia Melloni
    Roi Reichart
    Sasha Devore
    Adeen Flinker
    Liat Hasenfratz
    Omer Levy,
    Kenneth A. Norman
    Orrin Devinsky
    Uri Hasson
    Nature Neuroscience (2022)
    Preview abstract Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language. View details
    Preview abstract Image classification models can depend on multiple different semantic attributes of the image. An explanation of the decision of the classifier needs to both discover and visualize these properties. Here we present StylEx, a method for doing this, by training a generative model to specifically explain multiple attributes that underlie classifier decisions. A natural source for such attributes is the S-space of StyleGAN, which is known to generate semantically meaningful dimensions in the image. However, these will typically not correspond to classifier-specific attributes since standard GAN training is not dependent on the classifier. To overcome this, we propose training procedure for a StyleGAN, which incorporates the classifier model. This results in an S-space that captures distinct attributes underlying classifier outputs. After training, the model can be used to visualize the effect of changing multiple attributes per image, thus providing an image-specific explanation. We apply StylEx to multiple domains, including animals, leaves, faces and retinal images. For these, we show how an image can be changed in different ways to change its classifier prediction. Our results show that the method finds attributes that align well with semantic ones, generate meaningful image-specific explanations, and are interpretable as measured in user-studies. View details
    Adversarial Robustness of Streaming Algorithms through Importance Sampling
    Vladimir Braverman
    Sandeep Silwal
    Samson Zhou
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021) (2021)
    Preview abstract Robustness against adversarial attacks have recently been at the forefront of algorithmic design for machine learning tasks. In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates $u_1,\ldots,u_n$ as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model. Our results are based on a simple, but powerful, observation that sampling based algorithms give rise to adversarial robustness which is in contrast to sketching based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge and reduce paradigm in streaming is adversarially robust. Since the merge and reduce paradigm defines coreset constructions, we thus obtain robust algorithms for $k$-means, $k$-median, $k$-center, Bregman clustering, projective clustering, principal component analysis (PCA) and non-negative matrix factorization. To the best of our knowledge, these are the first adversarially robust methods for these problems. View details
    Preview abstract Given the ubiquity of negative campaigning in recent political elections, we find it important to study its properties from a computational perspective. To this end, we present a model where elections can be manipulated by convincing voters to demote specific non-favored candidates, and study its properties in the classic setting of scoring rules. When the goal is constructive (making a preferred candidate win), we prove that finding such a demotion strategy is easy for Plurality and Veto, while generally hard for t-approval and Borda. We also provide a t-factor approximation for t-approval for every fixed t, and a 3-factor approximation algorithm for Borda. Interestingly enough - following recent trends in political science that show that the effectiveness of negative campaigning depends on the type of candidate and demographic - when assigning varying prices to different possible demotion operations, we are able to provide inapproximability results. When the goal is destructive (making the leading opponent lose), we show that the problem is easy for a broad class of scoring rules. View details
    Learning and Evaluating a Differentially Private Pre-trained Language Model
    Shlomo Hoory
    Avichai Tendler
    Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic, pp. 1178-1189
    Preview abstract Contextual language models have led to significantly better results on a plethora of language understanding tasks, especially when pre-trained on the same data as the downstream task. While this additional pre-training usually improves performance, it often leads to information leakage and therefore risks the privacy of individuals mentioned in the training data. One method to guarantee the privacy of such individuals is to train a differentially private model, but this usually comes at the expense of model performance. Moreover, it is hard to tell given a privacy parameter $\epsilon$ what was the effect on the trained representation and whether it maintained relevant information while improving privacy. To improve privacy and guide future practitioners and researchers, we demonstrate here how to train a differentially private pre-trained language model (i.e., BERT) with a privacy guarantee of $\epsilon=0.5$ with only a small degradation in performance. We experiment on a dataset of clinical notes with a model trained on an entity extraction (EE) task on and compare it to a similar model trained without differential privacy. Finally, we present a series of experiments showing how to interpret the differentially private representation and understand the information lost and maintained in this process. View details
    Preview abstract We study conversational domain exploration (CODEX), where the user’s goal is to enrich her knowledge of a given domain by conversing with an informative bot. Such conversations should be well grounded in high-quality domain knowledge as well as engaging and open-ended. A CODEX bot should be proactive and introduce relevant information even if not directly asked for by the user. The bot should also appropriately pivot the conversation to undiscovered regions of the domain. To address these dialogue characteristics, we introduce a novel approach termed dynamic composition that decouples candidate content generation from the flexible composition of bot responses. This allows the bot to control the source, correctness and quality of the offered content, while achieving flexibility via a dialogue manager that selects the most appropriate contents in a compositional manner. We implemented a CODEX bot based on dynamic composition and integrated it into the Google Assistant. As an example domain, the bot conversed about the NBA basketball league in a seamless experience, such that users were not aware whether they were conversing with the vanilla system or the one augmented with our CODEX bot. Results are positive and offer insights into what makes for a good conversation. To the best of our knowledge, this is the first real user experiment of open-ended dialogues as part of a commercial assistant system. View details
    Active Deep Learning to Detect Demographic Traits in Free-Form Clinical Notes
    Danny Vainstein
    Roni Rosenfeld
    Tzvika Hartman
    Journal of Biomedical Informatics (2020)
    Preview abstract The free-form portions of clinical notes are a significant source of information for research, but before they can be used, they must be de-identified to protect patients' privacy. De-identification efforts have focused on known identifier types (names, ages, dates, addresses, ID's, etc.). However, a note can contain residual "Demographic Traits" (DTs), unique enough to re-identify the patient when combined with other such facts. Here we examine whether any residual risks remain after removing these identifiers. After manually annotating over 140,000 words worth of medical notes, we found no remaining directly identifying information, and a low prevalence of demographic traits, such as marital status or housing type. We developed an annotation guide to the discovered Demographic Traits (DTs) and used it to label MIMIC-III and i2b2-2006 clinical notes as test sets. We then designed a "bootstrapped" active learning iterative process for identifying DTs: we tentatively labeled as positive all sentences in the DT-rich note sections, used these to train a binary classifier, manually corrected acute errors, and retrained the classifier. This train-and-correct process may be iterated. Our active learning process significantly improved the classifier's accuracy. Moreover, our BERT-based model outperformed non-neural models when trained on both tentatively labeled data and manually relabeled examples. To facilitate future research and benchmarking, we also produced and made publicly available our human annotated DT-tagged datasets. We conclude that directly identifying information is virtually non-existent in the multiple medical note types we investigated. Demographic traits are present in medical notes, but can be detected with high accuracy using a cost-effective human-in-the-loop active learning process, and redacted if desired. View details
    Preview abstract Floods are among the most common and deadly natural disasters in the world, and flood warning systems have been shown to be effective in reducing harm. Yet the majority of the world's vulnerable population does not have access to reliable and actionable warning systems, due to core challenges in scalability, computational costs, and data availability. In this paper we present two components of flood forecasting systems which were developed over the past year, providing access to these critical systems to 75 million people who didn't have this access before. View details
    Spectral Algorithm for Shared Low-rank Matrix Regressions
    Yotam Gigi
    Sella Nevo
    Ami Wiesel
    2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop (SAM) (2020)
    Preview abstract We consider multiple matrix regression tasks that share common weights in order to reduce sample complexity. For this purpose, we introduce the common mechanism regression model which assumes a shared right low-rank component across all tasks, but allows an individual per-task left low-rank component. We provide a closed form spectral algorithm for recovering the common component and derive a bound on its error as a function of the number of related tasks and the number of samples available for each of them. Both the algorithm and its analysis are natural extensions of known results in the context of phase retrieval and low rank reconstruction. We demonstrate the efficacy of our approach for the challenging task of remote river discharge estimation across multiple river sites, where data for each task is naturally scarce. In this scenario sharing a low-rank component between the tasks translates to a shared spectral reflection of the water, which is a true underlying physical model. We also show the benefit of the approach in the setting of image classification where the common component can be interpreted as the shared convolution filters. View details
    Customization Scenarios for De-identification of Clinical Notes
    Danny Vainstein
    Gavin Edward Bee
    Jack Po
    Jutta Williams
    Kat Chou
    Ronit Yael Slyper
    Rony Amira
    Shlomo Hoory
    Tzvika Hartman
    BMC Medical Informatics and Decision Making (2020)
    Preview abstract Background: Automated machine-learning systems are able to de-identify electronic medical records, including free-text clinical notes. Use of such systems would greatly boost the amount of data available to researchers, yet their deployment has been limited due to uncertainty about their performance when applied to new datasets. Objective: We present practical options for clinical note de-identification, assessing performance of machine learning systems ranging from off-the-shelf to fully customized. Methods: We implement a state-of-the-art machine learning de-identification system, training and testing on pairs of datasets that match the deployment scenarios. We use clinical notes from two i2b2 competition corpora, the Physionet Gold Standard corpus, and parts of the MIMIC-III dataset. Results: Fully customized systems remove 97-99% of personally identifying information. Performance of off-the-shelf systems varies by dataset, with performance mostly above 90%. Providing a small labeled dataset or large unlabeled dataset allows for fine-tuning that improves performance over off-the-shelf systems. Conclusion: Health organizations should be aware of the levels of customization available when selecting a de-identification deployment solution, in order to choose the one that best matches their resources and target performance level. View details
    Preview abstract A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary. We establish a connection between adversarial robustness of streaming algorithms and the notion of differential privacy. This connection allows us to design new adversarially robust streaming algorithms that outperform the current state-of-the-art constructions for many interesting regimes of parameters. View details
    Preview abstract In this work we provide theoretical guarantees for reward decomposition in deterministic MDPs. Reward decomposition is a special case of Hierarchical Reinforcement Learning, that allows one to learn many policies in parallel and combine them into a composite solution. Our approach builds on mapping this problem into a Reward Discounted Traveling Salesman Problem, and then deriving approximate solutions for it. In particular, we focus on approximate solutions that are local, i.e., solutions that only observe information about the current state. Local policies are easy to implement and do not require many computational resources as they do not perform planning. While local deterministic policies, like Nearest Neighbor, are being used in practice for hierarchical reinforcement learning, we propose three stochastic policies that guarantee better performance than any deterministic policy View details
    Preview abstract Automatic speech recognition (ASR) systems have dramatically improved over the last few years. ASR systems are most often trained from ‘typical’ speech, which means that underrepresented groups don’t experience the same level of improvement. In this paper, we present and evaluate finetuning techniques to improve ASR for users with non standard speech. We focus on two types of non standard speech: speech from people with amyotrophic lateral sclerosis (ALS) and accented speech. We train personalized models that achieve 62% and 35% relative WER improvement on these two groups, bringing the absolute WER for ALS speakers, on a test set of message bank phrases, to 10% for mild dysarthria and 20% for more serious dysarthria. We show that 76% of the improvement comes from only 5 min of training data. Finetuning a particular subset of layers (with many fewer parameters) often gives better results than finetuning the entire model. This is the first step towards building state of the art ASR models for dysarthric speech Index Terms: speech recognition, personalization, accessibility View details
    Preview abstract Optimization of a machine learning model is typically carried out by performing stochastic gradient updates on epochs that consist of randomly ordered training examples. This practice means that eachfraction of an epoch comprises an independent random sample of the training data that may not preserve informative structure present in the full data. We hypothesize that the training can be more effective, allowing each epoch to provide some of the benefits of multiple ones, with more principled, ``self-similar'' arrangements. Our case study is matrix factorization, commonly used to learn metric embeddings of entities such as videos or words from example associations. We construct arrangements that preserve the weighted Jaccard similarities of rows and columns and experimentally observe that our arrangements yield training acceleration of 3\%-30\% on synthetic and recommendation datasets. Principled arrangements of training examples emerge as a novel and potentially powerful performance knob for SGD that merits further exploration. View details
    Approximating Weighted and Priced Bribery in Scoring Rules
    Noam Hazon
    Journal of Artificial Intelligence Research, vol. 66 (2019), pp. 1057-1098
    Preview abstract The classic Bribery problem is to find a minimal subset of voters who need to change their vote to make some preferred candidate win. Its important generalizations consider voters who are weighted and also have different prices. We provide an approximate solution for these problems for a broad family of scoring rules (which includes Borda, t-approval, and Dowdall). Our algorithm is based on a randomized reduction from these Bribery generalizations to weighted coalitional manipulation (WCM). To solve this WCM instance, we apply the Birkhoff-von Neumann (BvN) decomposition to a fractional manipulation matrix. This allows us to limit the size of the possible ballot search space reducing it from exponential to polynomial, while still obtaining good approximation guarantees. Finding a solution in the truncated search space yields a new algorithm for WCM, which is of independent interest. View details
    Preview abstract Imagine a large firm with multiple departments that plans a large recruitment. Candidates arrive one by-one, and for each candidate the firm decides, based on her data (CV, skills, experience, etc), whether to summon her for an interview. The firm wants to recruit the best candidates while minimizing the number of interviews. We model such scenarios as a matching problem between items (candidates) and categories (departments): the items arrive one-by-one in an online manner, and upon processing each item the algorithm decides, based on its value and the categories it can be matched with, whether to retain or discard it (this decision is irrevocable). The goal is to retain as few items as possible while guaranteeing that the set of retained items contains an optimal matching. We consider two variants of this problem: (i) in the first variant it is assumed that the n items are drawn independently from an unknown distribution D. (ii) In the second variant it is assumed that before the process starts, the algorithm has an access to a training set of n items drawn independently from the same unknown distribution (e.g. data of candidates from previous recruitment seasons). We give tight bounds on the minimum possible number of retained items in each of these variants. These results demonstrate that one can retain exponentially less items in the second variant (with the training set). Our algorithms and analysis utilize ideas and techniques from statistical learning theory and from discrete algorithms. View details
    New Approximations for Coalitional Manipulation in Scoring Rules
    Noam Hazon
    Journal of Artificial Intelligence Research, vol. 64 (2019), pp. 109-145
    Preview abstract We study the problem of coalitional manipulation---where k manipulators try to manipulate an election on m candidates---for any scoring rule, with focus on the Borda protocol. We do so in both the weighted and unweighted settings. For these problems, recent approximation approaches have tried to minimize k, the number of manipulators needed to make some preferred candidate p win (thus assuming that the number of manipulators is not limited in advance). In contrast, we focus on minimizing the score margin of p which is the difference between the maximum score of a candidate and the score of p. We provide algorithms that approximate the optimum score margin, which are applicable to any scoring rule. For the specific case of the Borda protocol in the unweighted setting, our algorithm provides a superior approximation factor for lower values of k. Our methods are novel and adapt techniques from multiprocessor scheduling by carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle. We believe that such methods could be beneficial in other social choice settings as well. View details
    Preview abstract Named Entity Recognition (NER) has been mostly studied in the context of written text. Specifically, NER is an important step in de-identification (de-ID) of medical records, many of which are recorded conversations between a patient and a doctor. In such recordings, audio spans with personal information should be redacted, similar to the redaction of sensitive character spans in de-ID for written text. The application of NER in the context of audio de-identification has yet to be fully investigated. To this end, we define the task of audio de-ID, in which audio spans with entity mentions should be detected. We then present our pipeline for this task, which involves Automatic Speech Recognition (ASR), NER on the transcript text, and text-to-audio alignment. Finally, we introduce a novel metric for audio de-ID and a new evaluation benchmark consisting of a large labeled segment of the Switchboard and Fisher audio datasets and detail our pipeline's results on it. View details
    ML for Flood Forecasting at Scale
    Sella Nevo
    Ami Wiesel
    Guy Shalev
    Mor Schlesinger
    Oleg Zlydenko
    Ran El-Yaniv
    Yotam Gigi
    Zach Moshe
    Proceedings of the NIPS AI for Social Good Workshop (2018)
    Preview abstract Effective riverine flood forecasting at scale is hindered by a multitude of factors, most notably the need to rely on human calibration in current methodology, the limited amount of data for a specific location, and the computational difficulty of building continent/global level models that are sufficiently accurate. Machine learning (ML) is primed to be useful in this scenario: learned models often surpass human experts in complex high-dimensional scenarios, and the framework of transfer or multitask learning is an appealing solution for leveraging local signals to achieve improved global performance. We propose to build on these strengths and develop ML systems for timely and accurate riverine flood prediction. View details
    Preview abstract The classic bribery problem is to find a minimal subset of voters who need to change their vote to make some preferred candidate win.We find an approximate solution for this problem for a broad family of scoring rules (which includes Borda and t-approval), in the following sense: if there is a strategy which requires bribing k voters, we efficiently find a strategy which requires bribing at most k + Õ(√k) voters. Our algorithm is based on a randomized reduction from bribery to coalitional manipulation (UCM). To solve the UCM problem, we apply the Birkhoff-von Neumann (BvN) decomposition to a fractional manipulation matrix. This allows us to limit the size of the possible ballot search space reducing it from exponential to polynomial, while still obtaining good approximation guarantees. Finding the optimal solution in the truncated search space yields a new algorithm for UCM, which is of independent interest. View details
    Preview abstract We study the problem of controlling linear time-invariant systems with known noisy dynamics and adversarially chosen quadratic losses. We present the first efficient online learning algorithms in this setting that guarantee O(√T) regret under mild assumptions, where T is the time horizon. Our algorithms rely on a novel SDP relaxation for the steady-state distribution of the system. Crucially, and in contrast to previously proposed relaxations, the feasible solutions of our SDP all correspond to strongly stable'' policies that mix exponentially fast to a steady state. View details
    Towards Global Remote Discharge Estimation: Using the Few to Estimate The Many
    Yotam Gigi
    Guy Shalev
    Sella Nevo
    Zach Moshe
    Ami Wiesel
    Proceedings of the NeurIPS AI for Social Good Workshop (2018)
    Preview abstract Learning hydrologic models for accurate riverine flood prediction at scale is a challenge of great importance. One of the key difficulties is the need to rely on in-situ river discharge measurements, that can be quite scarce and unreliable, particularly in regions where floods cause the most damage every year. Accordingly, in this work we tackle the problem of river discharge estimation at different river locations. A core characteristic of the data at hand (e.g. satellite measurements) is that we have few measurements for many locations, all sharing the same physics that underlie the water discharge. We capture this phenomenon in a simple but powerful common mechanism regression (CMR) model that has a local component as well as a shared one that captures the global discharge mechanism. The resulting learning objective is non-convex, but we show that we can find its global optimum by leveraging the power of joining local measurements across sites. In particular, using a spectral initialization with provable near-optimal accuracy, we can find the optimum using standard descent methods. We demonstrate the efficacy of our approach for the problem of discharge estimation using simulations. View details
    Planning and Learning with Stochastic Action Sets
    Martin Mladenov
    Proceedings of the Twenty-seventh International Joint Conference on Artificial Intelligence (IJCAI-18), Stockholm (2018), pp. 4674-4682
    Preview abstract This is an extended version of the paper Planning and Learning with Stochastic Action Sets that appeared in the Proceedings of the Twenty-seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pp.4674-4682, Stockholm (2018). In many practical uses of reinforcement learning (RL) the set of actions available at a given state is a random variable, with realizations governed by an exogenous stochastic process. Somewhat surprisingly, the foundations for such sequential decision processes have been unaddressed. In this work, we formalize and investigate MDPs with stochastic action sets (SAS-MDPs) to provide these foundations. We show that optimal policies and value functions in this model have a structure that admits a compact representation. From an RL perspective, we show that Q-learning with sampled action sets is sound. In model-based settings, we consider two important special cases: when individual actions are available with independent probabilities; and a sampling-based model for unknown distributions. We develop poly-time value and policy iteration methods for both cases; and in the first, we offer a poly-time linear programming solution. View details
    Preview abstract We present a joint audio-visual model for isolating a single speech signal from a mixture of sounds such as other speakers and background noise. Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video. In this paper, we present a deep network-based model that incorporates both visual and auditory signals to solve this task. The visual features are used to "focus" the audio on desired speakers in a scene and to improve the speech separation quality. To train our joint audio-visual model, we introduce AVSpeech, a new dataset comprised of thousands of hours of video segments from the Web. We demonstrate the applicability of our method to classic speech separation tasks, as well as real-world scenarios involving heated interviews, noisy bars, and screaming children, only requiring the user to specify the face of the person in the video whose speech they want to isolate. Our method shows clear advantage over state-of-the-art audio-only speech separation in cases of mixed speech. In addition, our model, which is speaker-independent (trained once, applicable to any speaker), produces better results than recent audio-visual speech separation methods that are speaker-dependent (require training a separate model for each speaker of interest). View details
    Preview abstract We study the problem of Borda Unweighted Coalitional Manipulation, where k manipulators try to manipulate an election on m candidates under the Borda protocol. This problem is known to be NP-hard. While most recent approaches to approximation tried to minimize k, the number of manipulators needed to make the preferred candidate win (thus assuming that the number of manipulators is not limited in advance), we focus instead on minimizing the maximum score obtainable by a non-preferred candidate. We provide a randomized, additive O(k √(m log m) ) approximation to this value; in other words, if there exists a strategy enabling the preferred candidate to win by an Ω(k √(m log m) ) margin, our method, with high probability, will find a strategy enabling her to win (albeit with a possibly smaller margin). It thus provides a somewhat stronger guarantee compared to the previous methods, where the addition of an extra manipulator implied (with respect to the original k) a strategy that provides an Ω(m)-additive approximation to a runner-up's score: when k is o(√(m/log m)), our strategy provides a stronger approximation. Our algorithm can also be viewed as a (1+o(1))-multiplicative approximation since the value we approximate has a natural Ω(km) lower bound. Our methods are novel and adapt techniques from multiprocessor scheduling by carefully rounding an exponentially-large configuration linear program that is solved by using the ellipsoid method with an efficient separation oracle. We believe that such methods could be beneficial in approximating coalitional manipulation in other election protocols as well. View details
    Preview abstract Bluetooth Smart (also known as Bluetooth Low Energy) beacons broadcast their presence in order to enable proximity-based applications by observer devices. This results in a privacy and security exposure: broadcast devices are typically susceptible to tracking and spoofing based on the IDs used by the beacons. We introduce a scheme consisting of cloud-based Ephemeral Identifiers (EID) which allows only authorized parties to properly identify the beacons broadcast; it mitigates the basic tracking and security threats while keeping high utility. We outline a formal model of privacy which is obtained with our scheme, present its implementation, and discuss possible extensions. The proposal outlined here is the basis for Google’s Eddystone standard, supported by some thirty industry partners. We supply an open source implementation for all the components of the system. View details
    Network Utilization: The Flow View
    Ariel Shaqed (Scolnicov)
    IEEE INFOCOM 2013, IEEE, Turin, Italy
    Preview abstract Building and operating a large backbone network can take months or even years, and it requires a substantial investment. Therefore, there is an economical drive to increase the utilization of network resources (links, switches, etc.) in order to improve the cost efficiency of the network. At the same time, the utilization of network components has a direct impact on the performance of the network and its resilience to failure, and thus operational considerations are a critical aspect of the decision regarding the desired network load and utilization. However, the actual utilization of the network resources is not easy to predict or control. It depends on many parameters like the traffic demand and the routing scheme (or Traffic Engineering if deployed), and it varies over time and space. As a result it is very difficult to actually define real network utilization and to understand the reasons for this utilization. In this paper we introduce a novel way to look at the network utilization. Unlike traditional approaches that consider the average link utilization, we take the flow perspective and consider the network utilization in terms of the growth potential of the flows in the network. After defining this new Flow Utilization, and discussing how it differs from common definitions of network utilization, we study ways to efficiently compute it over large networks. We then show, using real backbone data, that Flow Utilization is very useful in identifying network state and evaluating performance of TE algorithms. View details
    Preview abstract We consider the algorithmic challenges behind a novel interface that simplifies consumer research of online reviews by surfacing relevant comparable review bundles: reviews for two or more of the items being researched, all generated in similar enough circumstances to provide for easy comparison. This can be reviews by the same reviewer, or by the same demographic category of reviewer, or reviews focusing on the same aspect of the items. But such an interface will work only if the review ecosystem often has comparable review bundles for common research tasks. Here, we develop and evaluate practical algorithms for suggesting additional review targets to reviewers to maximize comparable pair coverage, the fraction of co-researched pairs of items that have both been reviewed by the same reviewer (or more generally are comparable in one of several ways). We show the exact problem and many subcases to be intractable, and give a greedy online, linear-time 2-approximation for a very general setting, and an offline 1.583-approximation for a narrower setting. We evaluate the algorithms on the Google+ Local reviews dataset, yielding more than 10x gain in pair coverage from six months of simulated replacement of existing reviews by suggested reviews. Even allowing for 90% of reviewers ignoring the suggestions, the pair coverage grows more than 2x in the simulation. To explore other parts of the parameter space, we also evaluate the algorithms on synthetic models. View details
    Global alignment of molecular sequences via ancestral state reconstruction
    Alex Andoni
    Costis Daskalakis
    Sebastien Roch
    Stochastic Processes and their Applications (2012)
    Preview
    Joint Cache Partition and Job Assignment on Multi-Core Processors
    WADS'13: Proceedings of the 13th International Conference on Algorithms and Data Structures (2013)
    Preview abstract Multicore processors with a shared cache pose a challenge for designers of embedded systems who try to achieve minimal and predictable execution times for workloads consisting of several jobs. To address this challenge, the cache is statically partitioned among the cores and the jobs are assigned to the cores so as to minimize the makespan. Several heuristic algorithms have been proposed that jointly decide how to partition the cache among the cores and how to assign the jobs. We initiate a theoretical study of this problem, which we call the joint cache partition and job assignment problem. By a careful analysis of the possible cache partitions we obtain a constant-factor approximation algorithm for this problem. For some practical special cases we obtain a 2-approximation algorithm, and we show how to improve the approximation factor even further by allowing the algorithm to use additional cache. We also study the improvements that can be obtained by allowing dynamic cache partitions and dynamic job assignments. We define a natural restriction of the well-known problem of scheduling on unrelated machines, in which machines are ordered by “strength”. We call this restriction the ordered unrelated machines scheduling problem. We show that our joint cache partition and job assignment problem is harder than this scheduling problem. The ordered unrelated machines scheduling problem is of independent interest, and we give a polynomial-time algorithm for certain natural workloads. View details
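    For intuition on the joint decision, here is a brute-force toy sketch (not the paper's approximation algorithm): it enumerates static cache partitions for a tiny instance and, for each, assigns jobs greedily to the core that would finish them earliest, keeping the partition with the smallest makespan. The runtime model is hypothetical.

```python
# Toy joint cache-partition / job-assignment search. Exhaustive over
# partitions, so suitable only for tiny illustrative instances.
from itertools import product
from typing import Callable, List, Tuple

def best_partition(
    jobs: List[str],
    runtime: Callable[[str, int], float],   # runtime of a job given cache units
    total_cache: int,
    num_cores: int,
) -> Tuple[Tuple[int, ...], float]:
    best_part, best_makespan = None, float("inf")
    # Candidate partitions: every split of the cache into unit chunks.
    for part in product(range(1, total_cache + 1), repeat=num_cores):
        if sum(part) != total_cache:
            continue
        finish = [0.0] * num_cores
        # Greedy list scheduling, longest (cache-insensitive) job first.
        for j in sorted(jobs, key=lambda j: -runtime(j, 1)):
            core = min(range(num_cores),
                       key=lambda c: finish[c] + runtime(j, part[c]))
            finish[core] += runtime(j, part[core])
        if max(finish) < best_makespan:
            best_part, best_makespan = part, max(finish)
    return best_part, best_makespan

# Hypothetical model: runtime inversely proportional to cache allocation.
rt = lambda j, c: {"j1": 8.0, "j2": 6.0, "j3": 4.0}[j] / c
print(best_partition(["j1", "j2", "j3"], rt, 4, 2))
```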
    Quantum Money
    Scott Aaronson
    Edward Farhi
    David Gosset
    Jon Kelner
    Communications of the ACM, vol. 55, no. 8 (2012), pp. 84-92
    Preview
    Super-polynomial quantum speed-ups for boolean evaluation trees with hidden structure
    Bohua Zhan
    Shelby Kimmel
    Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ACM, New York, NY, USA (2012), pp. 249-265
    Preview
    Preview abstract Many practically deployed flow algorithms produce their output as a set of values associated with the network links. However, to actually deploy a flow in a network we often need to represent it as a set of paths between the source and destination nodes. In this paper we consider the problem of decomposing a flow into a small number of paths. We show that there is some fixed constant β > 1 such that it is NP-hard to find a decomposition in which the number of paths is larger than the optimal by a factor of at most β. Furthermore, this holds even if arcs are associated with only three different flow values. We also show that straightforward greedy algorithms for the problem can produce decompositions much larger than the optimal one on certain well-tailored inputs. On the positive side, we present a new approximation algorithm that decomposes all but an ε-fraction of the flow into at most O(1/ε²) times the smallest possible number of paths. We compare the decompositions produced by these algorithms on real production networks and on synthetically generated data. Our results indicate that the dependency of the decomposition size on the fraction of flow covered is exponential; hence covering the last few percent of the flow may be costly, and if the application allows, it may be a good idea to decompose most but not all of the flow. The experiments also reveal that while the greedy approach works very well on realistic data, our novel algorithm, which has a provable worst-case guarantee, typically produces only slightly larger decompositions. View details
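    A minimal sketch of the straightforward greedy decomposition the paper analyzes, under the simplifying assumption of an acyclic single-commodity flow: repeatedly trace a source-to-sink path along arcs that still carry flow, peel off the bottleneck amount, and record it as one path.

```python
# Greedy path decomposition of a flow given as per-arc values.
# Assumes an acyclic flow satisfying conservation; with cycles the
# simple walk below could loop and a cycle-cancelling step is needed.
from typing import Dict, List, Tuple

def decompose(flow: Dict[Tuple[str, str], float], s: str, t: str,
              eps: float = 1e-9) -> List[Tuple[List[str], float]]:
    flow = dict(flow)        # work on a copy
    paths = []
    while True:
        # Walk from s to t along any arc with remaining flow.
        path, node = [s], s
        while node != t:
            nxt = next((v for (u, v), f in flow.items()
                        if u == node and f > eps), None)
            if nxt is None:
                return paths          # no residual s-t path remains
            path.append(nxt)
            node = nxt
        # Peel off the bottleneck amount along the traced path.
        amount = min(flow[(path[i], path[i + 1])]
                     for i in range(len(path) - 1))
        for i in range(len(path) - 1):
            flow[(path[i], path[i + 1])] -= amount
        paths.append((path, amount))

f = {("s", "a"): 3.0, ("a", "t"): 3.0, ("s", "b"): 2.0, ("b", "t"): 2.0}
print(decompose(f, "s", "t"))   # s-a-t carrying 3.0, s-b-t carrying 2.0
```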
    Quantum money from knots
    Edward Farhi
    David Gosset
    Andrew Lutomirski
    Peter Shor
    Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ACM, New York, NY, USA (2012), pp. 276-289
    Preview
    Topology Discovery of Sparse Random Graphs With Few Participants
    Animashree Anandkumar
    Jonathan Kelner
    ACM International Conference on Measurement and Modeling of Computer Systems SIGMETRICS (2011), Best Paper Award
    Preview abstract We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The remaining nodes are hidden and provide no information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm achieves a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. Our algorithms are simple to implement, computationally efficient, and exploit the locally tree-like property of sparse random graphs. Finally, we obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general there are graphs that cannot be discovered by any algorithm, even with a significant number of participants and with end-to-end information along all the paths between the participants. View details
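    As an illustration of why locally tree-like structure helps (a standard building block for reconstruction from end-to-end distances, not the paper's exact procedure): shortest-path distances on a tree satisfy the four-point condition, so participants can test whether their measured distances are consistent with a hidden tree topology even when internal nodes are unobserved.

```python
# Four-point condition check: in a tree metric, among the three pairwise
# distance sums d(i,j)+d(k,l), d(i,k)+d(j,l), d(i,l)+d(j,k),
# the two largest must be equal.
from itertools import combinations
from typing import Dict, Tuple

def four_point_ok(d: Dict[Tuple[str, str], float],
                  i: str, j: str, k: str, l: str,
                  tol: float = 1e-9) -> bool:
    def dist(a, b):
        return d.get((a, b), d.get((b, a), 0.0))
    sums = sorted([dist(i, j) + dist(k, l),
                   dist(i, k) + dist(j, l),
                   dist(i, l) + dist(j, k)])
    return abs(sums[2] - sums[1]) <= tol   # two largest sums coincide

# End-to-end distances measured on a star: the center node is hidden,
# yet the leaf-to-leaf distances still pass the tree-metric test.
d = {("a", "b"): 2, ("a", "c"): 2, ("a", "e"): 2,
     ("b", "c"): 2, ("b", "e"): 2, ("c", "e"): 2}
print(all(four_point_ok(d, *q) for q in combinations("abce", 4)))  # True
```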
    An Efficient Partitioning Oracle for Bounded-Treewidth Graphs
    Alan Edelman
    Krzysztof Onak
    Huy Nguyen
    APPROX-RANDOM (2011), pp. 530-541
    Preview
    Matching with couples revisited
    Itai Ashlagi
    Mark Braverman
    Electronic Commerce (EC) (2011), pp. 335-336
    Preview
    Preview abstract We study markets of indivisible items in which price-based (Walrasian) equilibria often do not exist due to the discrete, non-convex setting. Instead, we consider Nash equilibria of the market viewed as a game, where players bid for items and the highest bidder on an item wins it and pays his bid. We first observe that pure Nash equilibria of this game correspond exactly to price-based equilibria (and thus need not exist), but that mixed Nash equilibria always do exist, and we analyze their structure in several simple cases where no price-based equilibrium exists. We also undertake an analysis of the welfare properties of these equilibria, showing that while pure equilibria are always perfectly efficient (“first welfare theorem”), mixed equilibria need not be, and we provide upper and lower bounds on their amount of inefficiency. View details
    Suggesting (More) Friends Using the Implicit Social Graph
    Maayan Roth
    Tzvika Barenholz
    Assaf Ben-David
    Guy Flysher
    Ilan Horn
    Ari Leichtberg
    Ron Merom
    International Conference on Machine Learning (ICML) (2011)
    Preview abstract Although users of online communication tools rarely categorize their contacts into groups such as "family", "co-workers", or "jogging buddies", they nonetheless implicitly cluster contacts, by virtue of their interactions with them, forming implicit groups. In this paper, we describe the implicit social graph which is formed by users' interactions with contacts and groups of contacts, and which is distinct from explicit social graphs in which users explicitly add other individuals as their "friends". We introduce an interaction-based metric for estimating a user's affinity to his contacts and groups. We then describe a novel friend suggestion algorithm that uses a user's implicit social graph to generate a friend group, given a small seed set of contacts which the user has already labeled as friends. We show experimental results that demonstrate the importance of both implicit group relationships and interaction-based affinity ranking in suggesting friends. Finally, we discuss two applications of the Friend Suggest algorithm that have been released as Gmail features. View details
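    The affinity idea admits a simple sketch. The code below is illustrative, not the paper's exact Friend Suggest scoring: each past interaction is a set of contacts with a timestamp, affinity decays with age, and candidates are ranked by their decayed co-occurrence with the seed set. The half-life parameter and all names are hypothetical.

```python
# Sketch of interaction-based friend suggestion over an implicit
# social graph: rank contacts by time-decayed co-occurrence with a
# seed set of already-labeled friends.
import math, time
from typing import Dict, List, Set, Tuple

HALF_LIFE_DAYS = 30.0   # hypothetical recency-decay parameter

def suggest(interactions: List[Tuple[Set[str], float]],  # (contacts, unix time)
            seed: Set[str], now: float, k: int = 3) -> List[str]:
    scores: Dict[str, float] = {}
    for contacts, ts in interactions:
        if not (contacts & seed):
            continue                       # implicit group unrelated to seed
        age_days = (now - ts) / 86400.0
        weight = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
        for c in contacts - seed:
            scores[c] = scores.get(c, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:k]

now = time.time()
log = [({"ann", "bob", "eve"}, now - 86400),        # yesterday
       ({"ann", "dan"}, now - 40 * 86400)]          # 40 days ago
print(suggest(log, {"ann"}, now))  # bob and eve outrank the older dan
```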
    Leaky Pseudo-Entropy Functions
    Mark Braverman
    Yael Tauman Kalai
    ICS (2011), pp. 353-366
    Preview
    Quantum algorithms for testing properties of distributions
    Sergey Bravyi
    Aram Harrow
    IEEE Transactions on Information Theory (2011)
    Preview
    Probe Scheduling for Efficient Detection of Silent Failures
    Haim Kaplan
    Yishay Mansour
    Yoav Tzur
    Perform. Eval., vol. 79 (2014), pp. 73-89
    Probe Scheduling for Efficient Detection of Silent Failures
    Haim Kaplan
    Yishay Mansour
    Yoav Tzur
    CoRR, vol. abs/1302.0792 (2013)