Nitesh Goyal
Authored Publications
Online harassment is a major societal challenge that impacts multiple communities. Some members of these communities, like female journalists and activists, bear significantly higher impacts, since their profession requires easy accessibility, transparency about their identity, and involves highlighting stories of injustice. Through a multi-phased qualitative research study involving a focus group and interviews with 27 female journalists and activists, we mapped the journey of a target who goes through harassment. We introduce the PMCR framework as a way to focus on needs for Prevention, Monitoring, Crisis, and Recovery. We focused on Crisis and Recovery, and designed a tool to satisfy a target's needs related to documenting evidence of harassment during the crisis and creating reports that could be shared with support networks for recovery. Finally, we discuss users' feedback on this tool, highlighting the needs of targets as they face the burden of harassment, and offer recommendations to future designers and scholars on how to develop tools that can help targets manage their harassment.
Machine learning models are commonly used to detect toxicity in online conversations. These models are trained on datasets annotated by human raters. We explore how raters' self-described identities impact how they annotate toxicity in online comments. We first define the concept of specialized rater pools: rater pools formed based on raters' self-described identities, rather than at random. We formed three such rater pools for this study: specialized rater pools of raters from the U.S. who identify as African American, LGBTQ, and those who identify as neither. Each of these rater pools annotated the same set of comments, which contains many references to these identity groups. We found that rater identity is a statistically significant factor in how raters annotate toxicity for identity-related annotations. Using preliminary content analysis, we examined the comments with the most disagreement between rater pools and found nuanced differences in the toxicity annotations. Next, we trained models on the annotations from each of the different rater pools, and compared the scores of these models on comments from several test sets. Finally, we discuss how using raters who self-identify with the subjects of comments can create more inclusive machine learning models, and provide more nuanced ratings than those by random raters.
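The abstract above compares toxicity annotations across specialized rater pools and surfaces the comments with the most disagreement between pools. As an illustrative sketch only (pool names, comment IDs, and labels below are hypothetical, not from the study's data), per-pool toxicity rates and cross-pool disagreement could be computed along these lines:

```python
from collections import defaultdict

# Hypothetical annotations: (comment_id, rater_pool, is_toxic)
annotations = [
    ("c1", "african_american", 1), ("c1", "lgbtq", 1), ("c1", "control", 0),
    ("c2", "african_american", 0), ("c2", "lgbtq", 1), ("c2", "control", 0),
    ("c3", "african_american", 0), ("c3", "lgbtq", 0), ("c3", "control", 0),
]

def pool_toxicity_rates(annotations):
    """Fraction of annotations each rater pool labeled toxic."""
    counts = defaultdict(lambda: [0, 0])  # pool -> [toxic, total]
    for _, pool, toxic in annotations:
        counts[pool][0] += toxic
        counts[pool][1] += 1
    return {pool: toxic / total for pool, (toxic, total) in counts.items()}

def most_disagreed(annotations):
    """Rank comments by the spread in toxicity labels across pools."""
    by_comment = defaultdict(dict)
    for cid, pool, toxic in annotations:
        by_comment[cid][pool] = toxic
    spread = {cid: max(v.values()) - min(v.values())
              for cid, v in by_comment.items()}
    return sorted(spread, key=spread.get, reverse=True)
```

The paper additionally tests for statistical significance of rater identity; a sketch like this only surfaces which comments drive disagreement, which is the starting point for the content analysis the abstract describes.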
Capturing Covertly Toxic Speech via Crowdsourcing
Alyssa Whitlock Lees
Daniel Borkan
Ian Kivlichan
Jorge M Nario
HCI, https://sites.google.com/corp/view/hciandnlp/home (2021) (to appear)
We study the task of extracting covert or veiled toxicity labels from user comments. Prior research has highlighted the difficulty in creating language models that recognize nuanced toxicity such as microaggressions. Our investigations further underscore the difficulty in parsing such labels reliably from raters via crowdsourcing. We introduce an initial dataset, COVERTTOXICITY, which aims to identify such comments from a refined rater template, with rater associated categories. Finally, we fine-tune a comment-domain BERT model to classify covertly offensive comments and compare against existing baselines.
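The abstract above notes how hard it is to parse covert-toxicity labels reliably from crowdsourced raters. A minimal sketch of one common aggregation step, majority vote with an agreement threshold, is shown below; the category names and threshold are hypothetical and are not taken from the COVERTTOXICITY rater template:

```python
from collections import Counter

def aggregate_labels(ratings, min_agreement=0.5):
    """Majority-vote a list of per-rater category labels for one comment.

    Returns (label, agreement), where agreement is the fraction of raters
    who chose the winning label; returns ("unresolved", agreement) when no
    category clears the min_agreement threshold.
    """
    if not ratings:
        return ("unresolved", 0.0)
    label, count = Counter(ratings).most_common(1)[0]
    agreement = count / len(ratings)
    if agreement < min_agreement:
        return ("unresolved", agreement)
    return (label, agreement)

# Hypothetical example: three raters judge one comment.
label, agreement = aggregate_labels(
    ["microaggression", "microaggression", "not_toxic"])
```

Flagging low-agreement comments as "unresolved" rather than forcing a label is one way to cope with the rater disagreement the abstract describes, at the cost of a smaller training set for the downstream classifier.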
Designing for Mobile Experience Beyond the Native Ad Click: Exploring Landing Page Presentation Style & Media Usage
Marc Bron
Mounia Lalmas
Andrew Haines
Henriette Cramer
Journal of the Association for Information Science and Technology (2018)
Many free mobile applications are supported by advertising. Ads can greatly affect user perceptions and behavior. In mobile apps, ads often follow a “native” format: they are designed to conform in both format and style to the actual content and context of the application. Clicking on the ad leads users to a second destination, outside of the hosting app, where the unified experience provided by native ads within the app is not necessarily reflected by the landing page the user arrives at. Little is known about whether and how this type of mobile ad impacts user experience. In this paper, we use both quantitative and qualitative methods to study the impact of two design decisions for the landing page of a native ad on the user experience: (i) native ad style (following the style of the application) versus a non-native ad style; and (ii) pages with multimedia versus static pages. We found considerable variability in user experience with mobile ad landing pages when varying presentation style and multimedia usage, especially in the interaction between the presence of video and ad style (native or non-native). We also discuss insights and recommendations for improving the user experience with mobile native ads.
Intelligent Interruption Management using Electro Dermal Activity based Physiological Sensor for Collaborative Sensemaking
Susan R. Fussell
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., vol. 1 (2017), 52:1-52:21
RAMPARTS: Supporting Sensemaking with Spatially-Aware Mobile Interactions
Paweł Wozniak
Przemysław Kucharski
Lars Lischke
Sven Mayer
Morten Fjeld
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
Effects of Sensemaking Translucence on Distributed Collaborative Analysis
Susan R. Fussell
Effects of Sensemaking Translucence on Distributed Collaborative Analysis (2016)
Designing for Collaborative Sensemaking: Leveraging Human Cognition For Complex Tasks
Designing for Collaborative Sensemaking: Leveraging Human Cognition For Experts & non-Experts
AAAI HCOMP (2015)
Effects of implicit sharing in collaborative analysis
Gilly Leshed
Dan Cosley
Susan R. Fussell
Effects of Implicit Sharing in Collaborative Analysis, ACM (2014)
Effects of visualization and note-taking on sensemaking and analysis
Gilly Leshed
Dan Cosley
Effects of Visualization and Note-taking on Sensemaking and Analysis (2013)
Leveraging partner's insights for distributed collaborative sensemaking
Gilly Leshed
Dan Cosley
Leveraging Partner's Insights for Distributed Collaborative Sensemaking (2013)
Cultural differences across governmental website design
William Miner
Nikhil Nawathe
Cultural Differences Across Governmental Website Design, ACM (2012)
Massively distributed authorship of academic papers
Bill Tomlinson
Joel Ross
Paul Andre
Eric Baumer
Donald Patterson
Joseph Corneli
Martin Mahaux
Syavash Nobarany
Marco Lazzari
Birgit Penzenstadler
Andrew Torrance
David Callele
Gary Olson
Marcus Stünder
Fabio Romancini Palamedi
Albert Ali Salah
Eric Morrill
Xavier Franch
Florian Floyd Mueller
Joseph 'Jofish' Kaye
Rebecca W Black
Marisa L Cohn
Patrick C Shih
Johanna Brewer
Pirjo Näkki
Jeff Huang
Nilufar Baghaei
Craig Saper
Massively distributed authorship of academic papers, ACM (2012)
SPRING: speech and pronunciation improvement through games, for Hispanic children
Anuj Tewari
Matthew K. Chan
Tina Yau
John Canny
Ulrik Schroeder
SPRING: Speech and Pronunciation Improvement Through Games, for Hispanic Children (2010)