R.I.P. 👻 Ghosted
A Field Test of Bandit Algorithms for Recommendations: Understanding the Validity of Assumptions on Human Preferences in Multi-armed Bandits
April 16, 2023 · Entered Twilight · International Conference on Human Factors in Computing Systems
Repo contents: .gitignore, LICENSE, README.md, Scraping Comics.ipynb, human_bandit_evaluation, requirements.txt, run_scraping_comics.sh, scraping_comics.py, selected-comics
Authors
Liu Leqi, Giulio Zhou, Fatma Kılınç-Karzan, Zachary C. Lipton, Alan L. Montgomery
arXiv ID
2304.09088
Category
cs.IR: Information Retrieval
Cross-listed
cs.HC, cs.LG
Citations
4
Venue
International Conference on Human Factors in Computing Systems
Repository
https://github.com/HumainLab/human-bandit-evaluation
⭐ 3
Last Checked
2 months ago
Abstract
Personalized recommender systems suffuse modern life, shaping what media we read and what products we consume. Algorithms powering such systems tend to consist of supervised learning-based heuristics, such as latent factor models with a variety of heuristically chosen prediction targets. Meanwhile, theoretical treatments of recommendation frequently address the decision-theoretic nature of the problem, including the need to balance exploration and exploitation, via the multi-armed bandits (MABs) framework. However, MAB-based approaches rely heavily on assumptions about human preferences. These preference assumptions are seldom tested using human subject studies, partly due to the lack of publicly available toolkits to conduct such studies. In this work, we conduct a study with crowdworkers in a comics recommendation MABs setting. Each arm represents a comic category, and users provide feedback after each recommendation. We check the validity of core MABs assumptions (that human preferences, i.e., reward distributions, are fixed over time) and find that they do not hold. This finding suggests that any MAB algorithm used for recommender systems should account for human preference dynamics. While answering these questions, we provide a flexible experimental framework for understanding human preference dynamics and testing MABs algorithms with human users. The code for our experimental framework and the collected data can be found at https://github.com/HumainLab/human-bandit-evaluation.
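The setting the abstract describes (arms are comic categories, a reward after each recommendation, and a stationarity assumption that the study finds violated) can be illustrated with a minimal simulation. This is a sketch, not the authors' code from the linked repository: the epsilon-greedy learner, the Gaussian reward noise, and the random-walk preference drift are all illustrative assumptions chosen to show how a stationary-bandit algorithm behaves when the reward means quietly move.

```python
import random

def simulate(n_arms=3, horizon=500, drift=0.002, eps=0.1, seed=0):
    """Epsilon-greedy on a bandit whose mean rewards drift each step.

    Each arm stands in for a comic category; the drift term mimics the
    paper's finding that human preferences (reward distributions) are
    not fixed over time. Returns the average reward per step.
    """
    rng = random.Random(seed)
    means = [rng.random() for _ in range(n_arms)]  # latent preference per category
    counts = [0] * n_arms
    estimates = [0.0] * n_arms                     # sample-mean reward estimates
    total = 0.0
    for _ in range(horizon):
        # Explore with probability eps, otherwise exploit the current estimates.
        if rng.random() < eps:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = means[arm] + rng.gauss(0, 0.1)    # noisy feedback after the recommendation
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
        # Preferences drift between rounds, violating the stationarity assumption
        # that the sample-mean update above silently relies on.
        means = [min(1.0, max(0.0, m + rng.gauss(0, drift))) for m in means]
    return total / horizon

avg = simulate()
```

Because the sample-mean update weights old and new observations equally, the learner's estimates lag the drifting preferences; non-stationary variants (e.g., a constant step size instead of `1/counts[arm]`) are the usual remedy the paper's finding motivates.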
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt: Information Retrieval
R.I.P. 👻 Ghosted
LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
R.I.P. 👻 Ghosted
Graph Convolutional Neural Networks for Web-Scale Recommender Systems
Old Age
Neural Graph Collaborative Filtering
R.I.P. 👻 Ghosted
Self-Attentive Sequential Recommendation
R.I.P. 👻 Ghosted