Scaling Sequential Recommendation Models with Transformers
December 10, 2024 · Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Repo contents: LICENSE, MyRecbole, README.md, confs, logs, mlruns.sqlite.bz2, requirements_cpu.txt, requirements_gpu.txt, scaling-law.ipynb, scripts, setup.py, srt
Authors: Pablo Zivic, Hernan Vazquez, Jorge Sanchez
arXiv ID: 2412.07585
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI
Citations: 22
Venue: Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Repository: https://github.com/mercadolibre/srt (12 stars)
Last Checked: 2 months ago
Abstract
Modeling user preferences has been mainly addressed by looking at users' interaction history with the different elements available in the system. Tailoring content to individual preferences based on historical data is the main goal of sequential recommendation. The nature of the problem, as well as the good performance observed across various domains, has motivated the use of the transformer architecture, which has proven effective in leveraging increasingly large amounts of training data when accompanied by an increase in the number of model parameters. This scaling behavior has attracted a great deal of attention, as it provides valuable guidance for the design and training of even larger models. Taking inspiration from the scaling laws observed in training large language models, we explore similar principles for sequential recommendation. We use the full Amazon Product Data dataset, which has only been partially explored in other studies, and reveal scaling behaviors similar to those found in language models. Compute-optimal training is possible but requires a careful analysis of the compute-performance trade-offs specific to the application. We also show that performance scaling translates to downstream tasks by fine-tuning larger pre-trained models on smaller task-specific domains. Our approach and findings provide a strategic roadmap for model training and deployment in real high-dimensional preference spaces, facilitating better training and inference efficiency. We hope this paper bridges the gap between the potential of transformers and the intrinsic complexities of high-dimensional sequential recommendation in real-world recommender systems. Code and models can be found at https://github.com/mercadolibre/srt.
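To make the abstract's central idea concrete, here is a minimal sketch of the kind of scaling-law analysis it describes: fitting a power law L(C) = a * C^(-b) relating training compute C to validation loss L, then extrapolating to a larger budget. The data points are fabricated for illustration only; they are not results from the paper, and the paper's actual fitting procedure may differ.

```python
# Hypothetical sketch of a scaling-law fit (fabricated data, not the paper's results).
import numpy as np

# (training compute in FLOPs, validation loss) -- made-up example points
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
loss = np.array([3.9, 3.4, 3.0, 2.65, 2.35])

# A power law L(C) = a * C^(-b) is linear in log-log space:
# log L = log a - b * log C, so least squares on the logs recovers (a, b).
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(f"fitted scaling law: L(C) = {a:.2f} * C^(-{b:.4f})")

# Extrapolate to a larger compute budget to guide model/data sizing decisions.
c_target = 1e20
print(f"predicted loss at {c_target:.0e} FLOPs: {a * c_target ** (-b):.2f}")
```

A fit like this is what makes "compute-optimal training" actionable: once the loss-compute curve is estimated from small runs, one can decide how to split a fixed compute budget between model size and training data before committing to a large run.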
Similar Papers
In the same crypt: Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms