|Publication type:||Conference paper|
|Type of review:||Peer review (publication)|
|Title:||Evaluation of algorithms for interaction-sparse recommendations: neural networks don’t always win|
|Authors:||Klingler, Yasamin; Lehmann, C.; Monteiro, Joao Pedro; Saladin, C.; Bernstein, A.; Stockinger, K.|
|Proceedings:||Proceedings of EDBT 2022|
|Conference details:||25th International Conference on Extending Database Technology, Edinburgh (online), 29 March - 1 April 2022|
|Publisher / Ed. Institution:||OpenProceedings|
|Subjects:||Recommender system; Machine learning; Neural network|
|Subject (DDC):||006: Special computer methods|
|Abstract:||In recent years, top-K recommender systems with implicit feedback data have gained interest in many real-world business scenarios. In particular, neural networks have shown promising results on these tasks. However, while traditional recommender systems are built on datasets with frequent user interactions, insurance recommenders often have access to very few user interactions, as people buy only a handful of insurance products. In this paper, we shed new light on the problem of top-K recommendation for interaction-sparse recommender problems. In particular, we analyze six recommender algorithms: a popularity-based baseline, two matrix factorization methods (SVD++, ALS), one neural network approach (JCA), and two approaches that combine neural networks with factorization machines (DeepFM, NeuFM). We evaluate these algorithms on six interaction-sparse datasets and on one dataset with a less sparse interaction pattern to elucidate the distinctive behavior of interaction-sparse datasets. In our experimental evaluation on real-world insurance data, DeepFM shows the best performance, followed by JCA and SVD++, which suggests that neural network approaches dominate there. However, for the remaining five datasets we observe a different pattern: overall, the matrix factorization method SVD++ is the winner, the simple popularity-based approach surprisingly comes second, and the neural network approach JCA follows third. In summary, our experimental evaluation demonstrates that on interaction-sparse datasets matrix factorization methods generally outperform neural network approaches. As a consequence, traditional, well-established methods should remain part of the portfolio of algorithms for solving real-world interaction-sparse recommender problems.|
|Fulltext version:||Published version|
|License (according to publishing contract):||CC BY-NC-ND 4.0: Attribution - Non commercial - No derivatives 4.0 International|
|Departement:||School of Engineering|
|Organisational Unit:||Institute of Applied Information Technology (InIT)|
|Published as part of the ZHAW project:||NQuest – Natural Language Query Exploration System|
|Appears in collections:||Publikationen School of Engineering|
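To illustrate the kind of baseline the abstract refers to, here is a minimal sketch of a popularity-based top-K recommender for implicit-feedback data: rank items by global interaction count and recommend the K most popular items the user has not yet interacted with. All names and data are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal popularity-based top-K baseline for implicit feedback.
# Illustrative sketch only; the paper's implementation may differ.
from collections import Counter

def popularity_top_k(interactions, user, k=3):
    """interactions: list of (user, item) implicit-feedback pairs.
    Returns the k globally most popular items the user has not seen."""
    counts = Counter(item for _, item in interactions)          # global popularity
    seen = {item for u, item in interactions if u == user}      # user's own items
    ranked = [item for item, _ in counts.most_common() if item not in seen]
    return ranked[:k]

# Hypothetical insurance-style interaction data: each user holds few products.
interactions = [
    ("u1", "car"), ("u2", "car"), ("u3", "car"),
    ("u1", "home"), ("u2", "home"),
    ("u3", "life"),
]
print(popularity_top_k(interactions, "u1", k=2))  # → ['life']
```

Despite its simplicity, such a baseline has no parameters to fit, which may explain why it remains competitive when per-user interaction data is extremely sparse.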
Files in This Item:
|2022_Klingler-etal_Recommender-System_EDBT.pdf||440.43 kB||Adobe PDF|
Klingler, Y., Lehmann, C., Monteiro, J. P., Saladin, C., Bernstein, A., & Stockinger, K. (2022, March). Evaluation of algorithms for interaction-sparse recommendations: neural networks don’t always win. Proceedings of EDBT 2022. https://doi.org/10.21256/zhaw-24616
Klingler, Y. et al. (2022) ‘Evaluation of algorithms for interaction-sparse recommendations: neural networks don’t always win’, in Proceedings of EDBT 2022. OpenProceedings. Available at: https://doi.org/10.21256/zhaw-24616.
Y. Klingler, C. Lehmann, J. P. Monteiro, C. Saladin, A. Bernstein, and K. Stockinger, “Evaluation of algorithms for interaction-sparse recommendations: neural networks don’t always win,” in Proceedings of EDBT 2022, Mar. 2022. doi: 10.21256/zhaw-24616.
Klingler, Yasamin, et al. “Evaluation of Algorithms for Interaction-Sparse Recommendations: Neural Networks Don’t Always Win.” Proceedings of EDBT 2022, OpenProceedings, 2022, https://doi.org/10.21256/zhaw-24616.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.