Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-24616
Full metadata record
DC Field | Value | Language
dc.contributor.author | Klingler, Yasamin | -
dc.contributor.author | Lehmann, Claude | -
dc.contributor.author | Monteiro, Joao Pedro | -
dc.contributor.author | Saladin, Carlo | -
dc.contributor.author | Bernstein, Abraham | -
dc.contributor.author | Stockinger, Kurt | -
dc.date.accessioned | 2022-03-17T09:33:20Z | -
dc.date.available | 2022-03-17T09:33:20Z | -
dc.date.issued | 2022-03 | -
dc.identifier.isbn | 978-3-89318-086-8 | de_CH
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/24616 | -
dc.description.abstract | In recent years, top-K recommender systems with implicit feedback data gained interest in many real-world business scenarios. In particular, neural networks have shown promising results on these tasks. However, while traditional recommender systems are built on datasets with frequent user interactions, insurance recommenders often have access to a very limited amount of user interactions, as people only buy a few insurance products. In this paper, we shed new light on the problem of top-K recommendations for interaction-sparse recommender problems. In particular, we analyze six different recommender algorithms, namely a popularity-based baseline and compare it against two matrix factorization methods (SVD++, ALS), one neural network approach (JCA) and two combinations of neural network and factorization machine approaches (DeepFM, NeuFM). We evaluate these algorithms on six different interaction-sparse datasets and one dataset with a less sparse interaction pattern to elucidate the unique behavior of interaction-sparse datasets. In our experimental evaluation based on real-world insurance data, we demonstrate that DeepFM shows the best performance followed by JCA and SVD++, which indicates that neural network approaches are the dominant technologies. However, for the remaining five datasets we observe a different pattern. Overall, the matrix factorization method SVD++ is the winner. Surprisingly, the simple popularity-based approach comes out second followed by the neural network approach JCA. In summary, our experimental evaluation for interaction-sparse datasets demonstrates that in general matrix factorization methods outperform neural network approaches. As a consequence, traditional well-established methods should be part of the portfolio of algorithms to solve real-world interaction-sparse recommender problems. | de_CH
dc.language.iso | en | de_CH
dc.publisher | OpenProceedings | de_CH
dc.rights | http://creativecommons.org/licenses/by-nc-nd/4.0/ | de_CH
dc.subject | Recommender system | de_CH
dc.subject | Machine learning | de_CH
dc.subject | Neural network | de_CH
dc.subject.ddc | 006: Spezielle Computerverfahren | de_CH
dc.title | Evaluation of algorithms for interaction-sparse recommendations : neural networks don’t always win | de_CH
dc.type | Konferenz: Paper | de_CH
dcterms.type | Text | de_CH
zhaw.departement | School of Engineering | de_CH
zhaw.organisationalunit | Institut für Informatik (InIT) | de_CH
dc.identifier.doi | 10.48786/edbt.2022.42 | de_CH
dc.identifier.doi | 10.21256/zhaw-24616 | -
zhaw.conference.details | 25th International Conference on Extending Database Technology, Edinburgh (online), 29 March - 1 April 2022 | de_CH
zhaw.funding.eu | No | de_CH
zhaw.originated.zhaw | Yes | de_CH
zhaw.pages.end | 486 | de_CH
zhaw.pages.start | 475 | de_CH
zhaw.publication.status | acceptedVersion | de_CH
zhaw.publication.review | Peer review (Publikation) | de_CH
zhaw.title.proceedings | Proceedings of EDBT 2022 | de_CH
zhaw.webfeed | Datalab | de_CH
zhaw.webfeed | Information Engineering | de_CH
zhaw.funding.zhaw | NQuest – Natural Language Query Exploration System | de_CH
zhaw.author.additional | No | de_CH
zhaw.display.portrait | Yes | de_CH
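The abstract above compares a popularity baseline, matrix-factorization methods (SVD++, ALS) and neural approaches (JCA, DeepFM, NeuFM) for top-K recommendation on interaction-sparse implicit-feedback data. The snippet below is a purely illustrative sketch of that kind of comparison, not the authors' pipeline: it contrasts a global popularity ranking with a truncated-SVD recommender on a simulated sparse interaction matrix, using a leave-one-out hit rate@10 as a stand-in metric. The synthetic data, the 32 latent factors and the metric are assumptions made for the example.

# Illustrative sketch only (assumed: synthetic data, hit rate@10, 32 latent factors).
# It contrasts a popularity baseline with a truncated-SVD recommender on an
# interaction-sparse implicit-feedback matrix; it is not the paper's actual setup.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
n_users, n_items, K = 500, 200, 10

# Simulated implicit feedback at ~1% density (interaction-sparse regime).
interactions = sparse_random(n_users, n_items, density=0.01, format="csr", random_state=0)
interactions.data[:] = 1.0  # binarize: an entry means "user interacted with item"

# Leave-one-out split: hold out one known item per user where possible.
train = interactions.tolil()
held_out = {}
for u in range(n_users):
    items = list(train.rows[u])
    if len(items) > 1:
        held = items[rng.integers(len(items))]
        held_out[u] = held
        train[u, held] = 0.0
train = train.tocsr()

# Popularity baseline: one global ranking by interaction count.
popularity_rank = np.argsort(-np.asarray(train.sum(axis=0)).ravel())

# Truncated SVD as a simple matrix-factorization recommender.
u_f, s, vt = svds(train, k=32)
svd_scores = (u_f * s) @ vt  # reconstructed user-item preference scores

def hit_rate_at_k(recommend, k=K):
    """Fraction of users whose held-out item appears in their top-k list."""
    hits = 0
    for u, held in held_out.items():
        seen = set(train[u].indices)
        top_k = [i for i in recommend(u) if i not in seen][:k]
        hits += int(held in top_k)
    return hits / max(len(held_out), 1)

print("Popularity HR@10:", hit_rate_at_k(lambda u: popularity_rank))
print("SVD (k=32) HR@10:", hit_rate_at_k(lambda u: np.argsort(-svd_scores[u])))

On genuinely interaction-sparse data a plain popularity ranking can be a strong reference point, which mirrors the paper's finding that simple and matrix-factorization baselines remain competitive with neural approaches on such datasets.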
Appears in collections: Publikationen School of Engineering

Files in This Item:
File | Description | Size | Format
2022_Klingler-etal_Recommender-System_EDBT.pdf | Accepted Version | 440.43 kB | Adobe PDF
Cite this item:
Klingler, Y., Lehmann, C., Monteiro, J. P., Saladin, C., Bernstein, A., & Stockinger, K. (2022). Evaluation of algorithms for interaction-sparse recommendations : neural networks don’t always win [Conference paper]. Proceedings of EDBT 2022, 475–486. https://doi.org/10.48786/edbt.2022.42
Klingler, Y. et al. (2022) ‘Evaluation of algorithms for interaction-sparse recommendations : neural networks don’t always win’, in Proceedings of EDBT 2022. OpenProceedings, pp. 475–486. Available at: https://doi.org/10.48786/edbt.2022.42.
Y. Klingler, C. Lehmann, J. P. Monteiro, C. Saladin, A. Bernstein, and K. Stockinger, “Evaluation of algorithms for interaction-sparse recommendations : neural networks don’t always win,” in Proceedings of EDBT 2022, Mar. 2022, pp. 475–486. doi: 10.48786/edbt.2022.42.
KLINGLER, Yasamin, Claude LEHMANN, Joao Pedro MONTEIRO, Carlo SALADIN, Abraham BERNSTEIN and Kurt STOCKINGER, 2022. Evaluation of algorithms for interaction-sparse recommendations : neural networks don’t always win. In: Proceedings of EDBT 2022. Conference paper. OpenProceedings. March 2022. pp. 475–486. ISBN 978-3-89318-086-8
Klingler, Yasamin, Claude Lehmann, Joao Pedro Monteiro, Carlo Saladin, Abraham Bernstein, and Kurt Stockinger. 2022. “Evaluation of Algorithms for Interaction-Sparse Recommendations : Neural Networks Don’t Always Win.” Conference paper. In Proceedings of EDBT 2022, 475–86. OpenProceedings. https://doi.org/10.48786/edbt.2022.42.
Klingler, Yasamin, et al. “Evaluation of Algorithms for Interaction-Sparse Recommendations : Neural Networks Don’t Always Win.” Proceedings of EDBT 2022, OpenProceedings, 2022, pp. 475–86, https://doi.org/10.48786/edbt.2022.42.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.