NEAR: Non-Supervised Explainability Architecture for Accurate Review-Based Collaborative Filtering

Reinald Adrian Pugoy, Hung Yu Kao

Research output: Article › peer-review


There is a critical issue in explainable recommender systems that compounds the challenges of explainability yet is rarely tackled: the lack of ground-truth explanation texts for training. It is unrealistic to expect every user-item pair in a dataset to have a corresponding target explanation. Hence, we pioneer the first non-supervised explainability architecture for review-based collaborative filtering (called NEAR) as our novel contribution to the theory of explanation construction in recommender systems. While maintaining excellent recommendation performance, our approach reformulates explainability as a non-supervised (i.e., unsupervised and self-supervised) explanation generation task. We formally define two explanation types, both of which NEAR can produce. An invariant explanation, fixed for all users, is an unsupervised extractive summary of an item's reviews obtained via embedding clustering. Meanwhile, a variant explanation, personalized for a specific user, is a sentence-level text generated by our customized Transformer conditioned on every user-item-rating tuple and on an artificial ground truth (a self-supervised label) drawn from one of the invariant explanation's sentences. Our empirical evaluation illustrates that NEAR's rating prediction accuracy is better than that of other state-of-the-art baselines. Moreover, experiments and assessments show that NEAR-generated variant explanations are more personalized and distinct than those from other Transformer-based models, and our invariant explanations are preferred over those from other contemporary models in real-life assessments.
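The abstract describes the invariant explanation as an unsupervised extractive summary built by clustering sentence embeddings of an item's reviews. A minimal sketch of that idea follows, using a plain NumPy k-means and selecting, per cluster, the sentence closest to the centroid; the function names, the number of clusters, and the toy embeddings are illustrative assumptions, not the paper's actual implementation (which would use a trained sentence encoder).

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means for illustration (not the paper's clustering method)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each embedding to its nearest centroid.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

def invariant_explanation(sentences, embeddings, k=2):
    """Extractive summary: one representative review sentence per cluster."""
    labels, centers = kmeans(embeddings, k)
    picked = []
    for c in range(k):
        idx = np.flatnonzero(labels == c)
        if idx.size == 0:
            continue
        # Representative = sentence whose embedding is closest to the centroid.
        d = np.linalg.norm(embeddings[idx] - centers[c], axis=1)
        picked.append(int(idx[d.argmin()]))
    return [sentences[i] for i in sorted(picked)]

# Toy demo: embeddings form two obvious topic clusters (battery vs. display).
sents = ["battery lasts long", "battery is great",
         "screen is sharp", "display looks crisp"]
emb = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.0, 10.1]])
print(invariant_explanation(sents, emb, k=2))
```

In NEAR's setting, one sentence of this extractive summary would then serve as the artificial ground truth (self-supervised label) for training the variant-explanation generator.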

Pages (from-to): 750-765
Journal: IEEE Transactions on Knowledge and Data Engineering
Publication status: Published - 1 Feb 2024

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics

