TY - GEN
T1 - Fraud Detection Models and their Explanations for a Buy-Now-Pay-Later Application
AU - Shu, Joseph
AU - Shu, Lihchyun
AU - Chang, Wun Yan
AU - Su, Chiacheng
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/2/23
Y1 - 2024/2/23
AB - Buy-now-pay-later (BNPL) has been increasingly adopted by young shoppers as well as older generations because applying for a BNPL service is hassle-free compared with applying for a credit card. However, preventing fraudulent transactions is indispensable. In this study, machine learning techniques were used to create fraud detection models for a BNPL company. Of the four algorithms tested, Random Forest was the top performer, with an impressive F1 score of 0.92 and a low false positive rate of 0.072. To understand how the random forest classifier makes its decisions and to build human trust, we use SHAP (SHapley Additive exPlanations), a popular explainable artificial intelligence (XAI) method, to interpret the model with both global and local explanations. We find that the detection rules used by the company’s human experts involve features deemed important by SHAP. Furthermore, we discover other features, not previously considered by human experts, that are useful for determining whether a transaction is fraudulent. Through XAI explanations, humans and prediction models can work together to detect fraudulent transactions and learn from each other.
UR - http://www.scopus.com/inward/record.url?scp=85200512662&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85200512662&partnerID=8YFLogxK
U2 - 10.1145/3654522.3654588
DO - 10.1145/3654522.3654588
M3 - Conference contribution
AN - SCOPUS:85200512662
T3 - ACM International Conference Proceeding Series
SP - 439
EP - 445
BT - ICIIT 2024 - Proceedings of the 2024 9th International Conference on Intelligent Information Technology
PB - Association for Computing Machinery
T2 - 2024 9th International Conference on Intelligent Information Technology, ICIIT 2024
Y2 - 23 February 2024 through 25 February 2024
ER -