TY - JOUR
T1 - Trompt: Towards a Better Deep Neural Network for Tabular Data
T2 - 40th International Conference on Machine Learning, ICML 2023
AU - Chen, Kuan-Yu
AU - Chiang, Ping-Han
AU - Chou, Hsin-Rung
AU - Chen, Ting-Wei
AU - Chang, Darby Tien-Hao
N1 - Publisher Copyright:
© 2023 Proceedings of Machine Learning Research. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Tabular data is arguably one of the most commonly used data structures in various practical domains, including finance, healthcare and e-commerce. However, based on a recently published tabular benchmark, we can see deep neural networks still fall behind tree-based models on tabular datasets (Grinsztajn et al., 2022). In this paper, we propose Trompt, which stands for Tabular Prompt, a novel architecture inspired by prompt learning of language models. The essence of prompt learning is to adjust a large pre-trained model through a set of prompts outside the model without directly modifying the model. Based on this idea, Trompt separates the learning strategy of tabular data into two parts: the intrinsic information of a table and the varied information among samples. Trompt is evaluated with the benchmark mentioned above. The experimental results demonstrate that Trompt outperforms state-of-the-art deep neural networks and is comparable to tree-based models (Figure 1).
AB - Tabular data is arguably one of the most commonly used data structures in various practical domains, including finance, healthcare and e-commerce. However, based on a recently published tabular benchmark, we can see deep neural networks still fall behind tree-based models on tabular datasets (Grinsztajn et al., 2022). In this paper, we propose Trompt, which stands for Tabular Prompt, a novel architecture inspired by prompt learning of language models. The essence of prompt learning is to adjust a large pre-trained model through a set of prompts outside the model without directly modifying the model. Based on this idea, Trompt separates the learning strategy of tabular data into two parts: the intrinsic information of a table and the varied information among samples. Trompt is evaluated with the benchmark mentioned above. The experimental results demonstrate that Trompt outperforms state-of-the-art deep neural networks and is comparable to tree-based models (Figure 1).
UR - http://www.scopus.com/inward/record.url?scp=85174400080&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85174400080&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85174400080
SN - 2640-3498
VL - 202
SP - 5036
EP - 5051
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 23 July 2023 through 29 July 2023
ER -