Relation-Aware Image Captioning with Hybrid-Attention for Explainable Visual Question Answering

Ying Jia Lin, Ching Shan Tseng, Hung Yu Kao

Research output: Article, peer-reviewed

Abstract

Recent studies leveraging object detection as the preliminary step for Visual Question Answering (VQA) ignore the relationships between objects in an image that are relevant to the textual question. In addition, previous VQA models act as black-box functions, making it difficult to explain why a model gives a particular answer for a given input. To address these issues, we propose a new model structure that strengthens the representations of different objects and provides explainability for the VQA task. We construct a relation graph to capture the relative positions between region pairs and then create relation-aware visual features with a relation encoder based on graph attention networks. To make the final VQA predictions explainable, we introduce a multi-task learning framework with an additional explanation generator that helps our model produce reasonable explanations. Simultaneously, the generated explanations are incorporated with the visual features through a novel Hybrid-Attention mechanism to enhance cross-modal understanding. Experiments show that the proposed method performs better on the VQA task than several baselines. In addition, incorporating the explanation generator allows the model to provide reasonable explanations along with its predicted answers.
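The abstract outlines two components: a graph-attention relation encoder over detected regions and a Hybrid-Attention fusion of visual features with generated explanations. The following is a minimal sketch of how such components could look in PyTorch; the class names, dimensions, and fusion strategy are illustrative assumptions and do not correspond to the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): graph attention over object regions,
# restricted by a relation graph, followed by a simple cross-modal fusion between
# region features and explanation token features.
import torch
import torch.nn as nn


class RelationGraphAttention(nn.Module):
    """Single-head graph attention over region features, masked by a relation graph."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, regions, adj):
        # regions: (batch, num_regions, dim); adj: (batch, num_regions, num_regions)
        q, k, v = self.query(regions), self.key(regions), self.value(regions)
        scores = torch.matmul(q, k.transpose(-1, -2)) / regions.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only related pairs
        attn = torch.softmax(scores, dim=-1)
        return torch.matmul(attn, v)  # relation-aware region features


class HybridAttentionFusion(nn.Module):
    """Explanation tokens attend to region features; pooled results are fused."""

    def __init__(self, dim):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, regions, expl_tokens):
        attended, _ = self.cross(expl_tokens, regions, regions)  # text attends to vision
        pooled_vis = regions.mean(dim=1)
        pooled_txt = attended.mean(dim=1)
        return self.fuse(torch.cat([pooled_vis, pooled_txt], dim=-1))


if __name__ == "__main__":
    batch, num_regions, num_tokens, dim = 2, 36, 12, 256
    regions = torch.randn(batch, num_regions, dim)
    # Random relation graph with self-loops so every region attends to at least itself.
    adj = (torch.rand(batch, num_regions, num_regions) > 0.5).float()
    adj = torch.clamp(adj + torch.eye(num_regions), max=1.0)
    expl_tokens = torch.randn(batch, num_tokens, dim)

    relation_encoder = RelationGraphAttention(dim)
    fusion = HybridAttentionFusion(dim)
    fused = fusion(relation_encoder(regions, adj), expl_tokens)
    print(fused.shape)  # torch.Size([2, 256]) -> joint feature for answer prediction
```

In this sketch the relation graph simply masks attention to region pairs that are related, and the fused vector would feed a downstream answer classifier; the paper's actual relation encoding, explanation generation, and multi-task objectives are more elaborate.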

Original language: English
Pages (from-to): 649-659
Number of pages: 11
Journal: Journal of Information Science and Engineering
Volume: 40
Issue number: 3
DOIs
Publication status: Published - May 2024

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Hardware and Architecture
  • Library and Information Sciences
  • Computational Theory and Mathematics
