Risk Prediction of Vehicle Collision Based on A Combined Neural Network of CNN and LSTM

  • 謝 辰陽

Student thesis: Doctoral Thesis

Abstract

According to statistics from the National Police Agency, Ministry of the Interior, there were 320,315 traffic accidents in Taiwan in 2018, including 1,493 deaths. With the development of autonomous vehicles (AVs), vehicles can analyze the data captured by on-board sensors, such as LiDARs, radars, and cameras, to assess road-safety risk and take the necessary precautions. Currently, more and more people install a dashboard camera (dashcam) in their cars. A dashcam can not only clarify responsibility for a traffic accident but also monitor the surrounding conditions at any time while driving, which contributes to the goal of road safety.

This study collected video data of vehicle collisions provided by the Tainan City Traffic Accident Investigation Committee, including video recorded by dashcams and closed-circuit television (CCTV), to simulate the sensors of autonomous vehicles and to train vehicle collision risk prediction models. A ResNet-50 network, a pre-trained convolutional neural network (CNN), is used to extract the image features of each frame in the videos, and a long short-term memory (LSTM) network, which is well suited to processing time-series data, is used to capture the temporal features of the videos. Five models based on CNN and LSTM with different structures and input data are built, and the F1-score is used to evaluate their performance.

The results show that Model 5, which uses both vehicle dynamic feature data and video clip data, achieves the best performance with an F1-score of 0.94, and its predicted collision risk exceeds the 0.5 threshold 2.5 to 3.0 seconds before the collision occurs. Among the models that use only video data, Model 3 achieves an F1-score of 0.83, and its predicted collision risk exceeds the 0.5 threshold 3.0 seconds before the collision.
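The two quantities reported above, the F1-score and the time before the collision at which the predicted risk first exceeds the 0.5 threshold, can be illustrated with a minimal sketch. This is not the thesis's implementation; the function names, the frame rate, and the example risk series are hypothetical, and the model that produces the per-frame risk scores is assumed to exist elsewhere.

```python
def f1_score(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall),
    for binary labels where 1 = collision, 0 = no collision."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def seconds_before_collision(risk_series, fps, collision_frame, threshold=0.5):
    """Return how many seconds before the collision the per-frame risk
    first exceeds the threshold, or None if it never does.

    risk_series: hypothetical per-frame risk scores from a trained model.
    fps: video frame rate (assumed constant).
    collision_frame: frame index at which the collision occurs.
    """
    for i, risk in enumerate(risk_series):
        if risk > threshold:
            return (collision_frame - i) / fps
    return None


# Hypothetical example: risk rises over 4 frames of a 10-fps clip
# in which the collision happens at frame 30.
risks = [0.1, 0.2, 0.6, 0.9]
print(f1_score([1, 1, 0, 0], [1, 0, 0, 1]))            # → 0.5
print(seconds_before_collision(risks, fps=10, collision_frame=30))  # → 2.8
```

The early-warning time depends directly on the frame rate, so a model evaluated on 30-fps dashcam footage and one evaluated on lower-rate CCTV footage must convert frame indices to seconds before their lead times can be compared.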
Date of Award: 2020
Original language: English
Supervisor: Ta-Yin Hu (Supervisor)
