Research shows that oral cancer cells emit fluorescent substances under excitation at specific frequency bands. By exploiting this feature, the potential presence of oral cancer can be assessed by capturing oral fluorescence images. Accordingly, existing image-processing methods for oral cancer detection analyze the features of the autofluorescence image and use a quadratic discriminant analysis (QDA) classifier to classify the data as either cancer or non-cancer. QDA requires a large volume of training data to achieve a satisfactory accuracy rate. However, it is difficult to collect oral cavity images from patients. Furthermore, QDA classification errors often occur since the training data set usually consists of just single-view images. Accordingly, this study proposes a Generative Adversarial Network (GAN) model for learning multiple-view images from a single-view image. The generated multiple-view images are then used for re-classification, thereby improving the accuracy of the QDA classifier. As a result, the reconstructed multiple-angle-view oral images provide a more effective and safer classification result for oral screening in the future.
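The classification pipeline described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: it assumes scikit-learn's `QuadraticDiscriminantAnalysis` as the QDA classifier, uses synthetic placeholder feature vectors instead of real autofluorescence features, and stands in for the GAN-generated multiple views with simple noisy perturbations of a sample, averaging the per-view predictions.

```python
# Hedged sketch: QDA classification of fluorescence-feature vectors,
# with "multi-view" re-classification simulated by averaging predictions
# over several perturbed copies of a sample. Real features and
# GAN-generated views from the thesis are NOT reproduced here.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)

# Placeholder 4-D feature vectors for two classes (dimensions illustrative).
X_cancer = rng.normal(loc=1.5, scale=0.5, size=(100, 4))
X_normal = rng.normal(loc=0.0, scale=0.5, size=(100, 4))
X = np.vstack([X_cancer, X_normal])
y = np.array([1] * 100 + [0] * 100)  # 1 = cancer, 0 = non-cancer

qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)

# A new single-view sample from the "cancer" distribution.
sample = rng.normal(loc=1.5, scale=0.5, size=(1, 4))

# Stand-in for GAN-generated views: small perturbations of the sample.
views = sample + rng.normal(scale=0.1, size=(5, 4))

# Re-classify by averaging the predicted cancer probability across views.
probs = qda.predict_proba(views)[:, 1]
mean_cancer_prob = float(probs.mean())
print("cancer" if mean_cancer_prob > 0.5 else "non-cancer")
```

Averaging predictions over multiple views is one plausible way to combine them; the thesis may combine the generated views differently (e.g. feature-level fusion before classification).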
| Date of Award | 2019 |
| --- | --- |
| Original language | English |
| Supervisor | Pau-Choo Chung (Supervisor) |
Learning Complete Representation for Multi-view Oral Image Generation with Generative Adversarial Networks
育閔, 詹. (Author). 2019
Student thesis: Doctoral Thesis