|Title:||Metaheuristic Approaches for Occlusion Invariant 3D Face Recognition Technique|
|Keywords:||Metaheuristic Approaches;3D Face;Face Reconstruction;Face Recognition;Deep Learning|
|Abstract:||Face identification is one of the non-intrusive biometric strategies. Over the decades, many investigators have sought to build state-of-the-art face-recognition frameworks. AlexNet marked the beginning of the deep learning era, and many face-recognition problems have been overcome in the last eight years using deep learning methods ranging from multilayer perceptrons to convolutional neural networks. Transfer learning has also proven handy where the dataset is small and the number of subjects is limited. For the last three to four years, researchers have been focusing on 3D facial recognition and reconstruction. There are various methods for 3D face reconstruction, namely morphable-model-based, epipolar-geometry-based, one-shot-learning-based, shape-from-shading-based, and deep-learning-based reconstruction. This work proposes 3D face reconstruction techniques based on voxels as well as facial landmarks. Three techniques are proposed, viz. 3D voxel-based face reconstruction using sequential deep learning, 3D landmark-based face reconstruction, and voxel-based occlusion-invariant face recognition using game theory and simulated annealing. Three datasets, namely the Bosphorus Database, the University of Milano Bicocca 3D Face Database, and the Kinect Face Database, have been used for the training and testing phases. Using the game theory and simulated annealing method, the overall classification accuracy obtained from voxel-based facial recognition is 86.1%. For occlusion-invariant facial recognition, the average accuracy of the proposed technique is 75.5%. For facial recognition using 3D landmarks, the average accuracy of the given methodology is 81.3%. For 3D mesh face recognition, the average precision of the methodology is 83.9%.
The adversarial triplet-generation technique promotes minimal bias; coupled with simulated annealing, it allows the proposed voxel-based system to be resilient across a variety of conditions. In the performance evaluation of the 3D voxel-based face reconstruction (3DVFR) method, all predictions are reported both with and without reconstruction. For gender identification, the accuracy exceeds 90% after reconstruction on all three datasets, owing to the binary nature of the task. For emotion identification on the Bosphorus dataset, the accuracy is 94.57% after reconstruction compared to 83.95% before, an improvement of 10.62 percentage points; the University of Milano Bicocca 3D Face Database and the Kinect Face Database show rises of 14.9% and 12.49%, respectively. In occlusion classification, there are rises of 4.07%, 4.82%, and 7.05% for the Bosphorus, University of Milano Bicocca 3D Face, and Kinect Face databases. The proposed 3DVFR architecture achieves state-of-the-art accuracy of 90.01% in voxel-based 3D face recognition on the Bosphorus dataset; on the University of Milano Bicocca 3D Face Database and the Kinect Face Database, the recognition performance is 78.21% and 85.68%, respectively. The 3D landmark-based facial reconstruction method has been compared with three recently developed approaches over three well-known databases. The experimental findings indicate that the proposed method obtains an 8-10% improvement over existing techniques, and its computation time is roughly 50% lower than that of the other approaches. Furthermore, ablation experiments evaluate the proposed model on occluded versus non-occluded faces and on gender, emotion, and occlusion-type recognition.|
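The abstract does not detail how the simulated annealing component searches for a match, so as a rough illustration only, the following is a minimal, generic simulated annealing loop in Python. The `cost` and `neighbor` functions and the toy 1-D objective are hypothetical stand-ins (e.g., for a voxel matching cost), not the thesis's actual implementation:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=500, seed=0):
    """Generic simulated annealing: minimise `cost` starting from state x0.

    `neighbor` proposes a candidate from the current state; a worse candidate
    is accepted with probability exp(-delta / temperature), and the
    temperature decays geometrically by `alpha` each step.
    """
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        delta = fy - fx
        # Always accept improvements; accept worse moves with decaying probability.
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest

# Toy usage: minimise a 1-D quadratic (a stand-in for a real matching cost).
cost = lambda x: (x - 3.0) ** 2
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, fbest = simulated_annealing(cost, neighbor, x0=0.0)
```

In the thesis's setting the state would be a candidate voxel alignment or identity hypothesis rather than a scalar, but the accept/cool loop is the same.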
|Appears in Collections:||Doctoral Theses@CSED|
Files in This Item:
|Sahil Sharma PhD Thesis.pdf||4.37 MB||Adobe PDF|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.