Development of Efficient Beamforming Techniques Using Machine Learning for 5G Technology

dc.contributor.author: Bhadauria, Prateek
dc.contributor.supervisor: Kumar, Ravi
dc.contributor.supervisor: Sharma, Sanjay
dc.date.accessioned: 2024-10-23T10:58:55Z
dc.date.available: 2024-10-23T10:58:55Z
dc.date.issued: 2024-10-23
dc.description.abstract: Wireless applications have experienced rapid growth, creating new opportunities in fifth-generation (5G) network technology. New beamforming techniques have emerged to handle the large volume of data traffic in vehicular communication. Robust beamforming enhances network performance in vehicle-to-infrastructure (V2I) applications by reducing the interference caused by the dynamic nature of vehicular scenarios. Adaptive beamforming techniques automatically adjust the steering-vector weights by exploiting phase shifters and time-delay properties in the direction of propagation. Hence, the direction of arrival (DoA) is used to predict beam parameters optimally through beam-selection techniques. However, such network designs become inappropriate when applied to millimeter-wave (mmWave) bands. This research develops a deep-learning-based long short-term memory (LSTM) method built on a minimum variance distortionless response (MVDR) beamformer. The approach predicts optimal weights and provides comparable performance in signal-to-interference-plus-noise ratio (SINR). The root mean square error (RMSE) shows significant variation among different machine learning (ML) and deep learning (DL) algorithms. Network performance is enhanced by steering the beamformer's nulls and anticipating interference characteristics in the initial phases of vehicular communication. The study embodies work carried out on real-time estimation of beam vectors in the V2I scenario. The considered scenario contains a feature peculiar to the Indian subcontinent: the presence of a high number of pedestrians alongside slow-moving traffic. The multifaceted problem of optimal beamforming in such a scenario is addressed using the contemporary ML/DL paradigm.
The long-term dependencies in the received beam vectors are exploited and learned by an LSTM predictor with a moderate number of hidden units. A nonlinear autoregressive (NAR) predictor, a shallow architecture, is also used as a baseline against which the LSTM model is compared. The LSTM predictor is observed to perform better than the NAR model in terms of mean square error (MSE). Furthermore, the LSTM's performance is reliable, with few outliers across a predefined set of simulation runs. Thus, the present work puts forth a new approach to efficient beamforming. In this thesis, interference is predicted using both NAR- and LSTM-based techniques, and their performance is compared under different optimizers. The LSTM method outperforms the other neural-network algorithms, achieving an SINR of 40 dB. [en_US]
dc.identifier.uri: http://hdl.handle.net/10266/6904
dc.language.iso: en [en_US]
dc.subject: Beamforming [en_US]
dc.subject: 5G [en_US]
dc.subject: Machine Learning [en_US]
dc.subject: V2I [en_US]
dc.subject: LSTM [en_US]
dc.title: Development of Efficient Beamforming Techniques Using Machine Learning for 5G Technology [en_US]
dc.type: Thesis [en_US]
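As context for the MVDR beamformer named in the abstract, the following sketch illustrates the standard MVDR (Capon) weight computation, w = R⁻¹a / (aᴴR⁻¹a), for a uniform linear array. This is a hypothetical illustration, not code from the thesis: the array size, the DoAs of the desired signal and interferer, the interference power, and the snapshot count are all assumptions chosen for the example.

```python
import numpy as np

def steering_vector(n_elements, theta_deg, spacing=0.5):
    """ULA steering vector for half-wavelength element spacing (assumed geometry)."""
    idx = np.arange(n_elements)
    return np.exp(2j * np.pi * spacing * idx * np.sin(np.deg2rad(theta_deg)))

def mvdr_weights(R, a):
    """MVDR weights: minimise output power subject to the distortionless constraint w^H a = 1."""
    Ri_a = np.linalg.solve(R, a)          # R^{-1} a without forming the explicit inverse
    return Ri_a / (a.conj() @ Ri_a)

rng = np.random.default_rng(0)
N, K = 8, 500                             # assumed: 8-element array, 500 snapshots
a_sig = steering_vector(N, 10.0)          # assumed desired DoA: 10 degrees
a_int = steering_vector(N, -40.0)         # assumed interferer DoA: -40 degrees

# Interference-plus-noise snapshots (columns), 10 dB interference-to-noise ratio
s_int = np.sqrt(10 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
noise = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
X = np.outer(a_int, s_int) + noise
R = X @ X.conj().T / K                    # sample interference-plus-noise covariance

w = mvdr_weights(R, a_sig)
print(abs(w.conj() @ a_sig))              # 1.0 up to rounding: unit gain toward the desired DoA
print(abs(w.conj() @ a_int))              # small: a null is steered toward the interferer
```

In the thesis's framing, the LSTM predictor supplies these weights ahead of time in the fast-changing V2I channel, rather than recomputing them from fresh covariance estimates at every step.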

Files

Original bundle

Name:
Revised_Prateek_901706019.pdf
Size:
3.72 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
2.03 KB
Description:
Item-specific license agreed upon to submission