Please use this identifier to cite or link to this item:
http://hdl.handle.net/10266/6614
Title: | A Cross-modal System for Image Annotation and Retrieval |
Authors: | Kaur, Parminder |
Supervisor: | Pannu, Husanbir Singh; Malhi, Avleen Kaur |
Keywords: | Machine learning;cross-modal;image and text;associative learning;data analysis |
Issue Date: | 26-Sep-2023 |
Publisher: | Thapar Institute |
Abstract: | Human beings experience life through a spectrum of modes such as vision, taste, hearing, smell, and touch. These modes are integrated for information processing in the brain through a complex network of neuronal connections. Likewise, for artificial intelligence to mimic the human way of learning and evolve into its next generation, it must fuse multi-modal information efficiently. A modality is a channel that conveys information about an object or an event, such as image, text, video, or audio. A research problem is said to be multi-modal or cross-modal when it incorporates information from more than one modality. Multi-modal systems allow one modality of data to be queried for results in any (same or different) modality, whereas a cross-modal system strictly retrieves information from a dissimilar modality. Because the input and output queries belong to different modal families, comparing them coherently remains an open challenge, owing to their primitive representations and the subjective definition of content similarity. Lately, cross-modal retrieval has attracted considerable attention due to the enormous amount of multi-modal data generated every day in the form of audio, video, image, and text. One vital requirement of cross-modal retrieval is to reduce the heterogeneity gap among modalities so that results in one modality can be effectively retrieved from the other. Therefore, a novel unsupervised cross-modal retrieval framework (associating the image and text modalities) based on associative learning is proposed in this thesis, in which two traditional self-organizing maps (SOMs) are trained separately on images and collateral text and are then integrated through a Hebbian learning network to facilitate cross-modal retrieval (an illustrative sketch of this idea follows the item record below). Experimental outcomes on the popular Wikipedia dataset and primary endoscopy data demonstrate that the presented technique outperforms various existing state-of-the-art techniques. |
URI: | http://hdl.handle.net/10266/6614 |
Appears in Collections: | Doctoral Theses@CSED |
Files in This Item:
File | Description | Size | Format
---|---|---|---
PhDThesis_ParminderKaurCSED.pdf | | 9.69 MB | Adobe PDF
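
Illustrative sketch (not from the thesis): the abstract above describes training two self-organizing maps (SOMs), one per modality, and linking them with a Hebbian association network for cross-modal retrieval. The minimal Python sketch below shows one plausible way such a pipeline could be wired together; the map size, feature dimensions, learning-rate schedule, and all names (`SOM`, `train_cross_modal`, `retrieve_text_for_image`) are assumptions made for illustration and do not reflect the thesis code or its evaluation on the Wikipedia or endoscopy data.

```python
# Hypothetical sketch of SOM + Hebbian cross-modal association; all
# hyperparameters and function names are illustrative assumptions.
import numpy as np

class SOM:
    """Plain rectangular self-organizing map with Gaussian neighbourhood updates."""
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(rows * cols, dim))
        # Grid coordinates of every unit, used for neighbourhood distances.
        self.coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

    def bmu(self, x):
        """Index of the best-matching unit (closest weight vector) for input x."""
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, data, epochs=20, lr0=0.5, sigma0=2.0):
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)               # decaying learning rate
            sigma = sigma0 * (1.0 - epoch / epochs) + 0.5   # decaying neighbourhood width
            for x in data:
                b = self.bmu(x)
                d2 = np.sum((self.coords - self.coords[b]) ** 2, axis=1)
                h = np.exp(-d2 / (2.0 * sigma ** 2))[:, None]
                self.weights += lr * h * (x - self.weights)

def train_cross_modal(image_feats, text_feats, map_shape=(8, 8)):
    """Train one SOM per modality, then a Hebbian matrix linking their BMUs."""
    img_som = SOM(*map_shape, image_feats.shape[1], seed=1)
    txt_som = SOM(*map_shape, text_feats.shape[1], seed=2)
    img_som.train(image_feats)
    txt_som.train(text_feats)

    # Hebbian association: strengthen the link between the image BMU and the
    # text BMU activated by each paired (image, text) training sample.
    n_units = map_shape[0] * map_shape[1]
    hebb = np.zeros((n_units, n_units))
    for img, txt in zip(image_feats, text_feats):
        hebb[img_som.bmu(img), txt_som.bmu(txt)] += 1.0
    return img_som, txt_som, hebb

def retrieve_text_for_image(img_query, img_som, txt_som, hebb, text_feats, k=3):
    """Indices of the k text items whose BMUs lie closest to the text-map unit
    most strongly associated (via the Hebbian matrix) with the query image's BMU."""
    txt_unit = int(np.argmax(hebb[img_som.bmu(img_query)]))
    txt_bmus = np.array([txt_som.bmu(t) for t in text_feats])
    dists = np.linalg.norm(txt_som.weights[txt_bmus] - txt_som.weights[txt_unit], axis=1)
    return np.argsort(dists)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    imgs = rng.normal(size=(200, 64))   # stand-in image feature vectors
    txts = rng.normal(size=(200, 32))   # stand-in collateral-text feature vectors
    img_som, txt_som, hebb = train_cross_modal(imgs, txts)
    print(retrieve_text_for_image(imgs[0], img_som, txt_som, hebb, txts))
```

In this toy version the Hebbian matrix simply counts co-activations of paired image and text best-matching units; a real system would use learned image and text feature extractors and a more refined Hebbian update rule.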