Augmented Map Based Intelligent Navigation System

dc.contributor.author: Kaur, Baljit
dc.contributor.supervisor: Bhattacharya, Jhilik
dc.date.accessioned: 2020-03-03T12:22:17Z
dc.date.available: 2020-03-03T12:22:17Z
dc.date.issued: 2020-02-28
dc.description.abstract: The presented research work aims at augmenting maps with scene information so that they can provide intelligent capabilities. The work exploits existing state-of-the-art methods and evolves them into a robust information representation framework used for task-specific augmentation purposes. For instance, the augmented map will be able to provide a visually impaired user (or, for example, a user new to the environment) with scene localization information, the amount of traffic or human presence in the scene, and guidance to destination points. The scene maps are hence augmented with localization features, traffic density information and distance maps for this purpose. In this regard, the primary computer vision tasks of map generation, scene localization, and object detection and classification are revisited. Substantial attention is given to suitable algorithm development for the scene localization and object detection sub-tasks. The representation framework is also created in such a manner that it can be re-used for different sub-tasks with minimal re-computation. Throughout this work there are two supplementary goals: while the primary task is to generate augmented maps for achieving intelligent capabilities, equal attention has been given to reusing and developing tools that facilitate higher prediction confidence, lower inference time and greater robustness. Using maps, scene recognition is an important aspect of robot navigation and localization. A trajectory-based map has been generated by navigating a mobile robot; the addition of new nodes to this map is driven by a user interface. This map is further populated with deep CNN features extracted from the scenes. This work particularly explores the scene localization problem using state-of-the-art deep learning models. The success of these deep networks relies on large datasets.
Replicating this performance on domain-specific data using pre-trained networks is difficult in most cases, primarily due to the unavailability of large domain-specific datasets. Pre-trained models also incur larger memory requirements and greater inference time. Currently, a two-step approach is used for scene localization. In step 1, zone matching based on deep features classified with an integrated set-based approach is applied. This is then refined with capsule-based landmark detection in the second step. Particular emphasis has been given to factors like reduced inference time, maximum reuse of information and greater prediction confidence. To facilitate this, practices like fine-tuning compressed networks, using soft-target-based training and extending a single background class to GAN-based multiple dustbin classes are adopted. Vehicle detection and classification is another important task for street surveillance and scene perception in robot navigation or autonomous vehicles. This research work further focuses on traffic detection for real-time applications using three components. The first component designs convolutional feature-map-based classifiers. The second component encourages multimodal feature fusion with edge features, scale-space features and optical flow features. The third component focuses on training mechanisms: it utilizes an effective adaptive learning rate technique to deal with saddle points and proposes an average-covariance-matrix-based pre-conditioning approach. Special attention has also been given to accommodating blur features in real time. The generated maps have also been augmented with traffic density estimation. For augmentation, obtaining traffic density information for an area particularly focuses on itinerary perception subject to different environmental conditions, i.e. the extraction of traffic-related information.
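The soft-target-based training mentioned above is commonly realized as knowledge distillation: a compact student network is trained against the temperature-softened output distribution of a larger teacher rather than hard one-hot labels. As a minimal sketch of that idea (the function names, temperature value and logits here are illustrative, not taken from the thesis):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher T yields a flatter distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the teacher's softened distribution
    and the student's, the core term of a distillation objective."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(-np.sum(p_teacher * np.log(p_student + 1e-12)))

# The loss is smallest when the student reproduces the teacher's logits.
teacher = [1.0, 2.0, 4.0]
matched = soft_target_loss(teacher, teacher)
mismatched = soft_target_loss([4.0, 2.0, 1.0], teacher)
```

By Gibbs' inequality the cross-entropy is minimized exactly when the student distribution matches the teacher's, which is what makes the softened targets a useful training signal for a compressed network.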
The problem is modeled as a machine learning task in which the traffic distribution at different times (including the same day, different days and different weather) is observed continuously using a service robot. This data is posed as a Gaussian process for posterior estimation: a Region of Interest (ROI) input, queried against a database of traffic density distributions learned from the scenes at different points in time, generates information pertaining to that region conditioned on environmental and timing events. Finally, case studies have been carried out on visually impaired navigation, traffic density estimation on particular routes at different instances, a tourist guide for helping an unfamiliar person conveniently navigate from one place to another within a particular organization, and a cow detector to detect and locate a cow's position. It should be noted that these are sample studies and can be extended as individual ones. The feature augmentation framework can be applied to similar activities such as surveillance.
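The Gaussian-process formulation described in the abstract can be sketched as standard GP regression: observed (time, density) pairs for an ROI define a posterior from which the density at a query time is predicted with an uncertainty estimate. The kernel choice, hyperparameter values and sample data below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=2.0, variance=1.0):
    """Squared-exponential kernel over 1-D inputs (e.g. hour of day)."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_query, noise=1e-2, length_scale=2.0):
    """Posterior mean and standard deviation of a GP regressor."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train, length_scale)
    alpha = np.linalg.solve(K, y_train)           # K^{-1} y
    mean = Ks @ alpha
    cov = (rbf_kernel(x_query, x_query, length_scale)
           - Ks @ np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Hypothetical observations: traffic density counts at hours of the day.
hours = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
density = np.array([5.0, 20.0, 35.0, 25.0, 10.0])
mean, std = gp_predict(hours, density, np.array([12.0]))
```

Conditioning on environmental and timing events, as the abstract describes, would amount to extending the input space (or the kernel) with those covariates; the 1-D time input here is kept only to make the sketch self-contained.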
dc.identifier.uri: http://hdl.handle.net/10266/5938
dc.language.iso: en
dc.subject: Augmented Maps, Feature Extraction, CNN, Vision-based Navigation, Intelligent Systems, Deep Learning
dc.title: Augmented Map Based Intelligent Navigation System
dc.type: Thesis

Files

Original bundle

Name: Baljitkaur_Thesis.pdf
Size: 28.44 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.03 KB
Format: Item-specific license agreed upon to submission