Title: HMM-based isolated and connected word speaker independent speech recognition using different acoustic models
Authors: Shagun
Supervisors: Verma, Karun; Kumar, Ravinder
Keywords: HMM; Isolated Spoken Word Recognition; Connected Spoken Word Recognition; MFCC
Issue Date: 1-Aug-2013
Abstract: Speech recognition is the independent, computer-driven transcription of spoken language into readable text in real time. It is the technology that allows a computer to identify the words that a person speaks into a microphone or telephone and convert them to written text. The goal of having a machine understand fluently spoken speech has driven speech research for more than 50 years. Although automatic speech recognition (ASR) technology is not yet at the point where machines understand all speech, in any acoustic environment, or from any person, it is used on a day-to-day basis in a number of applications and services. The goal of an ASR system is to accurately and efficiently convert a speech signal into a text transcription of the spoken words, independent of the speaker, the environment, or the device used to record the speech. First, the speech recognition engine converts the speech signal into a sequence of feature vectors measured throughout the duration of the signal. Then, using a syntactic decoder, it generates a valid sequence of word representations. In the work presented, an HMM-based isolated and connected word speaker-independent speech recognition system has been developed. Two different acoustic modeling approaches are compared: one uses whole-word models and the other uses triphone-based models.
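The recognition step the abstract describes — scoring a sequence of feature vectors against one HMM per vocabulary word and picking the best-scoring word — can be illustrated with a minimal sketch. This is a toy, not the thesis's system: it uses discrete observation symbols and a hypothetical two-word vocabulary ("yes"/"no") with invented parameters, whereas the actual system would emit MFCC vectors through Gaussian mixtures.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm.
    pi: initial state probabilities, A: transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]          # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and absorb next observation
    return np.log(alpha.sum())

# Hypothetical vocabulary: one left-to-right HMM per word (parameters invented).
models = {
    "yes": (np.array([1.0, 0.0]),
            np.array([[0.7, 0.3], [0.0, 1.0]]),
            np.array([[0.9, 0.1], [0.1, 0.9]])),
    "no":  (np.array([1.0, 0.0]),
            np.array([[0.7, 0.3], [0.0, 1.0]]),
            np.array([[0.1, 0.9], [0.9, 0.1]])),
}

def recognize(obs):
    # Isolated-word recognition: return the word whose model
    # assigns the highest likelihood to the observation sequence.
    return max(models, key=lambda w: log_forward(obs, *models[w]))
```

For example, `recognize([0, 0, 1, 1])` returns `"yes"` under these parameters, because the "yes" model's emission matrix favors symbol 0 in the first state and symbol 1 in the second. Connected-word recognition extends this idea by chaining word models and decoding the best path through the composite network (e.g. with Viterbi decoding).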
Description: Master of Engineering (CSE)
Appears in Collections:Masters Theses@CSED

Files in This Item:
File: 2230.pdf | Size: 1.7 MB | Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.