Title: Performance Evaluation of Generalized ADALINE Neural Configuration with Variable Learning-rate Parameter
Authors: Savita
Supervisor: Kohli, Amit Kumar
Keywords: ADALINE;Neural
Issue Date: 29-Aug-2013
Abstract: An Artificial Neural Network (ANN) is a mathematical model inspired by the structural and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. Recent advances in the software and hardware technologies of neural networks have motivated new studies in the architecture and applications of these networks. Neural networks have potentially powerful characteristics, which can be utilized in the development of our research goal, namely a truly autonomous machine. Machine learning is a major step in this development. The immense computational power of modern digital machines has increased the feasibility of implementing closed-loop control systems in many applications. The main focus of this thesis is the performance evaluation of the ADALINE neural configuration for system identification of dynamical systems, noise cancellation and adaptive prediction. It is well known that ADALINE converges slowly, which makes it unsuitable for online applications and for the identification of time-varying systems. Two techniques are proposed to speed up the convergence of learning and thus improve the tracking of time-varying system parameters. The first is to add a momentum term to the weight adjustment during the convergence period. The second is to train the generalized ADALINE network for multiple epochs on data from a sliding window of the system's input-output data. We present an online identification method based on a generalized Adaptive Linear Element (ADALINE) neural network, called GADALINE, for linear time-varying systems that can be described by a discrete-time model. The fine-tuned GADALINE is well suited to online system identification and real-time adaptive control applications owing to its low computational demand.
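The momentum technique described above can be illustrated with a minimal sketch: an ADALINE (LMS) weight update on a tap-delay-line input, augmented with a momentum term that carries over a fraction of the previous weight change. The function and parameter names (`mu`, `beta`) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def adaline_momentum(x, d, order=4, mu=0.01, beta=0.9):
    """ADALINE/LMS identification with a momentum term (illustrative sketch).

    x     : input signal
    d     : desired (reference) signal
    order : filter length
    mu    : learning-rate parameter (assumed value)
    beta  : momentum coefficient (assumed value)
    """
    w = np.zeros(order)              # ADALINE weights
    v = np.zeros(order)              # momentum: previous weight change
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]     # tap-delay input vector
        y = w @ u                    # linear (ADALINE) output
        e[n] = d[n] - y              # prediction error
        v = beta * v + mu * e[n] * u # momentum-accelerated LMS update
        w = w + v
    return w, e
```

On a noise-free identification of a short FIR system, the momentum term effectively scales the step size by roughly 1/(1-beta), speeding up convergence without raising the per-sample cost.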
A GADALINE neural configuration with a variable learning-rate parameter is introduced. This parameter increases or decreases as the mean-square error increases or decreases, allowing the adaptive filter to track changes in the system while producing a small steady-state error. We therefore analyse three techniques with a variable learning-rate parameter implemented on the GADALINE neural configuration. This work discusses the motivations behind the development of ANNs and describes the basic biological neuron and the artificial computational model. The thesis concludes with a comparison of the results obtained for the three techniques. All three algorithms use a variable learning-rate parameter, which has already been shown to outperform a fixed learning-rate parameter. In the first algorithm, the learning-rate adjustment is controlled by the square of the prediction error. The second algorithm adjusts the learning-rate parameter by a gradient-descent rule designed to reduce the squared estimation error at each iteration. In the third algorithm, the learning-rate parameter is adjusted according to the square of a time-averaged estimate of the autocorrelation of e(n) and e(n-1). Simulation results show that, for the same excess MSE in a stationary environment, the third algorithm outperforms the other two.
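The third technique can be sketched as a variable step-size LMS in which the learning rate is driven by the square of a time-averaged estimate p(n) of the autocorrelation of e(n) and e(n-1). All parameter names and values here (`alpha`, `gamma`, `rho`, the clipping bounds) are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np

def vss_lms(x, d, order=4, mu0=0.05, mu_min=1e-3, mu_max=0.1,
            alpha=0.97, gamma=0.97, rho=0.05):
    """Variable step-size LMS sketch: mu is adjusted by the square of a
    time-averaged autocorrelation estimate of e(n) and e(n-1).
    Parameter values are illustrative assumptions.
    """
    w = np.zeros(order)
    mu = mu0
    p = 0.0                          # running estimate of E[e(n) e(n-1)]
    e_prev = 0.0
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]     # tap-delay input vector
        e[n] = d[n] - w @ u          # estimation error
        p = gamma * p + (1 - gamma) * e[n] * e_prev   # autocorrelation estimate
        mu = np.clip(alpha * mu + rho * p ** 2, mu_min, mu_max)
        w = w + mu * e[n] * u        # LMS update with variable step size
        e_prev = e[n]
    return w, e, mu
```

Because the autocorrelation estimate averages out uncorrelated measurement noise but responds to the correlated error produced by weight misadjustment, mu stays large during convergence and shrinks toward its floor in the steady state, which is what yields the small excess MSE noted above.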
Description: M.E. (Electronics and Communication Engineering)
Appears in Collections:Masters Theses@ECED

Files in This Item:
File: 2375.pdf
Size: 2.32 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.