|Title:||Improving the Accuracy of Recommender Systems Through Annealing|
|Keywords:||Recommender Systems;Annealing;Singular value decomposition|
|Abstract:||Collaborative filtering (CF) is the most popular approach in recommender systems (RS). It uses a user–item rating matrix and makes recommendations based on the preferences and tastes of similar users. CF faces a number of issues, such as the cold-start problem, shilling attacks, and data sparsity. Matrix factorization (MF) is an efficient approach to mitigating sparsity: it is a reliable and robust technique for predicting the ratings a user would give to items they have not yet rated. This is done by mapping users and items to a latent space defined by a given number of latent features. Minimization of the MF objective is performed either by the Alternating Least Squares (ALS) method or by the Stochastic Gradient Descent (SGD) technique. In this thesis, SGD is used to minimize the matrix factorization objective, using the concept of singular value decomposition (SVD), in order to fill in the missing entries of the sparse user–item rating matrix. On top of this base approach, the learning rate (η) is varied to study its effect on the accuracy and convergence rate of the recommender system. This is done using simulated annealing, which decreases the learning rate at each iteration and provides an optimal schedule for minimizing error in the system. Five simulated annealing schedules, together with a newly proposed schedule, are chosen to examine the effect of the learning rate on the accuracy of a movie recommender system. These schedules are exponential annealing, inverse scaling, logarithmic cooling, linear multiplicative cooling, and quadratic multiplicative cooling. The proposed schedule is named Square Root Cooling (SRA). Experimental results on the MovieLens dataset show that with the exponential annealing schedule, the minimum mean absolute error for the system is attained at a lower value of the learning rate, while for higher learning rate values SRA performs best. Apache Mahout 0.9 is chosen as the platform for the research.|
|Appears in Collections:||Masters Theses@CSED|
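The abstract describes SGD-based matrix factorization with a learning rate that is annealed downward at each iteration. The thesis itself does not give its update rule or parameter values, so the following is only a minimal illustrative sketch of the general technique: rank-k factorization of a sparse rating matrix, trained by SGD with an exponentially decaying learning rate. All names (`exponential_schedule`, `sgd_mf`), parameter values, and the toy rating matrix are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

def exponential_schedule(eta0, alpha, t):
    # Exponential annealing (assumed form): eta_t = eta0 * alpha^t, 0 < alpha < 1.
    return eta0 * alpha ** t

def sgd_mf(R, k=2, epochs=500, eta0=0.05, alpha=0.995, reg=0.02, seed=0):
    """Factor R ≈ P @ Q.T over observed entries only (0 = missing),
    using SGD with an exponentially annealed learning rate."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = 0.1 * rng.standard_normal((m, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n, k))   # item latent factors
    observed = [(i, j) for i in range(m) for j in range(n) if R[i, j] > 0]
    for t in range(epochs):
        eta = exponential_schedule(eta0, alpha, t)  # learning rate shrinks each epoch
        for i, j in observed:
            e = R[i, j] - P[i] @ Q[j]               # prediction error on one rating
            # Simultaneous regularized SGD updates for both factor vectors.
            P[i], Q[j] = (P[i] + eta * (e * Q[j] - reg * P[i]),
                          Q[j] + eta * (e * P[i] - reg * Q[j]))
    return P, Q

# Toy 5-user x 4-item rating matrix (0 marks an unrated entry).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

P, Q = sgd_mf(R)
# Mean absolute error over the observed (nonzero) entries.
mae = np.mean([abs(R[i, j] - P[i] @ Q[j])
               for i in range(R.shape[0]) for j in range(R.shape[1]) if R[i, j] > 0])
```

Swapping `exponential_schedule` for any of the other cooling functions named in the abstract only requires replacing the one-line schedule, which is what makes this setup convenient for comparing annealing schedules by their final MAE.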