Submitted by Awekonti t3_zqitxl in MachineLearning
Hey! I am currently reading papers on Deep Learning based Recommender Systems. After around 20 papers, I've realised the core idea is the same across all of them: the recommendation task is either Top-K recommendation or simply predicting the utility (I am not talking about frameworks that only model auxiliary information). The papers differ in their base models (I've been reading DNN/MLP-, autoencoder-, and attention-based ones), but the methodology is identical: change the way the matrix is factorized to find the latent feature vectors of users/items/social relations. Only some papers introduce a custom loss function with regularisation terms (mostly just to model the social network, I would say). And every one of these models claims to be "state-of-the-art".

My question is: where is this research field going/developing? All these findings/performance results are purely empirical, with no theoretical backing.
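To make the common recipe concrete, here is a minimal sketch of what most of these papers build on: plain matrix factorization with an L2-regularized squared-error loss, trained by SGD. This is my own illustration, not from any particular paper; the function name and hyperparameters are made up. The papers I describe mostly swap out this factorization step for a neural model, or bolt extra regularization terms (e.g., for social ties) onto this loss.

```python
# Minimal sketch: vanilla matrix factorization with an L2-regularized
# squared-error loss, trained by SGD. Illustrative only.
import numpy as np

def train_mf(ratings, n_users, n_items, k=16, lr=0.01, reg=0.1, epochs=20):
    """ratings: list of (user, item, rating) triples."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]        # prediction error on this triple
            pu = P[u].copy()             # keep old value for Q's gradient
            # SGD step on the regularized squared error; the "custom loss"
            # papers typically add more terms here (e.g., pulling the latent
            # vectors of socially connected users together)
            P[u] += lr * (err * Q[i] - reg * pu)
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Usage: the predicted utility of item 2 for user 0 is P[0] @ Q[2];
# Top-K recommendation just ranks all items by this score.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (1, 2, 1.0)]
P, Q = train_mf(ratings, n_users=2, n_items=3)
print(P[0] @ Q[2])
```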
__lawless t1_j0ymvpl wrote
There are some methods that are matrix-factorization-free, like sequential recommendation models.
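For instance, here is a minimal sketch of a sequential recommender, loosely in the spirit of GRU4Rec: embed the user's interaction sequence, run it through a GRU, and score the whole catalog for the next item. No user-item matrix is factorized. The class name and hyperparameters are my own, purely illustrative choices.

```python
# Sketch of a matrix-factorization-free sequential recommender:
# predict the next item from the sequence of past interactions.
import torch
import torch.nn as nn

class GRURecommender(nn.Module):
    def __init__(self, n_items, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_items)  # scores over the catalog

    def forward(self, item_seq):
        # item_seq: (batch, seq_len) of item ids the user interacted with
        h, _ = self.gru(self.item_emb(item_seq))
        return self.out(h[:, -1])  # next-item logits from the last state

# Usage: Top-K recommendation falls out of ranking the next-item logits.
model = GRURecommender(n_items=1000)
seq = torch.randint(0, 1000, (4, 10))  # 4 users, 10 interactions each
topk = model(seq).topk(5).indices      # Top-5 recommendations per user
print(topk.shape)                      # torch.Size([4, 5])
```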