Submitted by Awekonti t3_zqitxl in MachineLearning
Hey! I'm currently reading papers on deep-learning-based recommender systems. After around 20 papers, I've realised the core idea is the same across all of them: the task is either Top-K recommendation or simply predicting the utility (I'm not talking about frameworks that merely model auxiliary information). The papers differ in their base models (I've been reading DNN/MLP-, autoencoder- and attention-based ones), but the methodology is identical: swap out the way the matrix is factorized to find latent feature vectors of users/items/social relations. Some papers also introduce a custom loss function with regularisation terms (mostly just to model the social network, I'd say). And every one of these models claims "state-of-the-art" performance. So the question is: where is this research field actually going? All these findings/performance results are purely empirical, with no theoretical backing.
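To be concrete about the skeleton I mean: not from any particular paper, just a toy numpy sketch of "factorize the matrix into latent user/item vectors, minimize squared error plus an L2 regularisation term" (all sizes and hyperparameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5]], dtype=float)  # toy ratings, 0 = unobserved
mask = R > 0

k, lam, lr = 2, 0.1, 0.01                # latent dim, L2 weight, step size
P = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))  # item factors

for _ in range(2000):
    E = mask * (R - P @ Q.T)             # error only on observed entries
    P += lr * (E @ Q - lam * P)          # gradient step with L2 penalty
    Q += lr * (E.T @ P - lam * Q)

rmse = np.sqrt(((mask * (R - P @ Q.T)) ** 2).sum() / mask.sum())
```

Nearly every paper I've read is some variation on this: a deeper model in place of the dot product, or extra regularisation terms in place of `lam` to encode the social graph.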
cnapun t1_j0z2van wrote
User behavior is pretty stochastic and not really well captured in the datasets available to academia. There's also a second class of papers that explores ranking rather than candidate generation; imo these are usually more interesting, but good data for them is even harder to find in academia.
I take all results in papers on embeddings/two-tower models (for retrieval) with a grain of salt because, in my experience, the thing that matters most for these in practice is negative sampling, yet people rarely do ablations on it (see this paper that shows how metric learning hasn't really progressed as much as papers would have you think). They can still be good to read for ideas, though.
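For anyone unfamiliar with why negative sampling dominates here, a minimal numpy sketch of the common in-batch negative scheme for two-tower retrieval (all names, sizes and the temperature are illustrative, not from any specific system): every other item in the batch serves as a negative for a given (user, item) pair, so the effective negative distribution is whatever your batch distribution is.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 8
u = rng.normal(size=(batch, dim))  # stand-in for user-tower outputs
v = rng.normal(size=(batch, dim))  # stand-in for item-tower outputs

# Cosine similarity with a temperature: logits[i, j] = score(user i, item j).
u /= np.linalg.norm(u, axis=1, keepdims=True)
v /= np.linalg.norm(v, axis=1, keepdims=True)
logits = (u @ v.T) / 0.1

# Softmax cross-entropy with the diagonal (the true pair) as the label;
# off-diagonal entries are the in-batch negatives.
logits -= logits.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss = -np.log(probs[np.arange(batch), np.arange(batch)]).mean()
```

Change how those off-diagonal negatives are drawn (uniform, popularity-corrected, hard-mined) and the learned embedding space changes substantially, which is exactly the part papers tend not to ablate.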