Submitted by hapliniste t3_yck1sx in MachineLearning

With the success of diffusion models in image generation, I was wondering whether doing the same thing with text embeddings would make sense.

The idea would be to diffuse the embeddings so they end up a bit off in terms of vector values and position, and to learn to correct them, with the iterative process refining them over multiple passes.
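
Roughly something like this, just a sketch of what I mean (the transformer denoiser, the MSE objective, the uniform noise level and the refinement loop are all placeholder choices, nothing I've actually tested):

```python
import torch
import torch.nn as nn

class EmbeddingDenoiser(nn.Module):
    """Takes noisy token embeddings and predicts the clean ones."""
    def __init__(self, dim=256, vocab_size=32000, n_layers=6, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.to_clean = nn.Linear(dim, dim)

    def forward(self, noisy_emb):
        return self.to_clean(self.encoder(noisy_emb))

def training_step(model, tokens, sigma_max=1.0):
    # Corrupt the token embeddings with a random amount of Gaussian noise,
    # then regress back to the clean embeddings.
    clean = model.embed(tokens)                          # (batch, seq, dim)
    sigma = torch.rand(tokens.size(0), 1, 1,
                       device=tokens.device) * sigma_max # per-sample noise level
    noisy = clean + sigma * torch.randn_like(clean)
    return ((model(noisy) - clean) ** 2).mean()

@torch.no_grad()
def refine(model, emb, n_steps=10):
    # Iterative refinement: feed the current estimate back through the denoiser.
    for _ in range(n_steps):
        emb = model(emb)
    return emb  # could map back to tokens via nearest neighbour in model.embed
```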

Would that make any sense? I don't think I've heard of research in this area.

47

Comments


limpbizkit4prez t1_itnoeqq wrote

I've always applied an annealing schedule like that to LMs. Imo, it works incredibly well and generalizes great.

3