Submitted by windoze t3_ylixp5 in MachineLearning
cautioushedonist t1_iuzeog4 wrote
Not as famous and might not qualify as a 'trick' but I'll mention "Geometric Deep Learning" anyway.
It tries to explain all the successful neural nets (CNNs, RNNs, Transformers) within a unified, universal mathematical framework based on symmetries and invariances. The most exciting extrapolation is that we'll be able to quickly discover new architectures using this framework.
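A minimal sketch of the core idea (my illustration, not from the thread): geometric deep learning observes that successful architectures respect a symmetry group of the data. For CNNs that symmetry is translation, and convolution is translation-equivariant: convolving a shifted signal gives the same result as shifting the convolved signal. The snippet below checks this numerically for a circular 1D convolution (circular so that shifts wrap cleanly).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)   # input signal
k = rng.standard_normal(5)    # convolution kernel

def conv(signal, kernel):
    # circular (periodic) convolution so translations wrap around the ends
    n = len(signal)
    return np.array([
        sum(kernel[j] * signal[(i - j) % n] for j in range(len(kernel)))
        for i in range(n)
    ])

def shift(signal, t):
    # cyclic translation by t positions
    return np.roll(signal, t)

# equivariance: shift-then-convolve equals convolve-then-shift
lhs = conv(shift(x, 3), k)
rhs = shift(conv(x, k), 3)
assert np.allclose(lhs, rhs)
```

The same recipe, with translation swapped for other groups (rotations, permutations of graph nodes), is how the framework recovers other architectures.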
and1984 t1_iv0qjbs wrote
TIL
BrisklyBrusque t1_iv6negg wrote
Is this different from the premise that neural networks are universal function approximators?
cautioushedonist t1_ivcx548 wrote
Yes, it's different.
Universal function approximation guarantees/implies that you can approximate any mapping function given the right config/weights of a neural net. It doesn't tell us how to find that config.
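A hedged illustration of the distinction (my example, not from the thread): the approximation guarantee says some weight configuration realizes the target function, but gives no procedure for finding it. Below, weights for a one-hidden-layer ReLU network are hand-picked so it computes |x| exactly, via the identity relu(x) + relu(-x) = |x| — the existence of these weights is what the theorem promises, while discovering them is the part it leaves open.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hand-picked weights: two hidden units computing relu(x) and relu(-x),
# summed by the output layer. No training involved.
W1 = np.array([1.0, -1.0])  # hidden-layer weights
W2 = np.array([1.0, 1.0])   # output-layer weights

def net(x):
    # one hidden layer, no biases: W2 @ relu(W1 * x)
    return W2 @ relu(np.outer(W1, x))

xs = np.linspace(-2.0, 2.0, 9)
assert np.allclose(net(xs), np.abs(xs))
```

A geometric or structural prior is one way to narrow the search for such configurations, which is the gap the framework above tries to fill.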