cautioushedonist t1_iuzeog4 wrote

Not as famous, and it might not qualify as a 'trick', but I'll mention "Geometric Deep Learning" anyway.

It tries to explain all the successful neural net architectures (CNNs, RNNs, Transformers) within a single, unified mathematical framework, organized around the symmetries each architecture respects. The most exciting extrapolation is that we'll be able to quickly discover new architectures using this framework.

Link - https://geometricdeeplearning.com/
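
To make "unified framework" a bit more concrete: the GDL blueprint characterizes architectures by the symmetry group they are equivariant to (translations for CNNs, permutations for GNNs/Deep Sets, and so on). Here's a minimal sketch of my own (not from the linked course) checking the translation equivariance that, in that blueprint, is the defining property of convolutions:

```python
# Minimal sketch: a circular 1-D convolution commutes with translation.
# In the GDL blueprint, equivariance to the translation group is what
# characterizes CNN layers.
import numpy as np

def conv1d(x, w):
    """Circular 1-D convolution: translation-equivariant by construction."""
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

def shift(x, s):
    """The group action: cyclic translation by s positions."""
    return np.roll(x, s)

x = np.random.randn(8)   # a signal
w = np.random.randn(3)   # a filter

# Equivariance check: convolving a shifted signal equals shifting the
# convolved signal, i.e. conv(shift(x)) == shift(conv(x)).
print(np.allclose(conv1d(shift(x, 2), w), shift(conv1d(x, w), 2)))  # True
```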

BrisklyBrusque t1_iv6negg wrote

Is this different from the premise that neural networks are universal function approximators?

cautioushedonist t1_ivcx548 wrote

Yes, it's different.

Universal function approximation guarantees (or at least implies) that, given the right configuration/weights, a neural net can approximate any mapping function. It doesn't really guide us to that correct config, which is where a framework like GDL comes in.
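
As a hand-crafted (not learned) illustration of that point: the classic constructive demo behind the theorem builds a bump out of two steep sigmoid units, approximating a step/indicator function. The theorem guarantees such weights exist for any continuous target; finding them is a separate problem.

```python
# Minimal sketch: a one-hidden-layer net with two hand-picked sigmoid
# units approximates an indicator function on [a, b]. The universal
# approximation theorem says such weights exist; it doesn't say how to
# find them for an arbitrary target.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, a, b, sharpness=50.0):
    # Two hidden units summed with output weights +1 and -1:
    # steep sigmoids rising at a and b, whose difference is ~1 on [a, b].
    return sigmoid(sharpness * (x - a)) - sigmoid(sharpness * (x - b))

x = np.linspace(0.0, 1.0, 200)
target = ((x > 0.3) & (x < 0.7)).astype(float)  # step/indicator target
approx = bump(x, 0.3, 0.7)

# Small average error; sharper (or more) units shrink it further.
print(np.mean(np.abs(target - approx)))
```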
