Submitted by shitboots t3_zdkpgb in MachineLearning
Paper: https://www.cs.toronto.edu/~hinton/FFA13.pdf
Twitter summary: https://twitter.com/martin_gorner/status/1599755684941557761
Abstract:
> The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth serious investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes can be separated in time, the negative passes can be done offline, which makes the learning much simpler in the positive pass and allows video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
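A minimal sketch of the layer-local objective the abstract describes, assuming PyTorch: each layer is trained greedily with a logistic loss on its own "goodness" (sum of squared activities), so no derivatives ever propagate between layers. The layer sizes, threshold value, learning rate, and random stand-in data below are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One Forward-Forward layer with its own local optimizer (illustrative sketch)."""
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold  # assumed goodness threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize input length so a layer cannot judge goodness
        # purely from the magnitude passed up by the layer below.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activities; push it above the
        # threshold for positive data and below it for negative data.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = F.softplus(torch.cat([
            self.threshold - g_pos,   # positive samples: want high goodness
            g_neg - self.threshold,   # negative samples: want low goodness
        ])).mean()
        self.opt.zero_grad()
        loss.backward()               # gradients stay inside this layer
        self.opt.step()
        # Detach outputs so no derivatives flow between layers.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Toy usage: stack two layers and train them layer by layer.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos = torch.randn(64, 784)  # stand-in for real (positive) data
x_neg = torch.randn(64, 784)  # stand-in for network-generated negative data
for _ in range(10):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```

In the paper the negative data is more structured (e.g. hybrid images or samples generated by the network itself), and labels are embedded into the input for supervised runs; the sketch above only shows the two-pass, per-layer training loop.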
lfotofilter t1_iz32jjy wrote
Geoff Hinton by now must know each of the 60,000 digits of MNIST like an old friend.