
eigenham t1_j1uhtn7 wrote

A similar phenomenon happens because of batching in general, though. More generally, the distribution of the samples in each batch determines what the cost function "looks like" (as a function approximation) to the gradient calculation. That sampling (and thus the function approximation) can be biased towards a single sample or a subset of samples. I think OP's question is still an interesting one for the general case.
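
As a rough illustration (a made-up toy regression, nothing specific to OP's setup): the gradient step only ever sees the loss averaged over whichever samples landed in the batch, so a skewed batch reshapes that loss and drags the parameter toward its own sub-population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two sub-populations generated with different "true" slopes (toy data).
x_a = rng.uniform(0.1, 1.0, 500)
y_a = 1.0 * x_a + rng.normal(0, 0.05, 500)
x_b = rng.uniform(0.1, 1.0, 500)
y_b = 3.0 * x_b + rng.normal(0, 0.05, 500)
x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])

def grad(w, xb, yb):
    """d/dw of the batch-mean squared error 0.5 * mean((w*x - y)^2)."""
    return np.mean((w * xb - yb) * xb)

w = 2.0  # near the full-data optimum, between the two sub-populations

# The same parameter, three different "views" of the cost function:
print(f"full data    grad: {grad(w, x_all, y_all):+.3f}")  # ~0: the pulls cancel
print(f"A-only batch grad: {grad(w, x_a, y_a):+.3f}")      # > 0: a step drags w toward 1
print(f"B-only batch grad: {grad(w, x_b, y_b):+.3f}")      # < 0: a step drags w toward 3
```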

1

derpderp3200 OP t1_j1vmr20 wrote

Similar but not identical? What effect do you mean?

But yeah, the way I see it, the network isn't descending a single gradient towards a "good classifier" optimum, but rather following whatever gradient is left after the otherwise-destructive interference of the gradients of individual training examples, as opposed to a more "purposeful" extraction of features.

Which happens to result in a gradual movement towards being a decent classifier, but it relies heavily on large, balanced, well-crafted datasets to cancel the individual "pull vectors" out to roughly "zero" so that the convergence effect dominates, and it comes at incredibly high training cost.
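
To make that concrete, here's a rough sketch with a toy logistic regression (made-up data and weights, not any real training setup): near a reasonably fitted weight vector, the per-example gradients still have sizeable magnitudes and frequently point in opposing directions, and almost all of that "pull" cancels in the average, which is the left-over gradient the update actually follows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two overlapping classes, so individual examples genuinely disagree.
n = 400
X = np.vstack([rng.normal(-0.5, 1.5, (n, 2)), rng.normal(0.5, 1.5, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])
w = np.array([0.4, 0.4])  # roughly fitted weights for this toy problem

# Per-example cross-entropy gradients for logistic regression, one row each.
p = 1.0 / (1.0 + np.exp(-X @ w))
G = (p - y)[:, None] * X  # shape (2n, 2)

mean_of_norms = np.linalg.norm(G, axis=1).mean()  # average individual "pull"
norm_of_mean = np.linalg.norm(G.mean(axis=0))     # what survives the averaging
print(f"mean ||g_i|| = {mean_of_norms:.3f}")
print(f"||mean g_i|| = {norm_of_mean:.3f}  # most of the pull cancels out")

# Fraction of example pairs whose gradients point in opposing directions.
U = G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-12)
cos = U @ U.T
opposed = (cos[np.triu_indices_from(cos, k=1)] < 0).mean()
print(f"pairs pulling in opposing directions: {opposed:.1%}")
```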

I don't know what it would look like, but surely a more "cooperative" learning process would learn faster, if not better.

1