Apprehensive_Air8919 OP t1_ja96vdu wrote
Reply to comment by trajo123 in Why does my validation loss suddenly fall dramatically while my training loss does not? by Apprehensive_Air8919
nn.MSELoss(). I used sklearn train_test_split() with test_size = 0.2. The behavior is consistent across every split I've tried. The weird thing is that it only happens when I run with a very low lr.
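For reference, a minimal sketch of the kind of split being described (the arrays here are hypothetical stand-ins for the actual dataset; shuffle=True is sklearn's default, and random_state pins the split for reproducibility):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data standing in for the real images/targets.
X = np.random.rand(100, 3)
y = np.random.rand(100)

# shuffle=True is the default; random_state makes the split reproducible.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42
)
print(X_train.shape, X_val.shape)  # (80, 3) (20, 3)
```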
trajo123 t1_ja9aghn wrote
Very strange.
Are you sure your dataset is shuffled before the split? Have you tried different random seeds, different split ratios?
Or maybe there's a bug in how you calculate the loss, but that should affect the training set as well...
So my best guess is you either don't have your data shuffled and the validation samples are "easier", or maybe it's something more trivial, like a bug in the plotting code. Or maybe that's the point where your model becomes self-aware :)
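A quick way to check the "easier validation samples" hypothesis is to compare summary statistics of the train and validation targets. This is a sketch with made-up numbers (with the real data you'd compute these on the actual splits):

```python
import numpy as np

# Hypothetical stand-in targets; with real data, use the actual split arrays.
rng = np.random.default_rng(0)
y_train = rng.normal(1.0, 0.5, 800)
y_val = rng.normal(0.2, 0.1, 200)  # an "easier", lower-variance validation set

# If these statistics differ a lot, the split probably wasn't shuffled.
print(f"train: mean={y_train.mean():.2f} std={y_train.std():.2f}")
print(f"val:   mean={y_val.mean():.2f} std={y_val.std():.2f}")
```

A big gap in mean or variance between the splits would explain a validation loss that is systematically lower than the training loss.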
Apprehensive_Air8919 OP t1_jacst55 wrote
omg... I think I found the bug. I had used the depth estimation image as input for the model in the validation loop....................
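For anyone hitting the same thing, here's a hedged sketch of what that bug looks like (the model, shapes, and variable names are hypothetical, not the OP's actual code):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # stand-in for the real depth-estimation model
criterion = nn.MSELoss()

image = torch.randn(1, 10)  # hypothetical network input
depth = torch.randn(1, 10)  # hypothetical ground-truth depth map

model.eval()
with torch.no_grad():
    # Bug: passing the depth map instead of the image into the model,
    # so the validation loss measures something entirely different
    # from the training loss.
    # val_loss = criterion(model(depth), depth)  # wrong

    # Correct: the model should only ever see the input image.
    val_loss = criterion(model(image), depth)

print(val_loss.item())
```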
trajo123 t1_jaekibz wrote
Apprehensive_Air8919 OP t1_jackmpu wrote
I just did a run with test_size = 0.5. The same thing happened. Wtf is going on :/