Submitted by AutoModerator t3_xznpoh in MachineLearning
Unusual_Variation_32 t1_isorxfm wrote
Hi everyone!
So I have one true/false question:
Does L2 regularization (Ridge) reduce both the training and test error? I assume not, since ridge regression won't improve the training error, but I'm not 100% sure.
Can you explain this please?
seiqooq t1_it3zp9b wrote
It’s useful to think of regularization simply as offering a way to punish/reward a system for exhibiting some behavior during training. Barring overfitting, if this leads to improvements in training error, you can expect improvements in test error as well.
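For what it's worth, here's a quick NumPy sketch (hypothetical data, not from the thread) of the usual picture: OLS minimizes training MSE by definition, so ridge's penalty can only raise training error, but the shrunk weights often generalize better on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 40, 200, 30          # few samples, many features: high variance
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
w_true = rng.normal(size=d)
y_tr = X_tr @ w_true + rng.normal(scale=5.0, size=n_train)
y_te = X_te @ w_true + rng.normal(scale=5.0, size=n_test)

def ridge_fit(X, y, alpha):
    # Closed-form ridge solution: w = (X^T X + alpha * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

w_ols = ridge_fit(X_tr, y_tr, 0.0)        # alpha = 0 recovers ordinary least squares
w_ridge = ridge_fit(X_tr, y_tr, 10.0)     # alpha = 10 is an arbitrary illustrative choice

# OLS minimizes training MSE exactly, so ridge can never beat it on training data...
print("train MSE:", mse(X_tr, y_tr, w_ols), mse(X_tr, y_tr, w_ridge))
# ...but on held-out data the shrunk weights often do better.
print("test MSE: ", mse(X_te, y_te, w_ols), mse(X_te, y_te, w_ridge))
```

So the short answer to the question above: training error typically gets a bit worse with ridge, and the hope is that test error improves.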