bimtuckboo
bimtuckboo t1_j02qida wrote
Reply to comment by alkaway in [P] Are probabilities from multi-label image classification networks calibrated? by alkaway
No, it does not. It simply rescales the probabilities so that they all move either closer to 0.5 or further from 0.5.
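A minimal sketch of that behaviour, assuming a sigmoid/multi-label setup with made-up logits (none of these numbers come from the thread):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([-2.0, -0.5, 0.5, 2.0])   # made-up per-label logits

for T in (0.5, 1.0, 2.0):
    probs = sigmoid(logits / T)
    print(f"T={T}: {np.round(probs, 3)}")

# T > 1 pulls every probability toward 0.5 (less confident),
# T < 1 pushes them toward 0 or 1 (more confident);
# the ordering of the labels never changes.
```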
bimtuckboo t1_j02jss1 wrote
Reply to [P] Are probabilities from multi-label image classification networks calibrated? by alkaway
The easiest way to find out is to make some calibration plots (reliability diagrams) with your validation set. From there, depending on what the plots look like, there are some things you can do to improve the calibration post-training. Look into temperature scaling and Platt scaling.
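For example, a rough sketch of a calibration plot for a single label using scikit-learn; the data here is synthetic and deliberately miscalibrated just to show the shape of the plot, so substitute your own validation targets and predicted probabilities (for a multi-label model you would do this per label, or pool all labels together):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve

# Placeholder data: the "true" positive rate is prob**2, so the model
# is over-confident and the curve will bend below the diagonal.
rng = np.random.default_rng(0)
val_probs = rng.random(5_000)
val_labels = rng.random(5_000) < val_probs ** 2

frac_pos, mean_pred = calibration_curve(val_labels, val_probs, n_bins=10)

plt.plot(mean_pred, frac_pos, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.xlabel("Mean predicted probability (per bin)")
plt.ylabel("Fraction of positives (per bin)")
plt.legend()
plt.show()
```

If the curve hugs the diagonal, your probabilities are already reasonably calibrated and there may be nothing to fix.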
bimtuckboo t1_ixig3ln wrote
Reply to comment by vonabarak in [D] Am I stupid for avoiding high level frameworks? by bigbossStrife
The same I guess, never used it though
bimtuckboo t1_ixhr2e0 wrote
Reply to comment by programmerChilli in [D] Am I stupid for avoiding high level frameworks? by bigbossStrife
No idea why you are downvoted. High or low level is clearly relative and in this context PyTorch is clearly the low level option.
bimtuckboo t1_j0b1lti wrote
Reply to [D] Dealing with extremely imbalanced dataset by hopedallas
The issue described in the article you linked only becomes relevant when you are throwing away data (that you otherwise would have trained on) purely to rectify class imbalance. If computational limitations mean you couldn't have trained on that data anyway, even if the classes were 50/50 balanced, then discarding it costs you nothing.
Of course, more data can often lead to better performance, and if you find your model to be below par then you may want to explore ways to engineer around whatever computational limitations you are encountering so that you can train on more data. In that case you may want to revisit your approach to rectifying the class imbalance, but don't do it if you don't need to.
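A minimal sketch of that point, with made-up labels and a made-up row budget (nothing here is from the original post): if the only reason you drop rows is a compute budget, a plain uniform random subsample keeps the original class ratio, and no extra rebalancing step is forced on you.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random(1_000_000) < 0.02          # hypothetical labels, ~2% positive
budget = 100_000                          # however many rows you can afford to train on

idx = rng.choice(len(y), size=budget, replace=False)  # uniform random subsample
print(f"full set positive rate:  {y.mean():.4f}")
print(f"subsample positive rate: {y[idx].mean():.4f}")  # roughly the same ~2%
```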
Ultimately, anytime you are developing a model and you don't know what to do next, check if the model's performance is acceptable as is. You might not need to do anything.