Submitted by denisn03 t3_y5chwp in MachineLearning
Tiny_Arugula_5648 t1_isj2xgz wrote
Well it does depend on what type of models you want to build and how much data you'll be using… but the general rule of thumb is always go with the most powerful GPU and the largest amount of RAM you can afford.. having too little processing power means you'll wait around much longer for everything (training, predicting), and with too little RAM many of the larger models out there, like BERT, might not run at all..
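To make the RAM point concrete, here's a back-of-the-envelope sketch for BERT-base. The parameter count (~110M) is public; the 4x multiplier for training is an assumption based on Adam keeping extra per-parameter state (gradients plus two optimizer moments), and real usage also depends on activations and batch size, which this ignores:

```python
# Rough GPU memory estimate for BERT-base (sketch, not exact)
params = 110_000_000      # BERT-base parameter count
bytes_per_param = 4       # fp32

weights_gb = params * bytes_per_param / 1024**3

# Assumption: Adam adds gradients + two fp32 moment tensors per parameter,
# so training needs roughly 4x the weight memory (ignoring activations).
training_gb = weights_gb * 4

print(f"weights: {weights_gb:.2f} GB, training (approx): {training_gb:.2f} GB")
```

Even this lower bound shows why a small consumer GPU can struggle once batches and activations are added on top.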
Or just get a couple of Colab accounts.. I get plenty of V100 and even A100 time by switching between different accounts
Varterove_muke t1_iskhc9r wrote
How do you get around Google "noticing" the switching between accounts? I've tried it: I trained and saved a model on one account and transferred the model to another account, but when I tried to continue training, it blocked me from the TPU runtime environment.
Tiny_Arugula_5648 t1_isx67jr wrote
Doubtful you got "bricked" or that Google caught you switching accounts… more likely it's that TPUs are in high demand and expensive, the Colab service is best-effort allocation of unused resources, and there just weren't any TPUs available…