Submitted by dogonix t3_z6ylfq in MachineLearning
bloc97 t1_iy57elh wrote
With traditional gradient descent, probably not, as most operations in modern NN architectures are bottlenecked by bandwidth rather than compute. There is active research on alternative training methods, but they mostly have difficulty detecting malicious agents in the training pool. If you own all of the machines, those algorithms work, but when a non-negligible fraction of the agents is malicious, training might fail or even produce models that have backdoors or leak private training data.
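To make the malicious-agent point concrete, here is a minimal sketch (an illustration, not the specific methods the comment refers to): when gradients from many machines are combined by plain averaging, a single poisoned gradient can dominate the update, whereas a Byzantine-robust rule such as the coordinate-wise median is much harder to skew. All names and numbers in the snippet (worker count, gradient dimension, the attack) are hypothetical.

```python
# Sketch: one poisoned gradient vs. naive averaging and a robust aggregation
# rule (coordinate-wise median). Purely illustrative; all values are made up.
import numpy as np

rng = np.random.default_rng(0)
dim = 4                    # toy gradient dimension
honest_workers = 9         # workers computing real gradients
true_grad = np.ones(dim)   # pretend this is the "correct" gradient

# Honest workers return noisy versions of the true gradient.
grads = [true_grad + 0.1 * rng.standard_normal(dim) for _ in range(honest_workers)]

# One malicious worker sends an arbitrary, huge gradient (a simple poisoning attack).
grads.append(np.full(dim, -100.0))
grads = np.stack(grads)

naive_mean = grads.mean(axis=0)           # plain averaging: dragged far from true_grad
robust_median = np.median(grads, axis=0)  # coordinate-wise median: barely affected

print("naive mean   :", naive_mean)
print("coord. median:", robust_median)
```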
dogonix OP t1_iy78bqr wrote
Interesting. Do you have links to resources with more info about the issue you mention? Thanks.