Submitted by deluded_soul t3_1181g88 in MachineLearning
I am looking into techniques one could use for very large datasets in machine learning, specifically datasets with the following properties:
1: A 3D imaging dataset on the order of many terabytes.
2: Each individual 3D image is too big to fit in GPU or CPU memory (see the sketch below).
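For property 2, a common pattern is to store each volume in a chunked on-disk format and stream random sub-volumes (patches) at training time, so the full image is never materialized in memory. Here is a minimal sketch using PyTorch and zarr; the file layout, patch size, and patches-per-volume count are assumptions for illustration, not a prescription:

```python
# Minimal sketch: patch-based loading from chunked on-disk volumes.
# Assumes each volume is stored as a zarr array of shape (D, H, W);
# paths, patch size, and counts below are illustrative assumptions.
import numpy as np
import torch
import zarr
from torch.utils.data import Dataset

class RandomPatchDataset(Dataset):
    def __init__(self, volume_paths, patch_size=(96, 96, 96), patches_per_volume=100):
        # zarr.open only reads metadata here; chunks are fetched from disk
        # lazily, so a terabyte-scale volume is never loaded all at once.
        self.volumes = [zarr.open(p, mode="r") for p in volume_paths]
        self.patch_size = patch_size
        self.patches_per_volume = patches_per_volume

    def __len__(self):
        return len(self.volumes) * self.patches_per_volume

    def __getitem__(self, idx):
        vol = self.volumes[idx % len(self.volumes)]
        # Sample a random corner so the patch lies fully inside the volume.
        d, h, w = [np.random.randint(0, s - p + 1)
                   for s, p in zip(vol.shape, self.patch_size)]
        pd, ph, pw = self.patch_size
        # Only the chunks overlapping this slice are read from disk.
        patch = vol[d:d + pd, h:h + ph, w:w + pw]
        return torch.from_numpy(np.asarray(patch, dtype=np.float32)).unsqueeze(0)
```

Wrapping this in a DataLoader with several workers overlaps disk reads with GPU compute; the same idea works with HDF5 (h5py) or memory-mapped .npy files instead of zarr.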
I am interested in educating myself on methods that people have used in classical ML and modern deep learning for such extremely large datasets.
In particular, how does one capture long-range spatial interactions in such data, and what computational techniques make learning on it feasible?
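On the long-range-interaction point, one widely used trick, since no single full-resolution crop ever sees the whole volume, is a two-stream model: one branch processes a heavily downsampled copy of the entire volume for global context, while the other processes a full-resolution patch, and their features are fused before prediction. A minimal sketch; the layer sizes and fusion-by-concatenation choice are assumptions for illustration:

```python
# Minimal sketch of a two-stream 3D model: the global branch sees a
# downsampled whole volume, the local branch a full-resolution patch.
# All channel and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GlobalLocalNet(nn.Module):
    def __init__(self, in_ch=1, feat=32, num_classes=2):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.global_ = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(2 * feat, num_classes)

    def forward(self, patch, downsampled_volume):
        # patch: a full-resolution crop; downsampled_volume: the whole
        # image resized until it fits in memory (e.g. via F.interpolate).
        local_feat = self.local(patch).flatten(1)
        global_feat = self.global_(downsampled_volume).flatten(1)
        return self.head(torch.cat([local_feat, global_feat], dim=1))
```

At inference time, sliding-window prediction with overlapping patches (libraries such as MONAI ship an implementation) covers the full volume the same way.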
Finally, if someone can point me to some open-source examples of such ML systems (domain is not important) that I can learn from, I would be extremely grateful.
__lawless t1_j9eu7n2 wrote
r/learnmachinelearning