Submitted by olmec-akeru t3_z6p4yv in MachineLearning
olmec-akeru OP t1_iy7a8fe wrote
Reply to comment by vikigenius in [D] What method is state of the art dimensionality reduction by olmec-akeru
So this may not be true: the surface of a Riemannian manifold is infinite, so you can encode infinite knowledge onto it. From there, the diffeomorphic property allows one to traverse the surface and generate explainable, differentiable vectors.
vikigenius t1_iy7kxuu wrote
Huh? Diffeomorphisms are dimension-preserving. You can't have a diffeomorphism from R^n to R^2 unless n = 2; that's the only way the differential of the map can be bijective.
So I am not sure how diffeomorphisms would guarantee lossless dimensionality reduction.
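To spell out why dimension is preserved (a standard differential-geometry argument, added here as a gloss rather than something claimed in the thread):

```latex
f \colon M \to N \ \text{a diffeomorphism}
\;\Longrightarrow\;
df_p \colon T_p M \xrightarrow{\ \cong\ } T_{f(p)} N \ \text{for every } p
\;\Longrightarrow\;
\dim M = \dim N .
```

The differential of a diffeomorphism is a linear isomorphism of tangent spaces at every point, and linear isomorphisms exist only between vector spaces of equal dimension.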
What can happen is that your data inherently lies on a lower-dimensional manifold. For instance, if you have a subset of R^n with an inherent dimensionality of just 2, then you can trivially represent it in 2 dimensions. For example, if you have a 3D space where the 3rd dimension is an exact linear combination of the 1st and 2nd, then its inherent dimensionality is 2 and you can obviously losslessly reduce it to 2D.
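A minimal numerical sketch of that linear-combination example (my own illustration, assuming scikit-learn is available; this is not code from the thread):

```python
# Data in R^3 whose 3rd coordinate is an exact linear combination of the
# first two lies on a 2D plane, so rank-2 PCA reconstructs it losslessly.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
xy = rng.normal(size=(1000, 2))        # two free coordinates
z = 3.0 * xy[:, 0] - 2.0 * xy[:, 1]    # 3rd dim = linear combination of 1st and 2nd
X = np.column_stack([xy, z])           # points lie on a 2D plane in R^3

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)              # reduce to 2 dimensions
X_rec = pca.inverse_transform(X2)      # lift back to R^3

print(np.abs(X - X_rec).max())         # ~1e-15, i.e. lossless up to float error
```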
But most definitely not all datasets have an inherent dimensionality of 2.
gooblywooblygoobly t1_iydf4z2 wrote
A super trivial example: a (hyper)plane is a Riemannian manifold. Since we know that PCA is lossy, and PCA projects onto a (hyper)plane, projecting onto a manifold can't be enough to perfectly preserve information.
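To make that concrete (a sketch under the same assumptions as the snippet above, again using scikit-learn's PCA on hypothetical data, not anything from the comment):

```python
# Genuinely 3-dimensional data projected onto a 2D principal (hyper)plane
# must discard variance, so the reconstruction error is strictly positive.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))               # intrinsic dimensionality 3

pca = PCA(n_components=2).fit(X)
X_rec = pca.inverse_transform(pca.transform(X))

print(pca.explained_variance_ratio_.sum())   # < 1: some variance is lost
print(np.mean((X - X_rec) ** 2))             # > 0: the projection is lossy
```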