Submitted by Dear-Vehicle-3215 t3_yk17qn in MachineLearning
Hello Guys,
I am currently working on a dimensionality reduction task using a convolutional autoencoder. The dataset is 3D, so the model is a 3D AE (with attention).
In terms of MSE, the model seems to work well (it reconstructs the input quite well, though I am also trying to switch to a denoising task), but I would like to understand whether the extracted features are actually meaningful. I know this depends on the downstream task, but I read in this paper on the Contractive AE (https://icml.cc/2011/papers/455_icmlpaper.pdf) that the Frobenius norm of the Jacobian of the encoder is strongly correlated with the test error of the downstream task.
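(For reference, the quantity in question is the squared Frobenius norm of the Jacobian of the encoder $h = f(x)$,

$$\|J_f(x)\|_F^2 = \sum_{ij} \left( \frac{\partial h_j(x)}{\partial x_i} \right)^2,$$

which the paper only evaluates in closed form for a single sigmoid layer $h = s(Wx + b)$, where it reduces to $\sum_j \big(h_j (1 - h_j)\big)^2 \sum_i W_{ji}^2$.)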
The problem is that I am having a hard time implementing this metric, since mine is not an MLP autoencoder and I am not using the sigmoid nonlinearity that the closed-form expression above relies on.
- Is there a reason nobody seems to talk about contractive penalties in the context of convolutional AEs?
- Do you have any general advice about my objective (evaluating the quality of the extracted features)?
Thank you very much in advance
agent229 t1_iur2oqk wrote
You should be able to have autograd calculate the Jacobian for you in PyTorch or TensorFlow. Another thing I’ve done is a Monte Carlo version (sample near the encoding of a data point, propagate the samples through the decoder, and inspect the changes to the output). Perhaps it would also be useful to run t-SNE on the embeddings to view them in 2D…
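Rough, untested sketch of both ideas in PyTorch (your `encoder`/`decoder` and the shapes are placeholders for your model; the Hutchinson-style estimator is just one way to avoid materializing the full Jacobian for large 3D inputs):

```python
import torch

# Placeholders: encoder maps (B, C, D, H, W) -> (B, latent_dim),
# decoder maps (B, latent_dim) -> (B, C, D, H, W).

def jacobian_frob_sq(encoder, x, n_samples=8):
    """Unbiased Hutchinson-style estimate of ||J_f(x)||_F^2 via autograd.
    Uses E_v[||v^T J||^2] = ||J||_F^2 for v ~ N(0, I), so the full
    (latent_dim x input_dim) Jacobian is never materialized."""
    x = x.detach().unsqueeze(0).requires_grad_(True)  # (1, C, D, H, W)
    h = encoder(x).flatten()                          # (latent_dim,)
    total = 0.0
    for _ in range(n_samples):
        v = torch.randn_like(h)
        # Vector-Jacobian product v^T J, one backward pass per sample.
        (g,) = torch.autograd.grad(h, x, grad_outputs=v, retain_graph=True)
        total += (g ** 2).sum().item()
    return total / n_samples

def mc_decoder_sensitivity(encoder, decoder, x, eps=1e-2, n_samples=16):
    """Monte Carlo check: perturb the code of a data point, push the
    perturbed codes through the decoder, and measure how far the
    reconstructions move."""
    with torch.no_grad():
        z = encoder(x.unsqueeze(0))                   # (1, latent_dim)
        base = decoder(z)                             # (1, C, D, H, W)
        noise = eps * torch.randn((n_samples,) + z.shape[1:],
                                  device=z.device, dtype=z.dtype)
        out = decoder(z + noise)                      # (n_samples, C, D, H, W)
        # Per-sample output displacement, normalized by perturbation size.
        return (out - base).flatten(1).norm(dim=1) / eps

# For the 2D view of the embedding space (codes: (N, latent_dim) array):
# from sklearn.manifold import TSNE
# codes_2d = TSNE(n_components=2).fit_transform(codes)
```

Small Jacobian norms and small decoder sensitivity around the data points are what the contractive-AE paper associates with features that are locally invariant to irrelevant input directions.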