eyeswideshhh t1_ivxqi6r wrote
Reply to [P] Survival analysis by No_Captain_856
No. How would you answer a question like "survival probability of a patient 'X' days from admission" with a binary classification done only at the end of the study?
eyeswideshhh t1_iurj9lt wrote
Reply to comment by Dear-Vehicle-3215 in [D] About the evaluation of the features extracted by an Autoencoder by Dear-Vehicle-3215
I have never heard of this method; you could also try beta-VAE and joint-VAE.
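For reference, the core of beta-VAE is simply up-weighting the KL term of the usual VAE objective by a factor beta > 1, which pressures the encoder toward a more factorized latent code. A minimal sketch of that loss (the function name and the NumPy formulation are my own, not from any particular library):

```python
import numpy as np

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    # KL divergence between the encoder posterior N(mu, sigma^2)
    # and the standard normal prior N(0, I), summed over latent dims
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    # beta > 1 up-weights the KL penalty relative to reconstruction,
    # encouraging a more disentangled latent representation
    return recon_loss + beta * kl
```

With `mu = 0` and `logvar = 0` the KL term vanishes and the loss reduces to the reconstruction loss alone.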
eyeswideshhh t1_iurafh8 wrote
A denoising/vanilla autoencoder does not impose any constraints on the encoder's latent representation and thus may produce highly entangled features; you can verify this with clustering.
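One way to run that clustering check: encode your data, cluster the latent codes, and compare the cluster assignments to the known class labels with a score like adjusted Rand index. A sketch using scikit-learn, with synthetic stand-in latents (in practice they would come from your trained encoder, e.g. `latents = encoder(images)` for some hypothetical `encoder`):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# stand-in latent codes: two well-separated classes of 8-d vectors
latents = np.vstack([rng.normal(0, 0.1, (50, 8)),
                     rng.normal(3, 0.1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latents)
# ARI near 1 means clusters line up with class labels; a heavily
# entangled representation tends to score much lower
print(adjusted_rand_score(labels, clusters))
```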
eyeswideshhh t1_itbaz4q wrote
Map all the images into latent space, run PCA on the latent variables, and look for a predictable pattern; if there is one, decode it.
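A sketch of that pipeline with scikit-learn PCA, using random stand-in latents (a real run would use `latents = encoder(images)` and a `decoder` call at the end, both hypothetical here):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# stand-in for encoder outputs over a batch of 200 images
latents = rng.normal(size=(200, 32))

pca = PCA(n_components=2)
coords = pca.fit_transform(latents)   # inspect these for structure
edited = coords.copy()
edited[:, 0] += 1.0                   # walk along the first principal component
back = pca.inverse_transform(edited)  # map back to latent space
# images = decoder(back)              # hypothetical decoder call
print(coords.shape, back.shape)
```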
eyeswideshhh t1_itbar63 wrote
Reply to comment by _Yeet_xoxo in [Research]Goofing off - ML model to make the ultimate gay porn by thhvancouver
After that, run an LSTM/RNN on the sequence of latent vectors obtained from the autoencoder to predict the next latent vector, then decode that, maybe.
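A minimal sketch of such a next-latent predictor in PyTorch (assumed available; the module name and sizes are illustrative, and the input would be a sequence of autoencoder latents rather than the random tensor used here):

```python
import torch
import torch.nn as nn

class NextLatent(nn.Module):
    """LSTM over a sequence of latent vectors, predicting the next one."""
    def __init__(self, latent_dim=16, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq):          # z_seq: (batch, time, latent_dim)
        h, _ = self.lstm(z_seq)
        return self.head(h[:, -1])     # predicted next latent vector

model = NextLatent()
z_seq = torch.randn(4, 10, 16)         # stand-in for encoder outputs
z_next = model(z_seq)                  # would then be fed to the decoder
print(z_next.shape)
```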
eyeswideshhh t1_j3mwcf3 wrote
Reply to [R] Diffusion language models by benanne
I had this exact thought: use a VAE, BYOL, etc. to generate powerful representations for text/sentences, and then train a diffusion model on the continuous latent data.
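The forward-noising half of that idea can be sketched in a few lines. Below is a DDPM-style forward process applied to continuous latent vectors (the schedule constants follow the standard DDPM linear schedule; the random `z0` is a stand-in for real sentence representations, e.g. a hypothetical `text_encoder(sentences)`):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in sentence representations: 8 sentences, 64-d latents
z0 = rng.normal(size=(8, 64))

# linear beta schedule and cumulative alphas, as in DDPM
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(z0, t, eps):
    # forward diffusion: noise the continuous latent at step t
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

eps = rng.normal(size=z0.shape)
zt = q_sample(z0, 500, eps)
# a denoising network would then be trained to predict eps from (zt, t),
# and samples decoded back to text with the representation model's decoder
print(zt.shape)
```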