huehue12132 t1_j43ikwm wrote
Reply to [D] Has ML become synonymous with AI? by Valachio
It doesn't really matter whether or not there are people *working* on other stuff -- the two terms are different by definition.
huehue12132 t1_is2r76e wrote
Reply to comment by hjmb in [D] Are the inference functions of models a "Linear map"? by [deleted]
You're right, of course. I think I just misunderstood what you were getting at with your answer when I made that comment.
huehue12132 t1_irzzwkl wrote
Reply to comment by hjmb in [D] Are the inference functions of models a "Linear map"? by [deleted]
There are many ML models that are not neural networks.
huehue12132 t1_ir4hops wrote
Reply to [R] Self-Programming Artificial Intelligence Using Code-Generating Language Models by Ash3nBlue
I like how every single reference is either 2016 or newer, OR Schmidhuber.
huehue12132 t1_j9e9xqf wrote
Reply to [D] On papers forcing the use of GANs where it is not relevant by AlmightySnoo
GANs can be useful as alternative/additional loss functions. E.g., in the original pix2pix paper (https://arxiv.org/abs/1611.07004), pairs (X, Y) are available, so they could have trained this directly as a regression task. However, they found better results using an L1 loss plus a GAN loss.
Keep in mind that using something like squared error loss has a ton of assumptions underlying it (if you interpret training as maximum likelihood estimation) such as outputs being conditionally independent and following a Gaussian distribution. A GAN discriminator can represent a more complex/more appropriate loss function.
To be clear, I'm not claiming that all of these papers add something of value -- just that there are legitimate reasons to use GANs even when you have known input-output pairs.
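As a rough sketch of what I mean, here's the pix2pix-style generator objective (adversarial term plus a lambda-weighted L1 reconstruction term) in plain NumPy. The function name and array inputs are hypothetical, and lambda = 100 is just the weighting the pix2pix paper happens to use:

```python
import numpy as np

def generator_loss(fake_scores, fake_images, target_images, lam=100.0):
    """Pix2pix-style generator objective.

    fake_scores:   discriminator outputs in (0, 1) on the generated images
    fake_images:   generator outputs G(x)
    target_images: the known paired targets Y
    lam:           weight on the L1 term (100 in the pix2pix paper)
    """
    # Non-saturating GAN loss: the generator wants D(G(x)) -> 1.
    adv = -np.mean(np.log(fake_scores + 1e-8))
    # L1 reconstruction term against the known paired target --
    # this is the part you'd keep if you trained it as pure regression.
    l1 = np.mean(np.abs(fake_images - target_images))
    return adv + lam * l1
```

The point is that the L1 term alone corresponds to the "plain regression" baseline, while the adversarial term lets the discriminator act as a learned, more expressive loss on top of it.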