
> The denoising part of a denoising autoencoder refers to the noise applied to its input

Agreed, it converts a noisy image into a denoised one. But the odd thing is that when you feed a noisy image into a StyleGAN2 encoder, you get latents which the decoder will turn into a denoised image. So in practical use, you can take a trained StyleGAN2 encoder/decoder pair and use it as if it were a denoiser. For example https://arxiv.org/abs/2103.04192

> These differences lead to learned distributions in the latent space that are entirely different

I also agree there. The training for a denoising auto-encoder and for a GAN is different, leading to different learned distributions which are sampled when generating images. But the architectures are still very similar, meaning the limits of what can be learned should be the same.

> Beyond that the comparison just doesn't work, yes there are two networks but the discriminator doesn't play the role of the AE's encoder at all

Yes, the discriminator in a GAN doesn't work like an encoder. But if you look at how StyleGAN 1/2 are used in practice, people combine them with a so-called "projection", which is effectively an encoder that converts images to latents. So people end up with a pipeline of "image-to-latent encoder" + "latent-to-image decoder".

That whole pipeline is very similar to an auto-encoder. For example, here's an NVIDIA paper where they round-trip from image to latent to image with StyleGAN: https://arxiv.org/abs/1912.04958 My interpretation of what they did in that paper is that they effectively trained a StyleGAN-like model with the image L2 loss typically used for training a denoising auto-encoder.
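To make that reading concrete, here's a toy sketch (all names are hypothetical, and a random linear map stands in for a frozen StyleGAN-style decoder, which is obviously a huge simplification): "projection" is just gradient descent on the image L2 loss over the latent, and because the latent is much lower-dimensional than the image, the round-trip drops the noise components the decoder can't represent.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 64, 4                      # image dim, latent dim (latent << image)
A = rng.standard_normal((d, k))   # toy frozen "decoder": latent -> image

w_true = rng.standard_normal(k)
x_clean = A @ w_true              # an image the decoder can represent
x_noisy = x_clean + 0.5 * rng.standard_normal(d)

# "Projection": gradient descent on the image L2 loss ||G(w) - x_noisy||^2,
# i.e. the same reconstruction loss a denoising auto-encoder trains with,
# but optimized over the latent w instead of over network weights.
w = np.zeros(k)
for _ in range(500):
    w -= 0.005 * A.T @ (A @ w - x_noisy)

x_recon = A @ w
# The round-tripped image is closer to the clean one than the noisy input:
# only the small part of the noise lying in the decoder's range survives.
print(np.linalg.norm(x_noisy - x_clean), np.linalg.norm(x_recon - x_clean))
```

The real StyleGAN2 projection optimizes in W space with extra tricks (perceptual loss, noise regularization), but the denoising effect has the same flavor: the decoder's limited range acts as a prior.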


