Generative adversarial networks and image-to-image translation | Luis Herranz
Yet another post about generative adversarial networks (GANs), pix2pix and CycleGAN. You can already find lots of websites with great introductions to GANs (such as…
AI & Architecture
In this article, we release a part of our thesis, developed at Harvard, and submitted in May 2019. This piece is one building block of a larger body of work, investigating AI’s inception in…
vae autoencoder artwork - Google Search
sampling from latent space faces - Google Search
Natural Language Generation with Neural Variational Models
In this thesis, we explore the use of deep neural networks for generation of natural language. Specifically, we implement two sequence-to-sequence neural variational models - variational autoencoders (VAE) and variational encoder-decoders (VED). VAEs for text generation are difficult to train due to issues associated with the Kullback-Leibler (KL) divergence term of the loss function vanishing to zero. We successfully train VAEs by implementing optimization heuristics such as KL weight…
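The "KL weight" heuristic the abstract alludes to is typically an annealing schedule: the KL term of the VAE loss is scaled by a weight that grows from 0 to 1 during training, so the decoder cannot learn to ignore the latent code before the encoder becomes useful. Below is a minimal PyTorch sketch assuming linear annealing; the function names, the anneal_steps value, and the loss shape are illustrative, not the thesis's actual implementation:

```python
import torch
import torch.nn.functional as F

def kl_weight(step, anneal_steps=10000):
    # Linear KL annealing: the weight grows from 0 to 1 over anneal_steps,
    # a common heuristic to keep the KL term from vanishing early in training.
    return min(1.0, step / anneal_steps)

def vae_loss(recon_logits, targets, mu, logvar, step):
    # Reconstruction term: token-level cross-entropy over the sequence.
    # recon_logits: (batch, seq_len, vocab); targets: (batch, seq_len)
    recon = F.cross_entropy(
        recon_logits.view(-1, recon_logits.size(-1)),
        targets.view(-1),
        reduction="sum",
    )
    # KL divergence between q(z|x) = N(mu, diag(sigma^2)) and the prior N(0, I),
    # in closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Annealed objective: full reconstruction loss, down-weighted KL early on.
    return recon + kl_weight(step) * kl
```

Other schedules (sigmoid, cyclical) drop in by changing kl_weight alone; the rest of the objective stays the same.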