Abstract
This talk introduces the core ideas behind Generative Adversarial Networks (GANs), explaining how a generator and a discriminator compete to learn complex data distributions. It covers the theoretical principles behind divergence minimisation, the original GAN objective, and why practical training remains challenging despite strong theoretical foundations. A brief simulation demonstrates GANs’ ability to approximate distributions and raises open questions about using GAN-generated data for statistical inference.
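
For reference, the original GAN objective mentioned above is the minimax game of Goodfellow et al. (2014):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

For a fixed generator with induced distribution $p_g$, the optimal discriminator gives $V(D^\ast, G) = 2\,\mathrm{JSD}(p_{\mathrm{data}} \,\|\, p_g) - \log 4$, which is the sense in which GAN training performs divergence minimisation between the data and generator distributions.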