Authors: Enes Eken
Abstract: Generative adversarial networks (GANs) are used in a wide range of applications where drawing samples from a data probability distribution without explicitly representing it is essential. Unlike deep convolutional neural networks (CNNs) trained to map an input to one of several outputs, monitoring overfitting and underfitting in GANs is not trivial, since GANs generate data rather than classify it. While training-set and validation-set accuracy give a direct measure of overfitting and underfitting for CNNs during training, evaluating GANs relies mainly on visual inspection of the generated samples and on the generator/discriminator costs. Unfortunately, visual inspection is far from objective, and the generator/discriminator costs are highly nonintuitive. This paper proposes a method for quantitatively detecting overfitting and underfitting in GANs during training by computing the approximate derivative of the Fréchet distance between the generated data distribution and the real data distribution, either unconditionally or conditioned on a specific class. Both distributions can be obtained from the distribution of the embeddings in the discriminator network of the GAN. The method is independent of the GAN's architecture and cost function, and empirical results on MNIST and CIFAR-10 support its effectiveness.
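The quantity the abstract describes can be sketched with the standard closed-form Fréchet distance between two Gaussians fitted to embedding vectors, plus a finite-difference approximation of its derivative across epochs. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and it assumes the embeddings (here, rows of a matrix) come from the GAN's discriminator rather than an Inception network.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_real, feat_fake):
    """Squared Frechet distance between Gaussians fitted to two sets of
    embedding vectors (rows = samples, columns = embedding dimensions)."""
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        # The matrix square root can pick up tiny imaginary parts
        # from numerical noise; discard them.
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

def fd_slope(fd_history):
    """Approximate derivative of the Frechet distance over training epochs
    via first-order finite differences (hypothetical monitoring helper)."""
    return np.diff(np.asarray(fd_history, dtype=float))
```

Under this reading, a slope that stays near zero while the distance remains high would suggest underfitting, and a slope that turns positive after a long decrease would suggest the onset of overfitting.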
Keywords: Generative adversarial networks, Fréchet inception distance, overfitting, underfitting