In generative adversarial networks, the generator and discriminator are trained simultaneously. If training is not balanced, one network can overpower the other. If the discriminator is trained too much, it easily tells fake images from real ones, the generator receives little useful feedback, and it cannot learn to produce realistic images. If the generator is trained too much, the discriminator cannot distinguish real images from fake ones. We can mitigate this problem by setting the learning rates of both networks appropriately.
When we train the discriminator, we don't train the generator, and when we train the generator, we don't train the discriminator. This allows each network to be trained correctly. Now, let's take a look at each part of the GAN.
Discriminator Network:
We use the MNIST digit dataset, with images of shape (28, 28, 1). Because the images are small, we can use an MLP as the discriminator instead of convolutional layers. To do this, we first reshape the input into a single vector of size (784, 1). Then I applied three dense layers with 128 hidden units in each layer.
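The discriminator described above can be sketched as follows. This is a minimal sketch, assuming Keras (the text doesn't name a library), LeakyReLU activations, and a single sigmoid output unit; none of those details are stated in the original.

```python
from tensorflow.keras.layers import Dense, Flatten, Input, LeakyReLU
from tensorflow.keras.models import Sequential

def build_discriminator():
    # Flatten the (28, 28, 1) image into a 784-dimensional vector, then
    # apply three dense hidden layers of 128 units each (the text mentions
    # 128 units per layer), and end with a single sigmoid unit that scores
    # the image as real (close to 1) or fake (close to 0).
    return Sequential([
        Input(shape=(28, 28, 1)),
        Flatten(),
        Dense(128), LeakyReLU(0.2),
        Dense(128), LeakyReLU(0.2),
        Dense(128), LeakyReLU(0.2),
        Dense(1, activation="sigmoid"),
    ])
```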
Generator Network:
To create the generator network, we take random noise of shape (100, 1) as input. Then I used three hidden layers, the last of size 1024. The output of the generator network is then reshaped to (28, 28, 1). I applied batch normalization to each hidden layer. Batch normalization improves the quality of the trained model and stabilizes the training process.
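A minimal Keras sketch of this generator, under stated assumptions: the text only gives the last hidden size (1024), so the earlier sizes of 256 and 512 are assumptions, as are the LeakyReLU activations and the tanh output (which pairs with images normalized to [-1, 1]).

```python
from tensorflow.keras.layers import BatchNormalization, Dense, Input, LeakyReLU, Reshape
from tensorflow.keras.models import Sequential

def build_generator(noise_dim=100):
    # Three dense hidden layers ending in 1024 units, each followed by
    # batch normalization, then a 784-unit tanh output reshaped into a
    # (28, 28, 1) image.
    return Sequential([
        Input(shape=(noise_dim,)),
        Dense(256), LeakyReLU(0.2), BatchNormalization(),
        Dense(512), LeakyReLU(0.2), BatchNormalization(),
        Dense(1024), LeakyReLU(0.2), BatchNormalization(),
        Dense(28 * 28 * 1, activation="tanh"),
        Reshape((28, 28, 1)),
    ])
```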
Combined Models:
In order to train the generator, we create a combined model in which the discriminator is not trained. In the combined model, random noise is fed into the generator network, and the generated image is then passed through the discriminator network to obtain a label. Here, I mark the discriminator model as non-trainable.
Training the GAN Network:
Training a GAN requires careful tuning of hyperparameters. If the model is not trained carefully, it will not converge to produce good results. We will use the following steps to train this GAN:
1. First, normalize the input dataset (the MNIST images).
2. Train the discriminator on real images (from the MNIST dataset).
3. Sample the same number of noise vectors and generate fake images with the generator network (the generator is not trained here).
4. Train the discriminator network on the images generated in the previous step.
5. Draw a new random noise sample and train the generator through the combined model, without training the discriminator.
6. Repeat steps 2-5 for a certain number of iterations. I trained for 30,000 iterations.
7. View the images generated by the GAN network.
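The steps above can be sketched as a training loop. This is an illustrative sketch, not the author's exact code: the batch size of 64, Adam with a 2e-4 learning rate, the small stand-in networks, and the normalization to [-1, 1] are all assumptions.

```python
import numpy as np
from tensorflow.keras.layers import Dense, Flatten, Input, Reshape
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

NOISE_DIM = 100

def build_models():
    # Small stand-ins for the generator, discriminator, and combined model.
    generator = Sequential([Input(shape=(NOISE_DIM,)),
                            Dense(256, activation="relu"),
                            Dense(784, activation="tanh"),
                            Reshape((28, 28, 1))])
    discriminator = Sequential([Input(shape=(28, 28, 1)),
                                Flatten(),
                                Dense(128, activation="relu"),
                                Dense(1, activation="sigmoid")])
    discriminator.compile(optimizer=Adam(2e-4), loss="binary_crossentropy")
    discriminator.trainable = False  # frozen inside the combined model
    combined = Sequential([generator, discriminator])
    combined.compile(optimizer=Adam(2e-4), loss="binary_crossentropy")
    return generator, discriminator, combined

def train_gan(x_train, iterations=30000, batch_size=64):
    # Step 1: normalize images to [-1, 1] to match the generator's tanh output.
    x_train = (x_train.astype("float32") - 127.5) / 127.5
    x_train = x_train.reshape(-1, 28, 28, 1)
    generator, discriminator, combined = build_models()
    real = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))
    for _ in range(iterations):
        # Step 2: train the discriminator on a batch of real images.
        idx = np.random.randint(0, x_train.shape[0], batch_size)
        discriminator.train_on_batch(x_train[idx], real)
        # Step 3: sample noise and generate fake images (generator not trained).
        noise = np.random.normal(0, 1, (batch_size, NOISE_DIM))
        generated = generator.predict(noise, verbose=0)
        # Step 4: train the discriminator on the generated images.
        discriminator.train_on_batch(generated, fake)
        # Step 5: train the generator through the combined model, asking the
        # frozen discriminator to label its output as real.
        noise = np.random.normal(0, 1, (batch_size, NOISE_DIM))
        combined.train_on_batch(noise, real)
    return generator
```

For step 7, calling `generator.predict` on fresh noise after training yields images to inspect.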