Image Generation
One of the most popular applications of GANs is in image generation. GANs can be used to generate realistic images from random noise, which has applications in art, design, and entertainment. GANs have also been used to create high-resolution images from low-resolution inputs, known as super-resolution.
Example:
Let’s build and train a simple GAN to generate images using the MNIST dataset:
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt
# Load and preprocess the MNIST dataset
(X_train, _), (_, _) = mnist.load_data()
X_train = (X_train - 127.5) / 127.5  # Scale pixel values to [-1, 1] to match a tanh generator output
X_train = np.expand_dims(X_train, axis=-1)  # Add a channel dimension: (60000, 28, 28, 1)
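# Define a simple generator and discriminator for the GAN.
# This is a minimal illustrative architecture; the layer sizes below are one
# reasonable choice, not a requirement.
from tensorflow.keras.layers import Dense, LeakyReLU, Reshape, Flatten

def build_generator(latent_dim=100):
    # Map a 100-dimensional noise vector to a 28x28x1 image in [-1, 1]
    model = Sequential([
        Dense(256, input_dim=latent_dim),
        LeakyReLU(0.2),
        Dense(512),
        LeakyReLU(0.2),
        Dense(28 * 28 * 1, activation='tanh'),
        Reshape((28, 28, 1)),
    ])
    return model

def build_discriminator():
    # Classify 28x28x1 images as real (1) or fake (0)
    model = Sequential([
        Flatten(input_shape=(28, 28, 1)),
        Dense(512),
        LeakyReLU(0.2),
        Dense(256),
        LeakyReLU(0.2),
        Dense(1, activation='sigmoid'),
    ])
    return model

generator = build_generator()
discriminator = build_discriminator()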
# Define the GAN
def build_gan(generator, discriminator):
    # Compile the discriminator for its own training step
    discriminator.compile(optimizer=Adam(0.0002, 0.5), loss='binary_crossentropy', metrics=['accuracy'])
    # Freeze the discriminator's weights inside the combined model so that only the generator is updated through it
    discriminator.trainable = False
    gan = Sequential([generator, discriminator])
    gan.compile(optimizer=Adam(0.0002, 0.5), loss='binary_crossentropy')
    return gan
gan = build_gan(generator, discriminator)
# Training the GAN
def train_gan(gan, generator, discriminator, epochs=10000, batch_size=128):
    for epoch in range(epochs):
        # Train the discriminator on a batch of real images and a batch of generated (fake) images
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        real_images = X_train[idx]
        noise = np.random.normal(0, 1, (batch_size, 100))
        fake_images = generator.predict(noise)
        real_labels = np.ones((batch_size, 1))
        fake_labels = np.zeros((batch_size, 1))
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
        # Train the generator: it is rewarded when the discriminator labels its fakes as real
        noise = np.random.normal(0, 1, (batch_size, 100))
        valid_labels = np.ones((batch_size, 1))
        g_loss = gan.train_on_batch(noise, valid_labels)
        # Print the progress and sample images every 1000 epochs
        if epoch % 1000 == 0:
            print(f"Epoch {epoch} [D loss: {d_loss[0]}, acc.: {100 * d_loss[1]}] [G loss: {g_loss}]")
            sample_images(generator, epoch)
# Sample and save a grid of generated images
def sample_images(generator, epoch, n=10):
    noise = np.random.normal(0, 1, (n * n, 100))
    generated_images = generator.predict(noise)
    generated_images = 0.5 * generated_images + 0.5  # Rescale from [-1, 1] back to [0, 1]
    fig, axs = plt.subplots(n, n, figsize=(10, 10))
    cnt = 0
    for i in range(n):
        for j in range(n):
            axs[i, j].imshow(generated_images[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig(f"gan_mnist_{epoch}.png")  # Illustrative filename; adjust as needed
    plt.show()
    plt.close(fig)
train_gan(gan, generator, discriminator)
This code snippet defines and trains a simple GAN for image generation using the MNIST dataset.
Style Transfer
Another fascinating application of GANs is style transfer, where the style of one image is applied to the content of another. This technique is widely used in artistic and creative fields, enabling the transformation of photos into artworks in the style of famous painters.
Example:
While implementing a full style transfer model is beyond the scope of this introduction, you can explore GAN-based approaches such as CycleGAN, or classic Neural Style Transfer (NST), using pre-trained models available for frameworks such as TensorFlow or PyTorch.
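For instance, the sketch below applies a pre-trained fast style transfer model from TensorFlow Hub. The module URL and the image file names are assumptions for illustration; the same pattern works with other pre-trained NST implementations.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path, max_dim=512):
    # Load an image, scale pixels to [0, 1] floats, and add a batch dimension
    img = Image.open(path).convert('RGB')
    img.thumbnail((max_dim, max_dim))
    return np.array(img, dtype=np.float32)[np.newaxis, ...] / 255.0

content_image = load_image('content.jpg')  # Hypothetical file names
style_image = load_image('style.jpg')
# Pre-trained arbitrary image stylization module from TensorFlow Hub (assumed URL)
hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')
stylized = hub_module(tf.constant(content_image), tf.constant(style_image))[0]
Image.fromarray((stylized[0].numpy() * 255).astype(np.uint8)).save('stylized.jpg')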