Something from nothing: Where are we at with GANs?

by Chloé Braithwaite

Back in 2014, Ian Goodfellow introduced us to Generative Adversarial Networks, GANs for short, a kind of deep learning model that can learn without supervision. Goodfellow also co-wrote the Deep Learning textbook with Yoshua Bengio.

“In order to make computers intelligent, they need to have knowledge. But the question is: where do they get that knowledge from?” asks Professor Yoshua Bengio, head of the Montreal Institute for Learning Algorithms (MILA) at the Université de Montréal.

“Previously, researchers tried to give computers knowledge directly. This was classical AI, where we tell computers about facts and rules. But unfortunately, a lot of the things we know aren’t communicable; they’re intuitive.”

Essentially, there are things we know but can’t, or just don’t, explain.

“The progress we’ve made in the last few years is pretty amazing, but it’s mostly based on supervised learning where the humans tell computers how to interpret images, sounds or text. We believe that there are some limitations to the kind of learning we’re doing now that humans are able to deal with.”

The question is how to get computers to learn unsupervised, without hard-coding facts and figures.

“We don’t want to tell the computer what the underlying concepts are that explain [certain] data.”

Bengio stresses that he is uncertain what the best way forward is, gleefully acknowledging that there could be multiple paths to developing autonomously learning AI. This lack of a clear path has led to many innovations, with GANs leading the way in the generative learning space.

“One way of seeing if a computer understands the data we input is to ask the computer to generate new examples that should be coming from the same distribution.”

For example, suppose a program has seen thousands of images of different faces. To see whether the computer has understood this training set, programmers can ask it to generate a completely new, totally novel face based on the data. This task is harder than it might seem, because ‘natural’ images are very unlike typical computer-generated ones.
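
To make that concrete, here is a minimal sketch of what “generating a new example” looks like in code. The toy PyTorch generator, its sizes, and the name G are illustrative assumptions, not any particular paper’s architecture; untrained, it produces noise, but the sampling mechanics are the same once it has been trained.

```python
import torch
import torch.nn as nn

# A toy generator mapping 100-dimensional noise vectors to 64x64
# grayscale images. Architecture and sizes are illustrative only.
latent_dim = 100
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64), nn.Tanh())

with torch.no_grad():
    z = torch.randn(1, latent_dim)   # a random point in latent space
    novel_face = G(z).view(64, 64)   # decoded into a brand-new image
```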

Over the years, there have been many generative models in machine learning, but the success of Generative Adversarial Networks comes from their antagonistic nature.

“GANs are very different from traditional machine learning. We have these two networks, the generator and the discriminator, and each of them is actually optimizing a different thing, so the things that make one perform well make the other one get worse.”
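
To make that tug-of-war concrete, here is a minimal sketch of the adversarial training loop in PyTorch. The tiny fully-connected networks and the random stand-in for real data are illustrative assumptions; the point is the two opposing loss functions.

```python
import torch
import torch.nn as nn

# A minimal adversarial setup: G tries to fool D, D tries not to be fooled.
latent_dim, data_dim, batch = 100, 784, 64

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)       # stand-in for a real data batch
    fake = G(torch.randn(batch, latent_dim))  # generator's attempt to fool D

    # Discriminator's goal: score real data as 1 and generated data as 0.
    loss_D = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator's goal is the exact opposite: make D score its fakes as 1.
    loss_G = bce(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```

Whatever makes one network's loss go down tends to push the other's up; that opposition is the whole game.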

“A nice thing with GANs, as well as many generative networks which have a latent space and internal representation, is that we can actually play with the representations and interpolate between them, or add and subtract from them.”
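
In code, that kind of play is just vector math on the latent inputs. A minimal sketch, reusing a trained generator G like the one sketched earlier (all vectors here are random stand-ins):

```python
import torch

latent_dim = 100
z_a, z_b, z_c = (torch.randn(latent_dim) for _ in range(3))

# Interpolation: decoding each step with G yields a smooth morph from
# the image for z_a to the image for z_b.
interpolations = [(1 - t) * z_a + t * z_b for t in torch.linspace(0, 1, 8)]

# Arithmetic: add and subtract representations to compose attributes,
# then decode the result with G(z_composed).
z_composed = z_a - z_b + z_c
```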

“The first thing that we observed is that training a GAN can be unstable. Sometimes the training goes well for some time, and you can see that the images being generated are getting nicer and nicer, and then suddenly things get bad and the training objective starts moving in strange ways.”

The principal reason for the instability is the same thing that makes GANs so successful as generative models: the two networks are optimizing directly against each other.
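
In Goodfellow’s original formulation, that competition is a minimax game over a single value function, which the discriminator D tries to maximize while the generator G tries to minimize:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

Gradient-based training of a saddle-point problem like this has no single loss that reliably decreases, which is why the curves can ‘move in strange ways’ rather than settle smoothly.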

Other common hurdles researchers have come up against include mode collapse, where the generator produces the same image over and over; missing modes, where the generated samples fail to cover the diversity of the real data; and the lack of a clear quantitative measure of quality.

So, with such challenges to overcome, why continue working on GANs?

“Well, it’s so different from what has been done before in machine learning that, you know, it could bring us even better things in the future!”

Note: This article is based on the keynote talk Bengio gave at the AI With The Best conference on April 29–30. It was his first time speaking with With The Best.