Amazing GANs at PyCon TW 2017
I was in San Francisco for Google Next ’17 when Drake told me: “You should talk at PyCon Taiwan.” It seemed like a good idea, except I had nothing to share. Then KKBOX needed a speaker, and I said yes before I had a topic.
Back in 2016, Andrew Ng gave this advice at the Bay Area Deep Learning School:
There is one PhD student process that I find incredibly reliable. And I am gonna say it, and you may or may not trust it. But I have seen this work so reliably so many times that I hope you take my word for it: this process reliably turns non-machine-learning researchers into very good machine learning researchers, which is … there is no magic, really. Read a lot of papers and work on replicating results.
That was the reason I started to think about replicating the results of papers. Then I found that Yann LeCun had said:
Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don’t know how to make the cake.
I like cake, so I started to read GAN papers, which are about unsupervised learning. Before I had a topic for PyCon, I had already replicated DCGAN/EBGAN/WGAN and gotten results as good as the papers’. Besides, the only time I use Python is when I’m doing deep learning. I had no choice: I had to talk about GANs. Fortunately, I enjoy the papers.
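To give a flavor of what replicating these papers involves, here is a minimal sketch of the basic GAN training loop that DCGAN-style papers build on. This is only an illustration, assuming PyTorch; the tiny fully-connected networks, sizes, and hyperparameters below are placeholders, not the actual setups from the papers or my talk:

```python
# Minimal GAN training loop sketch (illustrative only).
import torch
import torch.nn as nn

latent_dim = 64

# Toy generator and discriminator for flat 28x28 images in [-1, 1].
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real):
    """One adversarial step; `real` is a (batch, 784) tensor in [-1, 1]."""
    batch = real.size(0)
    ones = torch.ones(batch, 1)
    zeros = torch.zeros(batch, 1)

    # Train D: push real images toward 1, generated images toward 0.
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()  # detach so this step only updates D
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train G: try to make D output 1 on generated images.
    z = torch.randn(batch, latent_dim)
    loss_g = bce(D(G(z)), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Each paper then varies pieces of this loop: DCGAN swaps in convolutional networks, EBGAN replaces the discriminator with an energy function, and WGAN changes the loss and training schedule.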