Unsupervised Representation Learning using Wasserstein Generative Adversarial Network

Authors

  • Iftakher Hasan Mohammed Tarek
  • Mohammed Mahmudur Rahman
  • Zulkifly Mohd Zaki

Keywords:

Deep Neural Network, Generative Adversarial Network, Wasserstein GAN, Convolutional Neural Network, Unsupervised Learning, Representation Learning

Abstract

Representation learning has attracted considerable attention in recent years, yet its unsupervised variant remains far less explored than supervised representation learning. This paper combines a deep convolutional network with a generative adversarial network (GAN) to learn image features without labels. A GAN is a deep learning architecture that pits two neural networks against each other in a framework resembling a zero-sum game, with the aim of generating synthetic data that resembles a known data distribution; the idea was first introduced by Ian Goodfellow and colleagues in June 2014. This work adopts the Wasserstein GAN (WGAN), a variant that differs from the traditional GAN in several important respects, and highlights these differences, the benefits of the WGAN formulation, and its architecture. The model is trained on one image dataset to extract features and subsequently evaluated on another, demonstrating that the discriminator learns a hierarchy of representations, from object parts upward. The purpose of this paper is to use a convolutional Wasserstein GAN as an unsupervised feature extractor for unlabeled datasets.
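
As a rough illustration of the approach described above (not the authors' code), the sketch below assumes PyTorch and shows a WGAN-style setup: a convolutional critic trained with the Wasserstein objective (maximise E[D(x_real)] - E[D(G(z))]) and weight clipping, whose intermediate convolutional activations are then reused as unsupervised features. The network sizes, hyperparameters, and the feature-extraction hook are illustrative assumptions only.

# Minimal WGAN sketch with weight clipping (assumes PyTorch); all sizes are illustrative.
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor; its activations can later be reused
        # as unsupervised representations.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.score = nn.Linear(128 * 8 * 8, 1)  # unbounded critic output (no sigmoid)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.score(h), h  # critic score plus the learned features

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 8), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

G, D = Generator(), Critic()
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

real = torch.randn(16, 3, 32, 32)  # stand-in for a batch of real 32x32 images
z = torch.randn(16, 100, 1, 1)

# Critic step: minimise E[D(fake)] - E[D(real)], then clip weights so the critic
# stays (approximately) 1-Lipschitz, as in the original WGAN formulation.
fake = G(z).detach()
loss_d = D(fake)[0].mean() - D(real)[0].mean()
opt_d.zero_grad()
loss_d.backward()
opt_d.step()
for p in D.parameters():
    p.data.clamp_(-0.01, 0.01)

# Generator step: minimise -E[D(G(z))].
loss_g = -D(G(z))[0].mean()
opt_g.zero_grad()
loss_g.backward()
opt_g.step()

# After training, the critic's convolutional activations serve as learned features.
with torch.no_grad():
    _, features = D(real)
print(features.shape)  # (16, 128 * 8 * 8)

In practice the critic is typically updated several times per generator step, and the clipping constant and learning rate are tunable hyperparameters; the DCGAN-style architecture above is only one plausible choice for the convolutional feature extractor.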

References

E. L. Denton, S. Chintala, A. Szlam, R. Fergus, “Deep generative image models using a Laplacian pyramid of adversarial networks,” arXiv preprint arXiv:1506.05751, 2015.

A. Dosovitskiy, P. Fischer, J. T. Springenberg, M. Riedmiller and T. Brox, “Discriminative unsupervised feature learning with exemplar convolutional neural networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1734-1747, 2016, https://doi.org/10.1109/TPAMI.2015.2496141.

A. Krizhevsky, I. Sutskever, G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Proceedings of the 25th International Conference on Neural Information Processing Systems, Curran Associates Inc.: Lake Tahoe, Nevada, vol. 1, 2012, pp. 1097-1105.

P. Vincent, et al., “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” J. Mach. Learn. Res., vol. 11, pp. 3371-3408, 2010.

Z. Zhu, et al., “Deep learning representation using autoencoder for 3D shape retrieval,” Neurocomputing, vol. 204, pp. 41-50, 2016. https://doi.org/10.1016/j.neucom.2015.08.127.

A. Coates, A. Y. Ng, “Learning feature representations with k-means,” In Neural Networks: Tricks of the Trade, pp. 561–580. Springer, 2012. https://doi.org/10.1007/978-3-642-35289-8_30.

M. D. Zeiler, R. Fergus, “Visualizing and understanding convolutional networks,” Proceedings of the European Conference on Computer Vision (ECCV 2014), Springer, 2014, pp. 818–833. https://doi.org/10.1007/978-3-319-10590-1_53.

A. Mordvintsev, C. Olah, M. Tyka, “Inceptionism: Going deeper into neural networks,” 2015. [Online]. Available at: http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html.

I. J. Goodfellow, et al., “Generative adversarial networks,” arXiv preprint arXiv:1406.2661, 2014.

A. A. Efros, T. K. Leung, “Texture synthesis by non-parametric sampling,” Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, vol. 2, pp. 1033–1038. https://doi.org/10.1109/ICCV.1999.790383.

W. T. Freeman, T. R. Jones, E. C. Pasztor, “Example-based super-resolution,” IEEE Computer Graphics and Applications, vol. 22, issue 2, pp. 56–65, 2002. https://doi.org/10.1109/38.988747.

J. Hays, and A. A. Efros, “Scene completion using millions of photographs,” ACM Transactions on Graphics (TOG), vol. 26, issue 3, p. 4, 2007. https://doi.org/10.1145/1276377.1276382.

J. Portilla, E. P. Simoncelli, “A parametric texture model based on joint statistics of complex wavelet coefficients,” International Journal of Computer Vision, vol. 40, issue 1, pp. 49–70, 2000.

A. Dosovitskiy, J. T. Springenberg, T. Brox, “Learning to generate chairs with convolutional neural networks,” arXiv preprint arXiv:1411.5928, 2014. https://doi.org/10.1109/CVPR.2015.7298761.

K. Gregor, I. Danihelka, A. Graves, D. Wierstra, “DRAW: A recurrent neural network for image generation,” arXiv preprint arXiv:1502.04623, 2015.

A. Radford, L. Metz, S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” CoRR, abs/1511.06434, 2015.

P. Ebner, A. Eltelt, “Audio inpainting with generative adversarial network,” 2020.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, F.-F. Li, “ImageNet: A large-scale hierarchical image database,” Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 2009, pp. 248-255, https://doi.org/10.1109/CVPR.2009.5206848.

A. Krizhevsky, “Learning multiple layers of features from tiny images,” Technical Report, University of Toronto, 2009.

S. Ioffe, C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.

V. Nair, G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807–814.

B. Xu, et al., “Empirical evaluation of rectified activations in convolutional network,” ArXiv e-prints, 2015.

D. P. Kingma, J. Ba, “Adam: A method for stochastic optimization,” CoRR, abs/1412.6980, 2014.

A. Coates, A. Y. Ng, “Selecting receptive fields in deep networks,” Proceedings of the 24th International Conference on Neural Information Processing Systems (NIPS'11), Curran Associates Inc., Red Hook, NY, USA, 2011, 2528–2536.

Published

2024-10-03

How to Cite

Tarek, I. H. M., Rahman, M. M., & Zaki, Z. M. (2024). Unsupervised Representation Learning using Wasserstein Generative Adversarial Network. International Journal of Computing, 23(3), 415-420. Retrieved from https://computingonline.net/computing/article/view/3660
