Sports Recognition using Convolutional Neural Network with Optimization Techniques from Images and Live Streams
DOI: https://doi.org/10.47839/ijc.20.2.2176

Keywords: Sport classification, Multimedia content analysis, Deep learning, Pre-trained models, Convolutional Neural Networks, VGG16, ResNet50, Model optimizer

Abstract
This paper addresses the automated recognition of sports in images and videos. As a sub-problem of human behavior recognition, classifying images and video clips into predefined sports categories remains a difficult task. The paper proposes a sports detection system based on deeper CNN models in which a fully connected classification head is added and fine-tuned, and applies it to classify five individual sports from images and videos. Video classification is performed on a per-frame (image) basis. Two pre-trained deep CNNs, ResNet50 and VGG16, are extended to build the system. The extended models are trained for five epochs with the RMSProp, Adam, and SGD optimizers on the proposed 5sports dataset, compiled by handpicking thousands of sports images from the internet, to classify the five types of sports. A training accuracy of approximately 83% is observed for ResNet50 with the SGD optimizer on five sports classes, and approximately 95% on three sports classes.
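The transfer-learning pipeline described above can be illustrated with a short sketch. The paper does not publish code, so the following TensorFlow/Keras example is an assumption of how such a system is typically built: the layer sizes, learning rate, and dataset path `5sports/train` are hypothetical, while the backbone (ResNet50), optimizer (SGD), number of classes (5), and number of epochs (5) follow the abstract.

```python
# Minimal sketch (assumed setup): ResNet50 pre-trained on ImageNet, extended with a
# new fully connected head and trained with SGD on a 5-class sports image dataset.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

NUM_CLASSES = 5          # the five sports categories
IMG_SIZE = (224, 224)    # ResNet50's default input resolution

# Pre-trained convolutional backbone without its original ImageNet classifier.
base = ResNet50(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze the backbone; only the new head is trained

# New fully connected head for sports classification (sizes are illustrative).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# SGD, as used for the reported results; swap in Adam or RMSprop to compare optimizers.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Hypothetical image folder with one sub-directory per sport.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "5sports/train",
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="categorical",
)
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))

model.fit(train_ds, epochs=5)  # five epochs, as stated in the abstract
```

For video clips or live streams, the same image classifier would be applied frame by frame, with per-frame predictions aggregated (for example, by majority vote) into a clip-level label, consistent with the image-based video classification the abstract describes.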