Computer Modelling of Textures on Images with Human Skin Wound Areas
DOI: https://doi.org/10.47839/ijc.23.3.3669

Keywords: wound segmentation, wound classification, autoencoder, variational autoencoder, Siamese networks, contrastive learning, image feature extraction

Abstract
In this paper, we focus on images of wounds on human skin and propose to treat each image as a set of smaller pieces – crops, or patches, containing different textures. We review, develop, and compare deep learning feature extraction methods that model image crops as 200-dimensional feature vectors using several artificial neural network architectures: convolutional autoencoders, variational convolutional autoencoders, and Siamese convolutional networks trained in the contrastive learning manner. We also develop a custom convolutional encoder and decoder, use them in the aforementioned architectures, and compare them with ResNet-based encoder and decoder alternatives. Finally, we train and evaluate k-nearest neighbors and multi-layer perceptron classifiers on the features extracted with the model options above to discriminate between skin, wound, and background image patches. Features extracted with the Siamese network give the best test accuracy for all implementations, with no significant difference between model versions (accuracy above 93%). Variational autoencoders yield near-random results for all options (accuracy around 33%, i.e., chance level for three classes). Convolutional autoencoders reach good results (accuracy above 77%), but with a noticeable gap between the custom and ResNet versions, the latter being better. The custom encoder and decoder implementations are faster and smaller than the ResNet alternatives but may be less stable on larger datasets, which still requires investigation. Possible applications of the feature vectors include extracting areas of interest during wound segmentation or classification, and serving as patch embeddings when training vision transformer architectures.
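To make the modelling pipeline concrete, below is a minimal sketch of a convolutional autoencoder that compresses an RGB image patch into a 200-dimensional feature vector, written in PyTorch. The 64×64 patch size and the layer widths are illustrative assumptions, not the paper's exact custom architecture.

```python
# A minimal sketch (not the authors' exact architecture) of a convolutional
# autoencoder mapping RGB image patches to 200-dimensional feature vectors.
# Patch size (64x64) and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 200):
        super().__init__()
        # Encoder: 3x64x64 patch -> 200-d latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), # -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder: 200-d latent vector -> reconstructed 3x64x64 patch
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),    # -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)           # 200-d feature vector used downstream
        return self.decoder(z), z
```

Training such a model minimizes a reconstruction loss (e.g., mean squared error) between the input patch and the decoder output, after which only the encoder is kept as the feature extractor.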
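The Siamese option instead trains a shared encoder on patch pairs with a contrastive objective. The sketch below shows one common formulation, the margin-based pairwise contrastive loss; the margin value and the choice of Euclidean distance are assumptions rather than details taken from the paper.

```python
# A hedged sketch of a pairwise contrastive loss for Siamese training:
# two patches pass through the same encoder, and the loss pulls
# same-texture pairs together and pushes different-texture pairs apart.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """z1, z2: (batch, 200) embeddings from the shared encoder.
    same_label: 1.0 where the two patches share a texture class, else 0.0."""
    dist = F.pairwise_distance(z1, z2)                     # Euclidean distance per pair
    pos = same_label * dist.pow(2)                         # pull similar pairs together
    neg = (1 - same_label) * F.relu(margin - dist).pow(2)  # push dissimilar pairs apart
    return (pos + neg).mean()
```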
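Downstream, the extracted vectors are evaluated with classical classifiers. A minimal illustration of the k-nearest neighbors step on 200-dimensional features follows; the random placeholder data and k=5 stand in for the real encoder outputs and unstated hyperparameters.

```python
# A minimal sketch of the downstream evaluation step: fit a k-nearest
# neighbors classifier on 200-dimensional patch features and score it on
# held-out skin / wound / background labels. k=5 is an assumed value.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for encoder outputs: (n_patches, 200) features
# and integer labels {0: skin, 1: wound, 2: background}.
rng = np.random.default_rng(0)
features = rng.normal(size=(600, 200)).astype(np.float32)
labels = rng.integers(0, 3, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0
)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"test accuracy: {knn.score(X_te, y_te):.3f}")
```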