Automatic image segmentation using Region-Based convolutional networks for Melanoma skin cancer detection
Abstract
Melanoma is one of the most aggressive skin cancers; however, early detection can significantly increase the probability of a cure. Unfortunately, it is also one of the most difficult skin cancers to detect, and its detection relies mainly on the dermatologist’s expertise and experience with melanoma. This research targets the most common melanoma stains or spots that could potentially evolve into melanoma skin cancer. Region-based Convolutional Neural Networks were used to detect and segment images of the skin area of interest. The neural network model focuses on providing instance segmentation rather than only bounding-box object detection. The Mask R-CNN model was implemented to provide a solution for scenarios with small training datasets. Two pipelines were implemented: the first used only the Region-based Convolutional Neural Network, while the second combined a first stage using Mask R-CNN with a second stage that fed its result into GrabCut, another segmentation method based on graph cuts. Results measured with the Dice Similarity Coefficient and the Jaccard Index showed that Mask R-CNN alone segmented more accurately than the Mask R-CNN + GrabCut pipeline. In both models, results varied very little when the training dataset size changed among 160, 100, and 50 images. Both pipelines performed the segmentation correctly, which illustrates that localization of the lesion zone is possible with very small datasets and shows the potential of automatic segmentation to assist in melanoma detection.
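The two evaluation metrics named in the abstract have simple closed forms: for binary masks A (prediction) and B (ground truth), Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / |A∪B|. The following is a minimal illustrative sketch of how such scores can be computed over flattened binary masks; it is not the authors' evaluation code, and the toy masks below are invented for the example.

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks (flat sequences of 0/1)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

def jaccard_index(pred, truth):
    """Jaccard Index (Intersection over Union) between two binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return intersection / union if union else 1.0

# Toy 4x4 masks flattened to 1-D: predicted lesion pixels vs. ground truth.
pred  = [0, 1, 1, 0,  0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0]
truth = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 1, 0,  0, 0, 0, 0]

print(round(dice_coefficient(pred, truth), 4))  # 2*5 / (6+5) = 0.9091
print(round(jaccard_index(pred, truth), 4))     # 5 / 6       = 0.8333
```

Note that Dice weighs the overlap against the average mask size, so it is always at least as large as Jaccard on the same pair of masks; the paper reports both.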
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The authors retain copyright and grant the journal the right of first publication, together with the right to edit, reproduce, distribute, display, and communicate the work in the country and abroad through print and electronic media. They likewise assume responsibility for any litigation or claim related to intellectual property rights, releasing the Editorial Tecnológica de Costa Rica from liability. In addition, the authors may enter into separate, additional contractual arrangements for the non-exclusive distribution of the version of the article published in this journal (e.g., depositing it in an institutional repository or publishing it in a book), provided they clearly indicate that the work was first published in this journal.