Abstract
Hair and ruler mark structures in dermoscopic images are an obstacle preventing accurate image segmentation and detection of critical network features. Recognizing and removing hairs from images can be challenging, especially for hairs that are thin, overlapping, or faded, of a color similar to the surrounding skin, or overlaid on a textured lesion. This paper proposes a novel deep learning (DL) technique to detect hair and ruler marks in skin lesion images. Our proposed ChimeraNet is an encoder-decoder architecture that employs a pretrained EfficientNet in the encoder and squeeze-and-excitation residual (SERes) structures in the decoder. We applied this approach at multiple image sizes and evaluated it on the publicly available HAM10000 (ISIC 2018 Task 3) skin lesion dataset. Our test results show that the largest image size (448 × 448) gave the highest accuracy of 98.23% and a Jaccard index of 0.65, outperforming two well-known deep learning approaches, U-Net and ResUNet-a. The Dice loss function gave the best results across all measures. Evaluated further on 25 additional test images, the technique yields state-of-the-art accuracy compared to 8 previously reported classical techniques. We conclude that the proposed ChimeraNet architecture may enable improved detection of fine image structures. Further application of DL techniques to detect dermoscopy structures is warranted.
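For readers unfamiliar with the components named above, the sketch below illustrates in PyTorch a squeeze-and-excitation residual (SERes) block and a soft Dice loss of the kind the abstract describes. The layer arrangement, reduction ratio, and function names are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch, assuming a standard SE-residual design and soft Dice loss;
# not the authors' exact ChimeraNet code.
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    """Residual conv block whose output is recalibrated channel-wise by an SE gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Squeeze: global average pool; excite: bottleneck 1x1 convs -> channel weights.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.body(x)
        y = y * self.se(y)        # channel-wise recalibration
        return torch.relu(x + y)  # residual connection

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for a binary hair/ruler mask; pred holds probabilities in [0, 1]."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()
```

In a U-Net-style decoder stage, one would upsample, concatenate the matching EfficientNet encoder feature map via the skip connection, and then apply such a block before the final 1 × 1 segmentation head.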
Recommended Citation
N. Lama et al., "ChimeraNet: U-Net for Hair Detection in Dermoscopic Skin Lesion Images," Journal of Digital Imaging, Springer, Jan 2022.
The definitive version is available at https://doi.org/10.1007/s10278-022-00740-6
Department(s)
Electrical and Computer Engineering
Second Department
Chemistry
Keywords and Phrases
Deep learning; Dermoscopy; Hair removal; Image segmentation; Melanoma; Transfer learning
International Standard Serial Number (ISSN)
1618-727X; 0897-1889
Document Type
Article - Journal
Document Version
Final Version
File Type
text
Language(s)
English
Rights
© 2022 Springer, All rights reserved.
Publication Date
01 Jan 2022
Comments
National Institutes of Health, Grant CA101639-02A2