A dermatologist-like automatic classification system is developed in this paper to recognize
nine different classes of pigmented skin lesions (PSLs), using a separable vision transformer
(SVT) technique to assist clinical experts in early skin cancer detection. In the past, researchers
have developed a few systems to recognize nine classes of PSLs. However, these systems often require
enormous computational resources to achieve high performance, which makes them burdensome to deploy on
resource-constrained devices. In this paper, a new approach to designing the SVT architecture is developed
based on SqueezeNet and depthwise separable CNN models. The primary goal is to find a deep
learning architecture with few parameters that has comparable accuracy to state-of-the-art (SOTA)
architectures. This paper modifies the SqueezeNet design for improved runtime performance by
utilizing depthwise separable convolutions rather than standard convolutional units. To develop this
Assist-Dermo system, a data augmentation technique is applied to address the PSL class-imbalance problem.
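To illustrate the depthwise separable substitution described above, a minimal PyTorch sketch of a SqueezeNet-style Fire block whose 3x3 expand path is replaced by a depthwise separable convolution is given below; the class names, channel sizes, and normalization choices are illustrative assumptions, not the exact Assist-Dermo configuration.

```python
# Minimal sketch: depthwise separable convolution as a drop-in replacement for
# the standard 3x3 convolution in a SqueezeNet-style Fire block.
# All names and channel sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class SeparableFire(nn.Module):
    """SqueezeNet-style Fire block with the 3x3 expand path made separable."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = DepthwiseSeparableConv(squeeze_ch, expand_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.act(self.squeeze(x))
        return torch.cat([self.act(self.expand1x1(s)), self.expand3x3(s)], dim=1)

x = torch.randn(1, 96, 56, 56)             # dummy feature map
print(SeparableFire(96, 16, 64)(x).shape)   # torch.Size([1, 128, 56, 56])
```

The depthwise stage filters each channel independently and the pointwise stage mixes channels, so the parameter count is far lower than that of an equivalent standard 3x3 convolution while the receptive field is preserved.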
Next, a pre-processing step is integrated to select the most dominant region and then enhance the
lesion patterns in a perceptually oriented color space. Afterwards, the Assist-Dermo system is designed
to improve efficacy and performance using several layers and multiple filter sizes, but with fewer filters and
parameters. For the training and evaluation of Assist-Dermo models, a set of PSL images is collected
from different online data sources, such as PH2, ISBI-2017, HAM10000, and ISIC, to recognize nine
classes of PSLs. On the chosen dataset, it achieves an accuracy (ACC) of 95.6%, a sensitivity (SE) of
96.7%, a specificity (SP) of 95%, and an area under the curve (AUC) of 0.95. The experimental results
show that the suggested Assist-Dermo technique outperforms SOTA algorithms in recognizing
nine classes of PSLs. The Assist-Dermo system performs better than other competing systems
and can support dermatologists in the diagnosis of a wide variety of PSLs through dermoscopy. The
Assist-Dermo model code is freely available on GitHub for the scientific community.
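As a further illustration of the pre-processing idea, the sketch below enhances lesion contrast in a perceptually oriented color space; applying CLAHE to the lightness channel of CIELAB with OpenCV is an assumed, common realization of this step, not necessarily the exact pipeline used by Assist-Dermo.

```python
# Minimal sketch of contrast enhancement in a perceptually oriented color space.
# CIELAB + CLAHE is an assumed choice for illustration, not the paper's exact pipeline.
import cv2
import numpy as np

def enhance_lesion(bgr_image: np.ndarray) -> np.ndarray:
    """Enhance lesion contrast by equalising only the lightness channel."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)                       # boost local contrast of lightness only
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Usage: img = cv2.imread("lesion.jpg"); out = enhance_lesion(img)
```

Working on the lightness channel leaves the chromatic channels untouched, so lesion color cues are preserved while local texture and border patterns become more visible.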