Revolutionizing Skin Cancer Classification: Unveiling the Potential of Information Science and Transfer Learning Architectures for Enhanced Diagnosis

Shailja Pandey

Abstract

Malignant melanoma is the deadliest form of skin cancer. Dermoscopy provides noninvasive, high-resolution imaging that helps clinicians diagnose skin malignancies more precisely. Melanoma grows rapidly and aggressively, which is why it remains among the world's fastest-growing malignancies. Once it has spread to other organs or tissues, the likelihood of a positive response to therapy drops to 5%, ten-year survival falls to around 10%, and surgical removal is no longer a therapeutic option. Early diagnosis of malignant melanoma is therefore of the utmost importance, yet melanoma has the highest false-negative rate of all skin malignancies. Deep Neural Networks (DNNs) are currently among the most popular and effective approaches to medical image processing. While deep learning has shown promising results, obstacles remain when applying it to such problems, including data variability, noise sensitivity, and insufficiently large training datasets. With an emphasis on clinical imaging, this study offers strategies to help deep-learning models address these concerns in the context of skin cancer diagnosis. The work investigates melanoma classification using transfer learning and compares several state-of-the-art Convolutional Neural Network (CNN) architectures, drawing on CNN models pre-trained on the large-scale ImageNet dataset. These models are fine-tuned and evaluated on dermoscopy images of the skin from a publicly available dataset, and the images are augmented to increase the effective size of the training set. The results show that different CNN architectures have distinct strengths and weaknesses on this classification task; VGG16 performed best, with a test accuracy of 85.76 percent and a training accuracy of 90.1 percent. This research sheds light on how dermoscopy images can be used for effective deep-learning-based skin cancer screening and detection. Further gains may be possible by tuning the models' hyperparameters and increasing the variety and volume of the training data.
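
To make the transfer-learning setup concrete, the sketch below shows one common way to fine-tune an ImageNet-pre-trained VGG16 on dermoscopy images with basic augmentation, using Keras. The directory path, image size, augmentation settings, and classifier head are illustrative assumptions, not the exact configuration used in this study.

```python
# Minimal sketch: fine-tuning ImageNet-pre-trained VGG16 for melanoma classification.
# The dataset path, image size, augmentation settings, and classifier head are
# assumptions for illustration; they are not the paper's exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)  # VGG16's native ImageNet input size

# Data augmentation enlarges the effective training set, as described in the abstract.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    horizontal_flip=True,
    zoom_range=0.1,
    validation_split=0.2,
)

train_data = datagen.flow_from_directory(
    "dermoscopy_images/",   # hypothetical directory of class-labelled dermoscopy images
    target_size=IMG_SIZE,
    batch_size=32,
    class_mode="binary",    # e.g. benign vs. malignant
    subset="training",
)
val_data = datagen.flow_from_directory(
    "dermoscopy_images/",
    target_size=IMG_SIZE,
    batch_size=32,
    class_mode="binary",
    subset="validation",
)

# Transfer learning: reuse the ImageNet convolutional features and train a small
# classifier head; the frozen base can later be partially unfrozen for fine-tuning.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(train_data, validation_data=val_data, epochs=10)
```

Freezing the pre-trained base and training only the new head, then optionally unfreezing the top convolutional block at a lower learning rate, is a standard fine-tuning pattern; the same scaffold can be swapped to other ImageNet-pre-trained CNN architectures for the comparison described above.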
