Skin Cancer Classification using Convolutional Capsule Network (CapsNet)

Satapathy, Suresh Chandra; Cruz, Meenalosini; Namburu, Anupama; Chakkaravarthy, Sibi; Pittendreigh, Matthew

Abstract

Researchers are proficient at preprocessing skin images but struggle to identify efficient classifiers for skin cancer because lesions vary widely in size, color, and shape; as a result, no single classifier is sufficient for classifying skin cancer lesions. Convolutional Neural Networks (CNNs) have played an important role in deep learning and have proven successful in classification tasks across many fields. However, present-day models available for skin cancer classification fail to take important spatial relations between features into consideration. They classify effectively only when certain features are present in the test data, ignoring the features' spatial relations to one another, which results in false negatives. They also lack rotational invariance, meaning that the same lesion viewed at different angles may be assigned to different classes, leading to false positives. The Capsule Network (CapsNet) is designed to overcome the above-mentioned problems. Capsule Networks use modules called capsules in place of pooling, offering an alternative to translational invariance. The Capsule Network uses layer-based squashing and dynamic routing: it employs vector-output capsules and routing-by-agreement instead of the scalar-output feature detectors and max-pooling of traditional CNNs. All of this helps avoid false positives and false negatives. The Capsule Network architecture consists of several convolution layers with one capsule layer as the final layer. Hence, in the proposed work, skin cancer classification is performed with a CapsNet architecture that can work well with high-dimensional hyperspectral images of skin.
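The squashing non-linearity and routing-by-agreement mentioned above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; it follows the generic CapsNet formulation (Sabour et al.), and the array shapes and iteration count are illustrative assumptions:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squashing non-linearity: short vectors shrink toward zero,
    # long vectors saturate just below unit length, so a capsule's
    # length can be read as the probability that its entity exists.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, iters=3):
    # Dynamic routing-by-agreement over prediction vectors
    # u_hat with shape (num_lower_caps, num_upper_caps, dim).
    b = np.zeros(u_hat.shape[:2])                             # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per upper capsule
        v = squash(s)                                         # (num_upper_caps, dim)
        b = b + (u_hat * v[None]).sum(axis=-1)                # reinforce agreeing predictions
    return v
```

Because the coupling coefficients are recomputed from agreement rather than taken as a hard max (as in max-pooling), spatial relationships between lower- and higher-level features are preserved instead of discarded.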


Keyword(s)

Capsule Network; CNN; Computer Aided Diagnosis; Skin Cancer Detection; Skin Cancer Classification
