This project involved collaborating with a team of medical researchers from a well-known university in Germany. The goal was to use deep learning, specifically convolutional neural networks (CNNs), to detect brain tumors from microscope slide images: the model learns to classify each slide into one of two categories, normal or tumor.
The most challenging part of this project was handling the severe class imbalance: the dataset contained roughly 100 normal-tissue images for every tumor image. A naively trained model latched onto the majority class and predicted "normal" every time.
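One common remedy for this kind of imbalance is to weight each class inversely to its frequency, so rare tumor examples count as much as the abundant normal ones. The sketch below illustrates the idea with the 100:1 ratio described above; the function name and the pure-Python implementation are illustrative, not the project's actual code.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Compute a per-class weight inversely proportional to class frequency.

    With 100 "normal" labels and 1 "tumor" label, the tumor class gets a
    weight 100x larger, so each tumor sample contributes as much to the
    loss as a hundred normal samples.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

labels = ["normal"] * 100 + ["tumor"] * 1
weights = inverse_frequency_weights(labels)
# weights["tumor"] is 100x weights["normal"]
```

These weights can then be fed into a weighted loss or a weighted sampler so that mini-batches are no longer dominated by the majority class.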
We designed a CNN using PyTorch and the fastai library. To make the most of the limited labelled data, we collected additional brain-tissue slide images from public sources on the internet and used semi-supervised learning to pretrain the model on this unlabelled set. We then designed a custom loss function that weights the two classes so the model learns equally from the imbalanced dataset, and we empirically compared several regularization techniques, keeping the one with the best results.
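The exact form of the custom loss is not spelled out above; one plausible class-weighted formulation is a weighted binary cross-entropy, sketched here for a single prediction. The weight values and function name are illustrative assumptions.

```python
import math

def weighted_bce(p, y, w_pos=100.0, w_neg=1.0, eps=1e-7):
    """Class-weighted binary cross-entropy for a single prediction.

    p     -- predicted probability of the positive (tumor) class
    y     -- true label, 1 for tumor, 0 for normal
    w_pos -- weight applied to tumor examples (illustrative value)
    w_neg -- weight applied to normal examples
    """
    # Clip the probability away from 0 and 1 to keep log() finite.
    p = min(max(p, eps), 1.0 - eps)
    return -(w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1 - p))
```

With these weights, a misclassified tumor slide incurs a penalty 100x that of a misclassified normal slide, counteracting the 100:1 imbalance in the data.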
We evaluated the model with weighted accuracy and ROC curves, which are better indicators of performance on imbalanced data than plain accuracy. We also used evaluation metrics standard in the medical field, sensitivity and specificity, to compare our results against the gold standard. Once the model was trained successfully, we applied explainable-AI algorithms to visualize the image regions the model relied on for each prediction and to interpret its decisions.
Compared against our gold standard, the model achieved sensitivity and specificity above 90%, exceeding the performance of a human doctor. Moreover, the regions highlighted by the model corresponded to regions that doctors themselves consider important when making a diagnosis. As a result, we deployed the trained model as an API endpoint that returns the input image with the suspicious regions highlighted, along with the predicted diagnosis.
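The endpoint's response format is not specified above; one simple design is a JSON payload carrying the highlighted image as base64 alongside the prediction. The field names, function name, and placeholder bytes below are purely illustrative, not the deployed API's actual schema.

```python
import base64
import json

def build_diagnosis_response(image_bytes, diagnosis, confidence):
    """Assemble a JSON-serializable payload of the kind such an
    endpoint might return: the highlighted image (base64-encoded)
    plus the predicted class and its confidence."""
    return {
        "diagnosis": diagnosis,
        "confidence": round(confidence, 4),
        "image_base64": base64.b64encode(image_bytes).decode("ascii"),
    }

# Placeholder bytes stand in for the rendered heatmap-overlay PNG.
payload = build_diagnosis_response(b"fake-image-bytes", "tumor", 0.97)
body = json.dumps(payload)
```

Encoding the image as base64 lets a single JSON response carry both the visualization and the structured prediction, which keeps the client-side handling simple.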
fastai, PyTorch, Matplotlib, Django