Over the years, we have seen increasingly powerful models built to distinguish between objects. These models keep improving in both accuracy and latency, but have we ever wondered what exactly they pick up from the images used to train them to make practically flawless predictions? There are undoubtedly features in those images that the models rely on to make their predictions, and that is what we seek to explore in this article. Not long ago, researchers at Stanford University released a paper (https://arxiv.org/abs/1901.07031) on how they are using deep learning to push the edge of pneumonia diagnosis. Their work really fascinated me, so I tried it out in PyTorch, and I am going to show you how I implemented it using a different dataset from Kaggle.
This is a companion discussion topic for the original entry at https://blog.paperspace.com/detecting-and-localizing-pneumonia-from-chest-x-ray-scans-with-pytorch/