The aim of this three-part series has been to shed light on the landscape and development of deep learning models that have defined the field and improved our ability to solve challenging problems. In Part 1 we covered models developed from 2012-2014, namely AlexNet, VGG16, and GoogLeNet. In Part 2 we saw more recent models from 2015-2016: ResNet, InceptionV3, and SqueezeNet. Now that we've covered the popular architectures and models of the past, we'll move on to the state of the art.
This is a companion discussion topic for the original entry at https://blog.paperspace.com/popular-deep-learning-architectures-densenet-mnasnet-shufflenet