So far we've built two major components of our neural architecture search (NAS) pipeline. In the second part of this series we created a model generator that takes encoded sequences, builds MLP models from them, compiles them, transfers weights from previously trained architectures, and trains the new models. In the third part we built the controller for our NAS tool, which samples architecture sequences using an LSTM. We also looked at how to incorporate an accuracy predictor into the controller.
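The controller's core job, sampling an encoded architecture sequence token by token, can be sketched in miniature as below. This is a hypothetical, simplified stand-in, not the series' implementation: the vocabulary size, sequence length, and the random logits (which in the real controller would come from the LSTM's output at each timestep) are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 8  # number of encodable layer choices (hypothetical)
MAX_LEN = 4     # maximum architecture sequence length (hypothetical)

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def sample_sequence():
    """Sample one encoded architecture sequence, one token at a time."""
    sequence = []
    for _ in range(MAX_LEN):
        # In the real controller these logits would come from the LSTM
        # at this timestep; here we substitute random values.
        logits = rng.normal(size=VOCAB_SIZE)
        probs = softmax(logits)
        token = rng.choice(VOCAB_SIZE, p=probs)
        sequence.append(int(token))
    return sequence

print(sample_sequence())
```

Each sampled sequence would then be decoded by the model generator into an MLP, trained, and its validation accuracy fed back as the controller's reward signal.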
This is a companion discussion topic for the original entry at https://blog.paperspace.com/neural-architecture-search-reinforce-gradient