Neural Architecture Search Controller Implementation | Paperspace Blog

In the first part of this series we saw an overview of neural architecture search, including a review of the state-of-the-art literature. In Part 2 we then saw how to turn our encoded sequences into MLP models. We also looked at training these models, transferring weights layer by layer for one-shot learning, and saving those weights.
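As a rough illustration of the Part 2 workflow described above, the sketch below decodes an encoded sequence of `(units, activation)` tokens into per-layer weight matrices, reusing previously stored weights for layers of matching shape (the layer-by-layer transfer behind one-shot learning). The helper names (`decode_sequence`, `save_weights`, `shared_weights`) and the keying scheme are hypothetical, not the blog's actual implementation.

```python
import numpy as np

# Hypothetical shared store: (layer index, fan-in, units) -> weight matrix.
shared_weights = {}

def decode_sequence(sequence, input_dim):
    """Turn an encoded sequence of (units, activation) tokens into
    per-layer (weights, activation) pairs, reusing stored weights
    whenever a layer of the same position and shape was seen before."""
    layers = []
    fan_in = input_dim
    for i, (units, activation) in enumerate(sequence):
        key = (i, fan_in, units)
        if key not in shared_weights:
            # First time we see this layer shape: initialize fresh weights.
            shared_weights[key] = np.random.randn(fan_in, units) * 0.01
        layers.append((shared_weights[key], activation))
        fan_in = units
    return layers

def save_weights(layers, sequence, input_dim):
    """Write (possibly trained) weights back into the shared store so
    the next sampled architecture can reuse them layer by layer."""
    fan_in = input_dim
    for i, ((weights, _), (units, _)) in enumerate(zip(layers, sequence)):
        shared_weights[(i, fan_in, units)] = weights
        fan_in = units

# Example: a 3-layer MLP encoded as (units, activation) tokens.
seq = [(16, "relu"), (8, "relu"), (2, "softmax")]
model = decode_sequence(seq, input_dim=10)
```

A second call to `decode_sequence` with a sequence sharing the same leading layers would pick up the stored matrices instead of reinitializing them, which is what makes the one-shot transfer cheap.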
This is a companion discussion topic for the original entry at https://blog.paperspace.com/neural-architecture-search-controllers