Implementing Gradient Descent in Python, Part 4: Using Any Number of Neurons

In this tutorial we extend our implementation of gradient descent to work with a single hidden layer with any number of neurons.
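The full implementation is in the linked article; as a minimal sketch of the idea, here is a single hidden layer with a configurable neuron count trained by plain gradient descent on a toy XOR dataset (the dataset, learning rate, and neuron count are illustrative assumptions, not the tutorial's exact values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR dataset: 4 samples, 2 features each (illustrative, not from the tutorial).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 5   # any number of hidden neurons works here
lr = 0.5       # learning rate (assumed value)

# Parameters: input -> hidden, then hidden -> output.
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(pred):
    return float(np.mean((pred - y) ** 2))

initial_loss = mse(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, n_hidden)
    out = sigmoid(h @ W2 + b2)    # predictions, shape (4, 1)

    # Backward pass (mean squared error loss, sigmoid derivative a*(1-a))
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta back-propagated to the hidden layer

    # Gradient descent parameter updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_loss = mse(out)
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

Because the hidden-layer size only appears in the shapes of `W1`, `b1`, and `W2`, changing `n_hidden` requires no other code changes, which is the point of the generic implementation.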


This is a companion discussion topic for the original entry at https://blog.paperspace.com/part-4-generic-python-implementation-of-gradient-descent-for-nn-optimization

Thank you for this series, very informative yet simple to understand.
I have some questions about building a gradient descent model:
1- What determines the number of hidden layers?
2- What determines the number of neurons inside each hidden layer?

Hello Hatem,

I am sorry for the late reply as there is no notification for replies to my posts.

These two questions can be answered easily for simple problems. Please check this tutorial on how to design a neural network for such cases: https://towardsdatascience.com/beginners-ask-how-many-hidden-layers-neurons-to-use-in-artificial-neural-networks-51466afa0d3e

As the problem complexity increases, there is no longer an exact rule for choosing the number of layers or neurons; they become hyperparameters to tune empirically.
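One common practical approach is to treat the hidden-layer size as a hyperparameter and compare a few candidates empirically. The sketch below (my own illustration, not from the tutorial) trains the same one-hidden-layer network with different neuron counts on a toy XOR task and reports the final loss of each:

```python
import numpy as np

def train(n_hidden, X, y, lr=0.5, steps=5000, seed=0):
    """Train a 1-hidden-layer sigmoid network with gradient descent; return final MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(steps):
        h = sig(X @ W1 + b1)                  # forward: hidden activations
        out = sig(h @ W2 + b2)                # forward: predictions
        d_out = (out - y) * out * (1 - out)   # backward: output delta
        d_h = (d_out @ W2.T) * h * (1 - h)    # backward: hidden delta
        W2 -= lr * h.T @ d_out                # gradient descent updates
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))

# Toy XOR dataset (illustrative assumption).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Compare several hidden-layer sizes and pick the best by final loss.
results = {n: train(n, X, y) for n in (1, 2, 4, 8)}
best = min(results, key=results.get)
print(results, "best:", best)
```

On real problems you would compare sizes on a held-out validation set rather than the training loss, but the search pattern is the same.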

Regards,
Ahmed