RE: Are inductive biases necessary in neural networks? Can you point me to some of the relevant literature?

Is this the case?

Yes, inductive biases are indeed necessary in neural networks. An inductive bias in machine learning is the set of assumptions a learner uses to predict outputs for inputs it has not encountered. In the case of neural networks, these biases are what allow the model to generalize effectively from finite training data.

Neural networks have two primary sources of inductive bias:

1. **Architectural inductive bias**: the decisions about the network's architecture, such as the number of layers, the number of neurons per layer, and the connectivity pattern. These decisions determine how complex the functions are that your network can represent.

2. **Algorithmic inductive bias**: the bias introduced by the learning algorithm, such as gradient descent with backpropagation used to tune the weights and biases. This affects which specific function, within the representational capacity of the architecture, is actually learned for a given dataset.

For the mathematical underpinnings and a deeper understanding, you could start with the book "Understanding Machine Learning: From Theory to Algorithms" by Shai Shalev-Shwartz and Shai Ben-David. You may also want to explore papers such as "Inductive Bias of Neural Networks" by François Chollet and "The Implicit Bias of Gradient Descent on Separable Data" by Daniel Soudry et al. to gain more insight.

Keep in mind that choosing the right inductive biases for a neural network is more of an art than a science, and it usually requires plenty of machine learning experience or trial and error.
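To make the architectural point concrete, here is a minimal sketch (using plain NumPy, not any particular deep learning framework) of the classic example: a convolutional layer's weight sharing builds in translation equivariance, so shifting the input simply shifts the output. A fully connected layer with arbitrary weights makes no such assumption. The function name `circular_conv1d` is just illustrative; circular (wrap-around) convolution is used so the equivariance holds exactly at the boundaries.

```python
import numpy as np

def circular_conv1d(x, w):
    """1-D circular convolution: the weight-sharing structure of a conv layer."""
    n, k = len(x), len(w)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(k)) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # a toy input signal
w = rng.standard_normal(3)   # a shared filter (the architectural bias)

# Translation equivariance: convolving a shifted input gives a shifted output.
shift = 3
y_from_shifted_input = circular_conv1d(np.roll(x, shift), w)
shifted_output = np.roll(circular_conv1d(x, w), shift)
assert np.allclose(y_from_shifted_input, shifted_output)
```

This is exactly the kind of assumption meant above: by choosing a convolutional architecture, you assert in advance that the position of a pattern in the input should not matter, which narrows the hypothesis space before any training happens.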
Answered on August 2, 2023.
