Forward Propagation: The hidden layers, which sit between the input layer and the output layer of the network, receive weighted inputs. At each node in each hidden layer we compute the output of the activation function, and this output propagates to the next layer until we reach the final output layer. This flow from the inputs forward to the final output is known as forward propagation.
Back Propagation: Backpropagation sends error information from the network's last layer back to all of the weights within the network. It is a technique for fine-tuning the weights of a neural network based on the error rate from the previous epoch (i.e., iteration); by fine-tuning the weights, you lower the error rate and improve the model's generalization, making it more dependable. The process of backpropagation can be broken down into the following steps: first, propagate the training data through the network to generate an output. Then, compute the derivative of the error with respect to the output activations using the target and output values. Next, backpropagate to compute the derivative of the error with respect to the previous layer's output activations, and repeat this for all hidden layers. Using these derivatives, calculate the error derivative with respect to each layer's weights. Finally, update each layer's weights based on the error derivatives obtained from the layer that follows it.
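The two passes described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: the two-layer architecture, sigmoid activations, mean-squared-error loss, learning rate, and epoch count are all illustrative choices, and the XOR data set is just a small problem the network can learn.

```python
import numpy as np

# Toy data: the XOR problem (illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 0.5                       # illustrative learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Forward propagation: inputs flow layer by layer to the output."""
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    out = sigmoid(h @ W2 + b2)        # final output
    return h, out

_, out0 = forward(X)
initial_loss = np.mean((out0 - y) ** 2)

for epoch in range(5000):
    h, out = forward(X)

    # Back propagation: error derivatives flow from the output
    # back through the hidden layer (chain rule at each step).
    d_out = (out - y) * out * (1 - out)   # dLoss/d(pre-activation) at output
    d_h = (d_out @ W2.T) * h * (1 - h)    # propagated into the hidden layer

    # Update the weights using the error derivatives.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

_, out = forward(X)
final_loss = np.mean((out - y) ** 2)
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

Each training iteration performs exactly the two passes from the answer above: a forward pass to compute the output, then a backward pass that computes error derivatives layer by layer and uses them to adjust the weights, so the loss shrinks over the epochs.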
Posted Date: 2022-02-15 11:13:26
Explain the importance of LSTM.
What is Exploding Gradient Descent?
What is Vanishing Gradient? And how is this harmful?
What are some issues faced while training an RNN?
Explain the different Layers of CNN.
List a few advantages of TensorFlow.
Name a few deep learning frameworks.
What are Hyperparameters? Name a few used in any Neural Network.
What’s the difference between a feed-forward and a backpropagation neural network?
Why is Weight Initialization important in Neural Networks?
Which is Better: Deep Networks or Shallow Ones? And Why?
What Is Data Normalization And Why Do We Need It?
What are the different parts of a multi-layer perceptron?
What is a Multi-Layer Perceptron?
What are the shortcomings of a single layer perceptron?
What are the steps for using a gradient descent algorithm?
What are the benefits of mini-batch gradient descent?
What is the significance of a Cost/Loss function?
Explain the Learning Process of a Perceptron.
What are activation functions?
What is the role of weights and bias?
What is Perceptron? And How does it Work?
Do you think Deep Learning is Better than Machine Learning? If so, why?
Which deep learning algorithm is the best for face detection?
Explain Stochastic Gradient Descent. How is it different from Batch Gradient Descent?
Explain Batch Gradient Descent.
In a Convolutional Neural Network (CNN), how can you fix the constant validation accuracy?
Explain the difference between a shallow network and a deep network.
What is a tensor in deep learning?
What are the advantages of transfer learning?
Explain transfer learning in the context of deep learning.
What do you mean by hyperparameters in the context of deep learning?
Explain Data Normalisation. What is the need for it?
Explain Forward and Back Propagation in the context of deep learning.
What do you understand about gradient clipping in the context of deep learning?
What do you mean by end-to-end learning?
What are the different types of deep neural networks?
Explain what a deep neural network is.
What are the disadvantages of neural networks?
What are the advantages of neural networks?
What are the applications of deep learning?
Differentiate between AI, Machine Learning and Deep Learning.