Join the conversation
Done
The main discussion in this lecture covers the key points of the multi-layer perceptron, i.e., forward propagation, backward propagation, the learning rate, and activation functions.
In this lecture I learned about the multi-layer perceptron (MLP), which has three kinds of layers:
1. Input layer (receives the input signal, which is the data)
2. Hidden layer (one or more hidden layers)
3. Output layer (produces the final output)

I also learned the key points of how it works:
1. Forward propagation (signals travel from the input layer forward to the output; the activation function is used in the hidden and output layers only)
2. Backpropagation (the weights of the neurons are updated through the backpropagation method)
3. Learning rate (controls how much the weights are adjusted during backpropagation)
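The forward propagation described above can be sketched in Python with NumPy. This is only a minimal illustration: the layer sizes (3 inputs, 4 hidden units, 1 output), the random weights, and the choice of sigmoid as the activation are my own assumptions, not from the lecture.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real value into (0, 1); applied in hidden and output layers
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy sizes: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
b1 = np.zeros(4)               # hidden-layer biases
W2 = rng.normal(size=(1, 4))   # hidden -> output weights
b2 = np.zeros(1)               # output-layer bias

def forward(x):
    # Forward propagation: the signal travels input -> hidden -> output,
    # with the activation function used in the hidden and output layers only
    h = sigmoid(W1 @ x + b1)   # hidden layer
    y = sigmoid(W2 @ h + b2)   # output layer
    return y

x = np.array([0.5, -1.0, 2.0])
print(forward(x))  # a single output value in (0, 1)
```

Note that the input layer itself does no computation; it only passes the data to the first weighted layer.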
In this video I learned about the multi-layer perceptron (MLP), which has three kinds of layers:
1. Input layer (receives the input signal, which is the data)
2. Hidden layer (one or more hidden layers)
3. Output layer (produces the final output)

I also learned the key points of how it works:
1. Forward propagation (signals travel from the input layer forward to the output; the activation function is used in the hidden and output layers only)
2. Backpropagation (the weights of the neurons are updated through the backpropagation method)
3. Learning rate (controls how much the weights are adjusted during backpropagation)
AOA. In this lecture I learned about the multi-layer perceptron (MLP), a type of neural network that consists of more than one layer of neurons. Unlike a single-layer perceptron, which can only learn linearly separable patterns, a multi-layer perceptron can learn more complex, non-linear functions. This makes it a fundamental model in the fields of deep learning and neural networks. I also learned the structure of multi-layer perceptrons:
1. Input layer (receives the input signal, which is the data)
2. Hidden layer (one or more hidden layers)
3. Output layer (produces the final output)

And the key points of how it works:
1. Forward propagation (signals travel from the input layer forward to the output; the activation function is used in the hidden and output layers only)
2. Backpropagation (the weights of the neurons are updated through the backpropagation method)
3. Learning rate (controls how much the weights are adjusted during backpropagation)
4. Activation function (by mistake, this was numbered three in the lecture): functions like Sigmoid, ReLU, or Tanh allow the MLP to learn more complex patterns.

May Allah grant you a long life of health and well-being, bless you with the good of both worlds, and grant your respected father a high place in Jannah, Ameen.
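To illustrate how the learning rate controls the size of the weight adjustment during backpropagation, here is a single gradient-descent step for one sigmoid neuron with a squared-error loss. The input, target, initial weights, and learning-rate value are my own toy assumptions for the sketch, not values from the lecture.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation, one of the functions mentioned in the lecture
    return 1.0 / (1.0 + np.exp(-z))

# Toy setup: one neuron, loss L = 0.5 * (y - t)^2
x = np.array([1.0, 2.0])   # input
t = 1.0                    # target output
w = np.array([0.1, -0.2])  # initial weights
lr = 0.5                   # learning rate: scales every weight adjustment

y = sigmoid(w @ x)                 # forward propagation
grad = (y - t) * y * (1 - y) * x   # dL/dw via the chain rule (backpropagation)
w = w - lr * grad                  # update: a larger lr means a bigger step
```

A small learning rate makes training slow but stable; a large one takes bigger steps and can overshoot the minimum, which is why it is treated as a key tuning knob.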