
The activation functions we have covered so far are:
1. Linear activation function - for regression problems (mostly used in the output layer)
2. Sigmoid/logistic function - for binary classification problems (mostly used in the output layer)
3. Tanh (hyperbolic tangent) - for outputs between -1 and 1 (mostly used in hidden layers)
4. ReLU (Rectified Linear Unit) and its extensions - the most widely used in hidden layers
5. Softmax - for multiclass classification (mostly used in the output layer)

In the hidden layers: ReLU (or its extensions), tanh, or sigmoid for specific use cases.
In the output layer: linear activation, sigmoid, or softmax (see the sketch below).
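As a concrete illustration, here is a minimal sketch of where these activations typically sit in a network. It assumes TensorFlow/Keras; the layer sizes, input dimension, and class count are made up for illustration, not taken from the lecture.

```python
import tensorflow as tf

# Hypothetical 10-class classifier on 20 input features (sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer: ReLU
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer: ReLU
    tf.keras.layers.Dense(10, activation="softmax"), # output layer: softmax for multiclass
])

# For regression the output layer would instead be Dense(1, activation="linear");
# for binary classification, Dense(1, activation="sigmoid").
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```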

## Summary of Common Activation Functions
ReLU: CNNs, Transformers, general-purpose hidden layers.
Tanh: RNNs, LSTMs, GRUs, data reconstruction.
Sigmoid: Binary classification, gates in RNNs/LSTMs, multi-label classification.
Softmax: Multiclass classification, attention mechanisms.
Linear: Regression tasks.
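To make this summary concrete, here is a small NumPy sketch of the functions themselves. These are my own illustrative implementations, not code from the lecture.

```python
import numpy as np

def linear(x):
    return x                          # identity: used for regression outputs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes inputs to (0, 1)

def tanh(x):
    return np.tanh(x)                 # squashes inputs to (-1, 1)

def relu(x):
    return np.maximum(0.0, x)         # zero for negatives, identity for positives

def softmax(x):
    e = np.exp(x - np.max(x))         # subtract max for numerical stability
    return e / e.sum()                # probabilities that sum to 1

z = np.array([1.0, -2.0, 0.5])
print(relu(z), softmax(z))
```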

Done with this lecture.

Done, sir. JazakAllah (thank you).

Done

What is the difference between multilabel and multiclass classification in deep learning?
Multiclass classification assigns one exclusive class label to each instance, while multilabel classification allows for multiple labels to be assigned to the same instance, reflecting the complex relationships and diversity often found in real-world datasets.
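To make the distinction concrete, here is a small sketch contrasting the two output setups. It assumes TensorFlow/Keras; the feature count, class count, and layer sizes are illustrative assumptions, not from the lecture.

```python
import tensorflow as tf

NUM_FEATURES, NUM_CLASSES = 20, 5  # illustrative sizes

def make_model(output_activation, loss):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation=output_activation),
    ])
    model.compile(optimizer="adam", loss=loss)
    return model

# Multiclass: exactly one class per instance.
# Softmax output + categorical cross-entropy; targets are one-hot, e.g. [0, 0, 1, 0, 0].
multiclass_model = make_model("softmax", "categorical_crossentropy")

# Multilabel: any subset of classes per instance.
# Independent sigmoids + binary cross-entropy; targets are multi-hot, e.g. [1, 0, 1, 0, 1].
multilabel_model = make_model("sigmoid", "binary_crossentropy")
```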

I have done this lecture.

An important takeaway from this lecture is to study ELUs (Exponential Linear Units) on your own.
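As a starting point for that self-study: the ELU is defined as f(x) = x for x > 0 and alpha * (exp(x) - 1) for x <= 0. A tiny NumPy sketch (the alpha value and test inputs are chosen purely for illustration):

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs, smooth exponential curve toward -alpha for negatives.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(elu(np.array([-2.0, -0.5, 0.0, 1.5])))
```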

Done

Done