Course Content
Day-2: How to use VS Code (an IDE) for Python?
Day-3: Basics of Python Programming
This section will train you in the Python programming language.
Day-4: Data Visualization and Jupyter Notebooks
You will learn the basics of Data Visualization and Jupyter Notebooks in this section.
Day-5: Markdown Language
You will learn the whole Markdown language in this section.
Day-10: Data Wrangling and Data Visualization
Data Wrangling and Data Visualization are important parts of Exploratory Data Analysis, and we are going to learn them in this section.
Day-11: Data Visualization in Python
We will learn about Data Visualization in Python in detail.
Day-12,13: Exploratory Data Analysis (EDA)
Exploratory Data Analysis (EDA) refers to the initial investigation and analysis of data to understand the key properties and patterns within a dataset.
Day-15: Data Wrangling Techniques (Beginner to Pro)
Data Wrangling in Python.
Day-26: How to use Conda Environments?
We are going to learn about Conda environments and how to use them in this section.
Day-37: Time Series Analysis
In this section, we will learn how to do Time Series Analysis in Python.
Day-38: NLP (Natural Language Processing)
In this section, we will learn the basics of NLP.
Day-39: Git and GitHub
We will learn about Git and GitHub.
Day-40: Prompt Engineering (ChatGPT for Social Media Handling)
Staying active on social media is everything; this section gives you exactly that training.
Python ka Chilla for Data Science (40 Days of Python for Data Science)
About Lesson

Here are some common types of activation functions used in TensorFlow (a short code sketch follows the list):

💪 Rectified Linear Unit (ReLU): f(x) = max(0, x)

  • Most widely used activation function. Works well for both shallow and deep networks.

🙂 Sigmoid: f(x) = 1 / (1 + e^-x)

  • Squashes the output into the range 0 to 1. Used for probability predictions in the output layer.

🤙 Tanh (Hyperbolic Tangent): f(x) = (e^x - e^-x) / (e^x + e^-x)

  • Squashes output to range -1 to 1. ReLU often works better than tanh in hidden layers.

🤓 Softmax: f(x)_i = e^{x_i} / ∑_j e^{x_j}

  • Used for multi-class classification where the outputs represent class probabilities.

😎 Leaky ReLU: f(x) = max(αx, x) where α is a small positive value like 0.01

  • Mitigates the “dying ReLU” problem, where a unit stops learning because its inputs stay negative and its gradient is always zero.

😎 ELU (Exponential Linear Unit): f(x) = x for x > 0, f(x) = α(e^x - 1) for x < 0

  • Can work slightly better than ReLU because it allows small negative outputs, keeping mean activations closer to zero.
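
The formulas listed above can be checked directly in plain Python. Below is a minimal NumPy sketch, not part of the original lesson; the function names and the sample input values are chosen here only for illustration:

import numpy as np

def relu(x):
    # ReLU: f(x) = max(0, x)
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: f(x) = 1 / (1 + e^-x), output in (0, 1)
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # Tanh: f(x) = (e^x - e^-x) / (e^x + e^-x), output in (-1, 1)
    return np.tanh(x)

def softmax(x):
    # Softmax: f(x)_i = e^{x_i} / sum_j e^{x_j}; subtracting max(x) avoids overflow
    e = np.exp(x - np.max(x))
    return e / e.sum()

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: f(x) = max(alpha * x, x)
    return np.maximum(alpha * x, x)

def elu(x, alpha=1.0):
    # ELU: f(x) = x for x > 0, alpha * (e^x - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, fn in [("relu", relu), ("sigmoid", sigmoid), ("tanh", tanh),
                 ("softmax", softmax), ("leaky_relu", leaky_relu), ("elu", elu)]:
    print(f"{name:>10}: {np.round(fn(x), 4)}")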

So in summary, ReLU, sigmoid, and tanh are commonly used in hidden layers, while softmax is popular in output layers for multi-class classification tasks.
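
In practice you rarely write these formulas by hand; they are passed to layers by name or used as layer objects. The snippet below is a hedged sketch assuming TensorFlow 2.x is installed; the layer sizes, the 20-feature input, and the 10-class output are placeholder values, not part of the lesson:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),               # placeholder input size
    tf.keras.layers.Dense(64, activation="relu"),     # ReLU in a hidden layer
    tf.keras.layers.Dense(64, activation="tanh"),     # tanh in a hidden layer
    tf.keras.layers.Dense(64),
    tf.keras.layers.LeakyReLU(),                      # Leaky ReLU attached as its own layer
    tf.keras.layers.Dense(10, activation="softmax"),  # softmax output for 10 classes
])
model.summary()

# The same activations are also available as functions for quick inspection:
x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
print(tf.keras.activations.relu(x).numpy())
print(tf.keras.activations.sigmoid(x).numpy())
print(tf.keras.activations.elu(x).numpy())

Leaky ReLU is added as a separate layer here rather than by string name, since the layer form works consistently across TensorFlow 2.x versions.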

Join the conversation
SAQIB ALI 3 weeks ago
Output Layer...
Saad Khalid Abbasi 2 months ago
Output Layer
Faizan Ahmad 5 months ago
Types of Activation Functions that I know of so far in life:
Sigmoid Activation Function: 1/(1+e^-x)
Tanh Activation Function: (e^x - e^-x)/(e^x + e^-x)
ReLU Activation Function: max(0, x)
Leaky ReLU Activation Function: max(0.1x, x)
Parametric ReLU Activation Function: max(ax, x)
Softmax Activation Function: e(i) / sum of e(i...)
Faizan Ahmad 5 months ago
Output Layer ...
asfar zafar 11 months ago
output layer