Course Content
Day-2: How to use VS Code (an IDE) for Python?
Day-3: Basics of Python Programming
This section will train you in the Python programming language.
Day-4: Data Visualization and Jupyter Notebooks
You will learn the basics of data visualization and Jupyter Notebooks in this section.
Day-5: Markdown Language
You will learn the whole Markdown language in this section.
Day-10: Data Wrangling and Data Visualization
Data wrangling and visualization are an important part of Exploratory Data Analysis, and we are going to learn both in this section.
Day-11: Data Visualization in Python
We will learn about data visualization in Python in detail.
Day-12,13: Exploratory Data Analysis (EDA)
EDA stands for Exploratory Data Analysis. It refers to the initial investigation and analysis of data to understand the key properties and patterns within the dataset.
Day-15: Data Wrangling Techniques (Beginner to Pro)
Data wrangling in Python.
Day-26: How to use Conda Environments?
We are going to learn about Conda environments and their use in this section.
Day-37: Time Series Analysis
In this section we will learn how to do time series analysis in Python.
Day-38: NLP (Natural Language Processing)
In this section we will learn the basics of NLP.
Day-39: Git and GitHub
We will learn about Git and GitHub in this section.
Day-40: Prompt Engineering (ChatGPT for Social Media Handling)
Being active on social media is everything these days, and in this section you will get exactly that training.
Python ka Chilla for Data Science (40 Days of Python for Data Science)
About Lesson

🤔 Activation Functions are an important part of Neural Networks!

🧠 Each neuron in a neural network performs a simple operation on the numbers it receives as input, essentially a weighted sum. But a stack of such linear operations is still just one big linear operation, so on its own the network could never learn complex patterns.

💡 This is where Activation Functions come in! They introduce nonlinearity to allow the network to learn complex patterns.

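To see why the nonlinearity matters, here is a minimal NumPy sketch (the weights are made up purely for illustration): stacking two linear layers with no activation collapses into a single linear layer, while putting an activation between them breaks that collapse.

```python
import numpy as np

# Made-up weights for two tiny "layers" (illustration only).
W1, b1 = np.array([[2.0, -1.0], [0.5, 1.0]]), np.array([1.0, 0.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([-0.5])

x = np.array([0.3, -0.7])

# Two linear layers stacked with NO activation...
y_linear = W2 @ (W1 @ x + b1) + b2

# ...are exactly equivalent to ONE linear layer with combined weights:
assert np.allclose(y_linear, (W2 @ W1) @ x + (W2 @ b1 + b2))

# Inserting a nonlinearity (ReLU here) between the layers breaks
# this collapse, so depth actually buys expressive power.
y_nonlinear = W2 @ np.maximum(0.0, W1 @ x + b1) + b2
```
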
😀 The most common one is the Rectified Linear Unit (ReLU). It outputs the input directly if it is positive, but outputs 0 if the input is negative: f(x) = max(0, x)

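ReLU is a one-liner in NumPy; this is just a quick sketch, not tied to any particular framework:

```python
import numpy as np

def relu(x):
    # Keep positive inputs unchanged, replace negatives with 0.
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
# -> [0.  0.  0.  1.5 3. ]
```
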
😮 On the positive side, outputs grow linearly as inputs increase; on the negative side, everything is clipped to zero. That single kink at zero adds just the right amount of nonlinearity!

🤓 Sigmoid and tanh are also popular. Sigmoid squashes numbers into the range 0 to 1: f(x) = 1 / (1 + e^(-x)). Tanh maps them into the range -1 to 1 and is just a shifted, rescaled sigmoid: tanh(x) = 2σ(2x) - 1

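Both are one-liners in NumPy as well. The sketch below also numerically checks the identity tanh(x) = 2σ(2x) - 1 from above:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 101)

# NumPy ships tanh directly; verify tanh(x) = 2*sigmoid(2x) - 1.
assert np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0)

print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # close to [0, 0.5, 1]
```
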
😎 These “squash” outputs into a fixed range, which controls growth and keeps activations from exploding during training.

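A tiny comparison (again just a sketch) of how a squashing function caps even huge pre-activation values, while ReLU passes them straight through:

```python
import numpy as np

big = np.array([10.0, 100.0, 1000.0])

print(np.tanh(big))          # saturates: all values are ~1.0
print(np.maximum(0.0, big))  # ReLU: [  10.  100. 1000.] keeps growing
```
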
🥳 With activation functions introducing nonlinearity, neural networks can learn incredibly complex patterns, just like our amazing brains! 🧠
