Course Content
Day-2: How to use VS Code (an IDE) for Python?
Day-3: Basics of Python Programming
This section will train you in the Python programming language.
Day-4: Data Visualization and Jupyter Notebooks
You will learn the basics of Data Visualization and Jupyter Notebooks in this section.
Day-5: Markdown Language
You will learn the whole Markdown language in this section.
Day-10: Data Wrangling and Data Visualization
Data Wrangling and Visualization are important parts of Exploratory Data Analysis, and we are going to learn them in this section.
Day-11: Data Visualization in Python
We will learn about Data Visualization in Python in detail.
Day-12,13: Exploratory Data Analysis (EDA)
EDA stands for Exploratory Data Analysis. It refers to the initial investigation and analysis of data to understand the key properties and patterns within the dataset.
Day-15: Data Wrangling Techniques (Beginner to Pro)
Data Wrangling in Python.
Day-26: How to use Conda Environments?
We are going to learn about Conda environments and their use in this section.
Day-37: Time Series Analysis
In this section we will learn how to do Time Series Analysis in Python.
Day-38: NLP (Natural Language Processing)
In this section we will learn the basics of NLP.
Day-39: Git and GitHub
We will learn about Git and GitHub in this section.
Day-40: Prompt Engineering (ChatGPT for Social Media Handling)
Staying active on social media is everything; in this section you will get exactly that training.
Python ka Chilla for Data Science (40 Days of Python for Data Science)
About Lesson

Entropy in machine learning and information theory refers to the uncertainty or randomness in a random variable or dataset. Here are the key things to know about entropy:

  • It is a measure of impurity in a collection of examples: pure nodes have zero entropy, while mixed nodes have high entropy.

  • In binary classification, entropy is maximum (1) when the class distribution is 50-50 and minimum (0) when all examples belong to a single class.

  • The formula for the entropy E of a collection S, where p(x) is the proportion of examples belonging to class x, is (a runnable sketch follows this list):

E(S) = -Σ p(x) log2 p(x)

  • It is used as a splitting metric by decision tree algorithms such as ID3, C4.5, and CART to select the best attribute to split a node on.

  • The attribute that yields the greatest information gain (the largest reduction in entropy) after splitting is chosen for the split; a toy example appears after the summary below.

  • Low-entropy nodes or partitions mean the examples in them are classified with high certainty.

  • Thus, reducing entropy helps generate pure leaf nodes, improving the tree's predictive ability.
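
As a quick check of the formula above, here is a minimal Python sketch using only the standard library; the entropy function name and the toy label lists are illustrative, not from any particular library:

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    # Sum -p(x) * log2(p(x)) over the proportion p(x) of each class
    return -sum(
        (count / total) * math.log2(count / total)
        for count in Counter(labels).values()
    )

# Maximum entropy: a 50-50 binary class distribution
print(entropy(["yes", "no", "yes", "no"]))    # 1.0

# Minimum entropy: a pure node (prints -0.0, i.e. zero)
print(entropy(["yes", "yes", "yes", "yes"]))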

So in summary, entropy quantifies the uncertainty in data, and reducing it helps machine learning algorithms make smarter predictions with higher confidence.
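
Building on the entropy function in the sketch above, the split-selection idea can be illustrated as follows; information_gain and the toy partitions are assumptions for this example, using a simple list-of-labels representation rather than any particular library's API:

def information_gain(parent_labels, child_partitions):
    """Entropy of the parent minus the weighted average entropy of its children."""
    total = len(parent_labels)
    weighted_child_entropy = sum(
        (len(part) / total) * entropy(part) for part in child_partitions
    )
    return entropy(parent_labels) - weighted_child_entropy

parent = ["yes", "yes", "no", "no"]

# A split that separates the classes perfectly has the highest gain
print(information_gain(parent, [["yes", "yes"], ["no", "no"]]))    # 1.0

# A split that leaves both children mixed 50-50 gains nothing
print(information_gain(parent, [["yes", "no"], ["yes", "no"]]))    # 0.0

A decision tree learner such as ID3 would compute this gain for every candidate attribute and split on the one with the highest value.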
