Course Content
Day-2: How to use VS Code (an IDE) for Python?
Day-3: Basics of Python Programming
This section will train you in the Python programming language.
Day-4: Data Visualization and Jupyter Notebooks
You will learn the basics of Data Visualization and Jupyter Notebooks in this section.
Day-5: Markdown Language
You will learn the whole Markdown language in this section.
Day-10: Data Wrangling and Data Visualization
Data Wrangling and Visualization are an important part of Exploratory Data Analysis, and we are going to learn them in this section.
Day-11: Data Visualization in Python
We will learn about Data Visualization in Python in detail.
Day-12,13: Exploratory Data Analysis (EDA)
EDA stands for Exploratory Data Analysis. It refers to the initial investigation and analysis of data to understand the key properties and patterns within the dataset.
Day-15: Data Wrangling Techniques (Beginner to Pro)
Data Wrangling in Python.
Day-26: How to use Conda Environments?
We are going to learn about conda environments and their use in this section.
Day-37: Time Series Analysis
In this section we will learn how to do Time Series Analysis in Python.
Day-38: NLP (Natural Language Processing)
In this section we will learn the basics of NLP.
Day-39: Git and GitHub
We will learn about Git and GitHub.
Day-40: Prompt Engineering (ChatGPT for Social Media Handling)
Staying active on social media is everything these days; in this section you will get exactly that training.
Python ka Chilla for Data Science (40 Days of Python for Data Science)
About Lesson

Cross-validation is an important technique used in machine learning model evaluation and selection. Here’s a brief overview:

  • It is used to evaluate how the results of a statistical model will generalize to an independent dataset.

  • The dataset is divided into k groups, known as folds; typically k = 5 or 10.

  • One fold is used as the validation set to evaluate the model, while the remaining k-1 folds are used to train the model.

  • This process is repeated k times, each time using a different fold as the validation set.

  • The validation results are then averaged over all k trials to get an overall cross-validation estimate of how the model is expected to perform (see the code sketch after this list).

  • This helps address overfitting, that is, models that appear to perform well only because of a particular dataset split.

  • Common variants include k-fold CV, leave-one-out CV, and stratified k-fold CV, chosen depending on the problem.

  • It provides a nearly unbiased estimate of model performance on unseen data without requiring a separate hold-out test set.

  • It is popular in model selection for choosing hyperparameters that generalize better to new examples (a hyperparameter-search sketch follows the summary below).
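
As a concrete illustration of the procedure described in this list, here is a minimal sketch of 5-fold cross-validation. It assumes scikit-learn is installed; the built-in Iris dataset and a logistic regression model are placeholders for illustration, not part of the lesson itself.

```python
# Minimal k-fold cross-validation sketch (assumes scikit-learn is installed).
# The Iris dataset and logistic regression are placeholders for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)

# Split the data into k = 5 folds; shuffling avoids ordering effects.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

model = LogisticRegression(max_iter=1000)

# Each of the 5 rounds trains on 4 folds and validates on the held-out fold.
scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")

print("Accuracy per fold:", scores)
print("Mean CV accuracy:", scores.mean())
```

The mean of the five fold scores is the overall cross-validation estimate referred to above.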

So, in summary, cross-validation helps address overfitting and indicates how well a model can classify or predict unseen examples. It is a standard evaluation technique in machine learning.
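
Building on the model-selection point above, the following sketch shows how cross-validation can drive a hyperparameter search. It again assumes scikit-learn; the SVC model and the parameter grid are illustrative assumptions, not values prescribed by this lesson.

```python
# Hedged sketch: hyperparameter selection via cross-validation (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Stratified k-fold keeps class proportions similar in every fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Candidate hyperparameters; every combination is scored by 5-fold CV.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

search = GridSearchCV(SVC(), param_grid, cv=cv, scoring="accuracy")
search.fit(X, y)

print("Best hyperparameters:", search.best_params_)
print("Best mean CV accuracy:", search.best_score_)
```

GridSearchCV scores every parameter combination with the same 5-fold scheme and keeps the one with the best mean validation score, which is the "choose hyperparameters that generalize better" idea mentioned above.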
