Course Content
How and Why to Register
Dear students, to register for the six-month AI and Data Science Mentorship Program, click the link below and fill in the form given there: https://shorturl.at/fuMX6
Day-17: Complete EDA on Google PlayStore Apps
Day-25: Quiz Time, Data Visualization-4
Day-27: Data Scaling/Normalization/Standardization and Encoding
Day-30: NumPy (Part-3)
Day-31: NumPy (Part-4)
Day-32a: NumPy (Part-5)
Day-32b: Data Preprocessing / Data Wrangling
Day-37: Algebra in Data Science
Day-56: Statistics for Data Science (Part-5)
Day-69: Machine Learning (Part-3)
Day-75: Machine Learning (Part-9)
Day-81: Machine Learning (Part-15)-Evaluation Metrics
Day-82: Machine Learning (Part-16)-Metrics for Classification
Day-85: Machine Learning (Part-19)
Day-89: Machine Learning (Part-23)
Day-91: Machine Learning (Part-25)
Day-93: Machine Learning (Part-27)
Day-117: Deep Learning (Part-14)-Complete CNN Project
Day-119: Deep Learning (Part-16)-Natural Language Processing (NLP)
Day-121: Time Series Analysis (Part-1)
Day-123: Time Series Analysis (Part-3)
Day-128: Time Series Analysis (Part-8): Complete Project
Day-129: Git & GitHub Crash Course
Day-131: Improving Machine/Deep Learning Model’s Performance
Day-133: Transfer Learning and Pre-trained Models (Part-2)
Day-134: Transfer Learning and Pre-trained Models (Part-3)
Day-137: Generative AI (Part-3)
Day-139: Generative AI (Part-5)-TensorBoard
Day-145: Streamlit for web app development and deployment (Part-1)
Day-146: Streamlit for web app development and deployment (Part-2)
Day-147: Streamlit for web app development and deployment (Part-3)
Day-148: Streamlit for web app development and deployment (Part-4)
Day-149: Streamlit for web app development and deployment (Part-5)
Day-150: Streamlit for web app development and deployment (Part-6)
Day-151: Streamlit for web app development and deployment (Part-7)
Day-152: Streamlit for web app development and deployment (Part-8)
Day-153: Streamlit for web app development and deployment (Part-9)
Day-154: Streamlit for web app development and deployment (Part-10)
Day-155: Streamlit for web app development and deployment (Part-11)
Day-156: Streamlit for web app development and deployment (Part-12)
Day-157: Streamlit for web app development and deployment (Part-13)
How to Earn Using Data Science and AI Skills
Day-160: Flask for web app development (Part-3)
Day-161: Flask for web app development (Part-4)
Day-162: Flask for web app development (Part-5)
Day-163: Flask for web app development (Part-6)
Day-164: Flask for web app development (Part-7)
Day-165: Flask for web app deployment (Part-8)
Day-167: FastAPI (Part-2)
Day-168: FastAPI (Part-3)
Day-169: FastAPI (Part-4)
Day-170: FastAPI (Part-5)
Day-171: FastAPI (Part-6)
Day-174: FastAPI (Part-9)
Six months of AI and Data Science Mentorship Program
    Join the conversation
    Muhammad_Faizan 2 weeks ago
    Normalizing data involves transforming it to a standard scale without distorting differences in the ranges of values.
    1. Min-Max Scaling (Normalization): scales the data to a fixed range, usually 0 to 1 or -1 to 1.
    2. Z-Score Standardization: transforms the data to have a mean of 0 and a standard deviation of 1.
    3. Robust Scaler: uses the median and the interquartile range (IQR) for scaling, making it robust to outliers.
    4. Max Abs Scaler: scales the data by its maximum absolute value, preserving sparsity (useful for sparse data such as text represented as TF-IDF).
    5. Decimal Scaling: moves the decimal point of the feature's values; the number of places moved depends on the feature's maximum absolute value.
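    A minimal pure-Python sketch of the first, second, and fourth techniques above (function names are my own illustration, not from the course material):

    ```python
    import statistics

    def min_max_scale(xs, lo=0.0, hi=1.0):
        """Min-max scaling: map values linearly onto [lo, hi]."""
        x_min, x_max = min(xs), max(xs)
        return [lo + (x - x_min) * (hi - lo) / (x_max - x_min) for x in xs]

    def z_score_standardize(xs):
        """Z-score standardization: mean 0, (population) standard deviation 1."""
        mu = statistics.mean(xs)
        sigma = statistics.pstdev(xs)
        return [(x - mu) / sigma for x in xs]

    def max_abs_scale(xs):
        """Max-abs scaling: divide by the largest absolute value; zeros stay zero."""
        m = max(abs(x) for x in xs)
        return [x / m for x in xs]

    data = [10, 20, 30, 40, 50]
    print(min_max_scale(data))  # → [0.0, 0.25, 0.5, 0.75, 1.0]
    ```

    In practice these are usually taken from a library such as scikit-learn rather than hand-written; the sketch is just to make the arithmetic behind each technique concrete.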
    Muhammad_Faizan 2 weeks ago
    Choosing the right normalization method:
    - Min-Max Scaling: useful when you want data within a specific range (e.g., 0 to 1); sensitive to outliers.
    - Z-Score Standardization: preferred when the data has a Gaussian (normal) distribution; less sensitive to outliers than min-max scaling.
    - Robust Scaler: best when dealing with data that has many outliers.
    - Max Abs Scaler: suitable for data that is sparse or has large variations in scale.
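    The outlier sensitivity mentioned above can be demonstrated with a small pure-Python sketch (function names are my own): with one extreme value present, min-max scaling squashes the rest of the data toward 0, while a median/IQR robust scaler preserves its spread.

    ```python
    import statistics

    def min_max_scale(xs):
        """Min-max scaling onto [0, 1]; the max (here the outlier) sets the scale."""
        x_min, x_max = min(xs), max(xs)
        return [(x - x_min) / (x_max - x_min) for x in xs]

    def robust_scale(xs):
        """Center on the median and divide by the interquartile range (IQR)."""
        med = statistics.median(xs)
        q1, _, q3 = statistics.quantiles(xs, n=4, method="inclusive")
        return [(x - med) / (q3 - q1) for x in xs]

    data = [1, 2, 3, 4, 1000]      # one extreme outlier
    print(min_max_scale(data))     # bulk of the data crushed near 0
    print(robust_scale(data))      # bulk of the data keeps its spread
    ```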
    Muhammad_Faizan 2 weeks ago
    The normal distribution is crucial in data science and data analysis for several reasons: the Central Limit Theorem (CLT), statistical inference, simplification and approximation, prediction and error analysis, natural phenomena, probabilistic interpretations, and parameter estimation.
    Muhammad Rameez 1 month ago
    Done
    Rana Anjum Sharif 2 months ago
    Done
    Liaqat Ali 4 months ago
    Yes
    kashan malik 5 months ago
    DONE SIR
    Shahid Umar 7 months ago
    I like it
    Sibtain Ali 7 months ago
    Why is the normal distribution important in data science/data analysis? In summary, the normal distribution is important in data science because it is a fundamental concept used in many statistical analyses, including hypothesis testing, regression analysis, and confidence intervals.
    Sibtain Ali 7 months ago
    How to normalize data? There are several ways to normalize data, but one of the most common methods is min-max normalization.
    tayyab Ali 7 months ago
    Q1. Why is the normal distribution important in data science/data analysis? The normal distribution, also known as the Gaussian distribution or bell curve, is essential in data science and data analysis for several reasons: its common occurrence in nature, the Central Limit Theorem (CLT), statistical inference, parameter estimation, z-scores and percentiles, machine learning algorithms, and quality control / Six Sigma.
    tayyab Ali 7 months ago
    Q2. How to normalize the data? There are several ways to normalize data, but one of the most common methods is min-max normalization.
    Najeeb Ullah 7 months ago
    Learned about distributions of data. Done.