Course Content
How and Why to Register
To register for the six-month AI and Data Science Mentorship Program, click this link and fill out the form there: https://shorturl.at/fuMX6
Day-17: Complete EDA on Google PlayStore Apps
Day-25: Quiz Time, Data Visualization-4
Day-27: Data Scaling/Normalization/Standardization and Encoding
Day-30: NumPy (Part-3)
Day-31: NumPy (Part-4)
Day-32a: NumPy (Part-5)
Day-32b: Data Preprocessing / Data Wrangling
Day-37: Algebra in Data Science
Day-56: Statistics for Data Science (Part-5)
Day-69: Machine Learning (Part-3)
Day-75: Machine Learning (Part-9)
Day-81: Machine Learning (Part-15)-Evaluation Metrics
Day-82: Machine Learning (Part-16)-Metrics for Classification
Day-85: Machine Learning (Part-19)
Day-89: Machine Learning (Part-23)
Day-91: Machine Learning (Part-25)
Day-93: Machine Learning (Part-27)
Day-117: Deep Learning (Part-14)-Complete CNN Project
Day-119: Deep Learning (Part-16)-Natural Language Processing (NLP)
Day-121: Time Series Analysis (Part-1)
Day-123: Time Series Analysis (Part-3)
Day-128: Time Series Analysis (Part-8): Complete Project
Day-129: git & GitHub Crash Course
Day-131: Improving Machine/Deep Learning Model’s Performance
Day-133: Transfer Learning and Pre-trained Models (Part-2)
Day-134: Transfer Learning and Pre-trained Models (Part-3)
Day-137: Generative AI (Part-3)
Day-139: Generative AI (Part-5)-Tensorboard
Day-145: Streamlit for webapp development and deployment (Part-1)
Day-146: Streamlit for webapp development and deployment (Part-2)
Day-147: Streamlit for webapp development and deployment (Part-3)
Day-148: Streamlit for webapp development and deployment (Part-4)
Day-149: Streamlit for webapp development and deployment (Part-5)
Day-150: Streamlit for webapp development and deployment (Part-6)
Day-151: Streamlit for webapp development and deployment (Part-7)
Day-152: Streamlit for webapp development and deployment (Part-8)
Day-153: Streamlit for webapp development and deployment (Part-9)
Day-154: Streamlit for webapp development and deployment (Part-10)
Day-155: Streamlit for webapp development and deployment (Part-11)
Day-156: Streamlit for webapp development and deployment (Part-12)
Day-157: Streamlit for webapp development and deployment (Part-13)
How to Earn Using Data Science and AI Skills
Day-160: Flask for web app development (Part-3)
Day-161: Flask for web app development (Part-4)
Day-162: Flask for web app development (Part-5)
Day-163: Flask for web app development (Part-6)
Day-164: Flask for web app development (Part-7)
Day-165: Flask for web app deployment (Part-8)
Day-167: FastAPI (Part-2)
Day-168: FastAPI (Part-3)
Day-169: FastAPI (Part-4)
Day-170: FastAPI (Part-5)
Day-171: FastAPI (Part-6)
Day-174: FastAPI (Part-9)
Six months of AI and Data Science Mentorship Program
    Join the conversation
    Rana Anjum Sharif 3 weeks ago
    Done
    Muhammad Rameez 4 weeks ago
    Done
    hasaan khan 4 months ago
    This is the code with the Titanic dataset:

    import pandas as pd
    import numpy as np
    import seaborn as sns
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Load the Titanic dataset bundled with seaborn
    titanic_data = sns.load_dataset("titanic")

    # Use 'age' and 'fare' as predictor variables and 'survived' as the target
    X = titanic_data[['age', 'fare']].copy()
    y = titanic_data['survived']

    # Handle missing values by imputing the column means
    X.fillna(X.mean(), inplace=True)

    # Split the data into training and testing sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Perform polynomial feature transformation
    poly_features = PolynomialFeatures(degree=2)
    X_train_poly = poly_features.fit_transform(X_train)
    X_test_poly = poly_features.transform(X_test)

    # Fit the polynomial regression model
    poly_reg = LinearRegression()
    poly_reg.fit(X_train_poly, y_train)

    # Make predictions
    y_train_pred = poly_reg.predict(X_train_poly)
    y_test_pred = poly_reg.predict(X_test_poly)

    # Evaluate the model with RMSE on the train and test sets
    train_rmse = np.sqrt(mean_squared_error(y_train, y_train_pred))
    test_rmse = np.sqrt(mean_squared_error(y_test, y_test_pred))
    print("Train RMSE:", train_rmse)
    print("Test RMSE:", test_rmse)
    tayyab Ali 5 months ago
    I have completed this lecture with 100% practice.
    Shahid Umar 5 months ago
    Python code for drawing a polynomial regression line?
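    A minimal sketch of what that could look like, using NumPy, scikit-learn, and Matplotlib; the synthetic quadratic data and the degree-2 fit are illustrative assumptions, not taken from the lecture:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    # Synthetic data: a noisy quadratic relationship (illustrative only)
    rng = np.random.default_rng(42)
    x = np.linspace(-3, 3, 100).reshape(-1, 1)
    y = 0.5 * x.ravel() ** 2 + x.ravel() + rng.normal(scale=1.0, size=100)

    # Fit a degree-2 polynomial regression model
    poly = PolynomialFeatures(degree=2)
    x_poly = poly.fit_transform(x)
    model = LinearRegression().fit(x_poly, y)

    # Predict on a fine grid so the regression line is smooth
    x_grid = np.linspace(-3, 3, 200).reshape(-1, 1)
    y_grid = model.predict(poly.transform(x_grid))

    # Plot the data points and the fitted polynomial regression line
    plt.scatter(x, y, alpha=0.5, label="data")
    plt.plot(x_grid, y_grid, color="red", label="degree-2 fit")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.show()

    Higher-degree curves can be tried by changing the degree parameter, at the risk of overfitting.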
    Sibtain Ali 5 months ago
    AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) are statistical measures used for model selection and comparison in statistics and machine learning. Both evaluate how well a model fits the data while penalizing overly complex models.
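    For reference, AIC = 2k - 2*ln(L) and BIC = k*ln(n) - 2*ln(L), where k is the number of estimated parameters, n the number of observations, and L the maximized likelihood; lower values are better. Below is a minimal sketch of comparing two regression fits by AIC/BIC, assuming statsmodels is installed (the synthetic data and model choices are illustrative only):

    import numpy as np
    import statsmodels.api as sm

    # Illustrative data: y depends quadratically on x, plus noise
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=200)
    y = 1.0 + 2.0 * x + 0.5 * x ** 2 + rng.normal(scale=1.0, size=200)

    # Model 1: simple linear fit (intercept + x)
    X1 = sm.add_constant(np.column_stack([x]))
    fit1 = sm.OLS(y, X1).fit()

    # Model 2: quadratic fit (intercept + x + x^2)
    X2 = sm.add_constant(np.column_stack([x, x ** 2]))
    fit2 = sm.OLS(y, X2).fit()

    # Lower AIC/BIC indicates a better trade-off between fit and complexity
    print("Linear    AIC:", fit1.aic, " BIC:", fit1.bic)
    print("Quadratic AIC:", fit2.aic, " BIC:", fit2.bic)

    On this data the quadratic model should score lower (better) on both criteria, since the extra squared term captures real structure rather than noise.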