Course Content
How and Why to Register
Dear learner, to register for the 6-month AI and Data Science Mentorship Program, click this link and fill in the form there: https://shorturl.at/fuMX6
Day-17: Complete EDA on Google Play Store Apps
Day-25: Quiz Time, Data Visualization-4
Day-27: Data Scaling/Normalization/Standardization and Encoding
Day-30: NumPy (Part-3)
Day-31: NumPy (Part-4)
Day-32a: NumPy (Part-5)
Day-32b: Data Preprocessing / Data Wrangling
Day-37: Algebra in Data Science
Day-56: Statistics for Data Science (Part-5)
Day-69: Machine Learning (Part-3)
Day-75: Machine Learning (Part-9)
Day-81: Machine Learning (Part-15)-Evaluation Metrics
Day-82: Machine Learning (Part-16)-Metrics for Classification
Day-85: Machine Learning (Part-19)
Day-89: Machine Learning (Part-23)
Day-91: Machine Learning (Part-25)
Day-93: Machine Learning (Part-27)
Day-117: Deep Learning (Part-14)-Complete CNN Project
Day-119: Deep Learning (Part-16)-Natural Language Processing (NLP)
Day-121: Time Series Analysis (Part-1)
Day-123: Time Series Analysis (Part-3)
Day-128: Time Series Analysis (Part-8): Complete Project
Day-129: Git & GitHub Crash Course
Day-131: Improving Machine/Deep Learning Model’s Performance
Day-133: Transfer Learning and Pre-trained Models (Part-2)
Day-134: Transfer Learning and Pre-trained Models (Part-3)
Day-137: Generative AI (Part-3)
Day-139: Generative AI (Part-5)-TensorBoard
Day-145: Streamlit for web app development and deployment (Part-1)
Day-146: Streamlit for web app development and deployment (Part-2)
Day-147: Streamlit for web app development and deployment (Part-3)
Day-148: Streamlit for web app development and deployment (Part-4)
Day-149: Streamlit for web app development and deployment (Part-5)
Day-150: Streamlit for web app development and deployment (Part-6)
Day-151: Streamlit for web app development and deployment (Part-7)
Day-152: Streamlit for web app development and deployment (Part-8)
Day-153: Streamlit for web app development and deployment (Part-9)
Day-154: Streamlit for web app development and deployment (Part-10)
Day-155: Streamlit for web app development and deployment (Part-11)
Day-156: Streamlit for web app development and deployment (Part-12)
Day-157: Streamlit for web app development and deployment (Part-13)
How to Earn using Data Science and AI Skills
Day-160: Flask for web app development (Part-3)
Day-161: Flask for web app development (Part-4)
Day-162: Flask for web app development (Part-5)
Day-163: Flask for web app development (Part-6)
Day-164: Flask for web app development (Part-7)
Day-165: Flask for web app deployment (Part-8)
Day-167: FastAPI (Part-2)
Day-168: FastAPI (Part-3)
Day-169: FastAPI (Part-4)
Day-170: FastAPI (Part-5)
Day-171: FastAPI (Part-6)
Day-174: FastAPI (Part-9)
Six months of AI and Data Science Mentorship Program
    Join the conversation
    Najeeb Ullah 7 months ago
    done
    Muhammad Faizan 9 months ago
    I learned how to select the best model and how to do hyperparameter tuning on datasets.
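    (A minimal sketch of the grid-search hyperparameter tuning mentioned above; the synthetic dataset and the depth grid here are illustrative assumptions, not from the course.)

    ```python
    from sklearn.datasets import make_regression
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.tree import DecisionTreeRegressor

    # Synthetic regression data, purely for illustration
    X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Search over tree depths with 5-fold cross-validation
    grid = GridSearchCV(DecisionTreeRegressor(random_state=42),
                        {"max_depth": [2, 4, 8, None]}, cv=5)
    grid.fit(X_train, y_train)

    print("Best params:", grid.best_params_)
    print("Test R^2:", grid.score(X_test, y_test))
    ```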
    Rana Anjum Sharif 11 months ago
    Done
    Muhammad Rameez 11 months ago
    Done
    hasaan khan 1 year ago
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import GridSearchCV
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor

    # Define a list of models with their respective parameter grids
    # (assumes X_train and y_train are already defined from the lesson's dataset)
    models = [
        ('LinearRegressor', Pipeline([
            ('preprocessor', StandardScaler()),
            ('model', LinearRegression())
        ]), {}),
        ('DecisionTreeRegressor', Pipeline([
            ('preprocessor', StandardScaler()),
            ('model', DecisionTreeRegressor())
        ]), {'model__max_depth': [None, 50, 100],
             'model__criterion': ['squared_error', 'absolute_error']}),
        ('SVR', Pipeline([
            ('preprocessor', StandardScaler()),
            ('model', SVR())
        ]), {'model__kernel': ['rbf', 'sigmoid'],
             'model__C': [0.1, 1, 0.01]}),
        ('RandomForestRegressor', Pipeline([
            ('preprocessor', StandardScaler()),
            ('model', RandomForestRegressor())
        ]), {'model__max_depth': [None, 5, 10]})
    ]

    best_model = None
    best_score = float('-inf')  # R^2 can be negative, so don't start at 0

    # Loop through each model and its parameter grid
    for model_name, model, param_grid in models:
        if param_grid:
            # Perform grid search if a parameter grid is provided
            grid_search = GridSearchCV(model, param_grid, cv=5)
            grid_search.fit(X_train, y_train)
            score = grid_search.best_score_
            print(f"Best parameters for {model_name}: {grid_search.best_params_}")
            print(f"Best CV score for {model_name}: {score}")
            if score > best_score:
                best_model = grid_search.best_estimator_
                best_score = score
        else:
            # Fit the model directly without grid search
            model.fit(X_train, y_train)
            score = model.score(X_train, y_train)
            print(f"Score for {model_name}: {score}")
            if score > best_score:
                best_model = model
                best_score = score

    print(f"The best model is {best_model}")
    tayyab Ali 1 year ago
    I have done this lecture with 100% practice.
    Sibtain Ali 1 year ago
    I have done this video with 100% practice.