Course Content
How and Why to Register
To register for the six-month AI and Data Science Mentorship Program, click this link and fill out the form provided there: https://shorturl.at/fuMX6
Day-17: Complete EDA on Google Play Store Apps
Day-25: Quiz Time, Data Visualization-4
Day-27: Data Scaling/Normalization/Standardization and Encoding
Day-30: NumPy (Part-3)
Day-31: NumPy (Part-4)
Day-32a: NumPy (Part-5)
Day-32b: Data Preprocessing / Data Wrangling
Day-37: Algebra in Data Science
Day-56: Statistics for Data Science (Part-5)
Day-69: Machine Learning (Part-3)
Day-75: Machine Learning (Part-9)
Day-81: Machine Learning (Part-15)-Evaluation Metrics
Day-82: Machine Learning (Part-16)-Metrics for Classification
Day-85: Machine Learning (Part-19)
Day-89: Machine Learning (Part-23)
Day-91: Machine Learning (Part-25)
Day-93: Machine Learning (Part-27)
Day-117: Deep Learning (Part-14)-Complete CNN Project
Day-119: Deep Learning (Part-16)-Natural Language Processing (NLP)
Day-121: Time Series Analysis (Part-1)
Day-123: Time Series Analysis (Part-3)
Day-128: Time Series Analysis (Part-8): Complete Project
Day-129: git & GitHub Crash Course
Day-131: Improving Machine/Deep Learning Model’s Performance
Day-133: Transfer Learning and Pre-trained Models (Part-2)
Day-134: Transfer Learning and Pre-trained Models (Part-3)
Day-137: Generative AI (Part-3)
Day-139: Generative AI (Part-5)-TensorBoard
Day-145: Streamlit for web app development and deployment (Part-1)
Day-146: Streamlit for web app development and deployment (Part-2)
Day-147: Streamlit for web app development and deployment (Part-3)
Day-148: Streamlit for web app development and deployment (Part-4)
Day-149: Streamlit for web app development and deployment (Part-5)
Day-150: Streamlit for web app development and deployment (Part-6)
Day-151: Streamlit for web app development and deployment (Part-7)
Day-152: Streamlit for web app development and deployment (Part-8)
Day-153: Streamlit for web app development and deployment (Part-9)
Day-154: Streamlit for web app development and deployment (Part-10)
Day-155: Streamlit for web app development and deployment (Part-11)
Day-156: Streamlit for web app development and deployment (Part-12)
Day-157: Streamlit for web app development and deployment (Part-13)
How to Earn using Data Science and AI skills
Day-160: Flask for web app development (Part-3)
Day-161: Flask for web app development (Part-4)
Day-162: Flask for web app development (Part-5)
Day-163: Flask for web app development (Part-6)
Day-164: Flask for web app development (Part-7)
Day-165: Flask for web app deployment (Part-8)
Day-167: FastAPI (Part-2)
Day-168: FastAPI (Part-3)
Day-169: FastAPI (Part-4)
Day-170: FastAPI (Part-5)
Day-171: FastAPI (Part-6)
Day-174: FastAPI (Part-9)
Six months of AI and Data Science Mentorship Program
    Join the conversation
    Shahid Umar 4 months ago
    The assignments for Q1 (What are the differences between the sigmoid and softmax activation functions?) and Q2 (How do we know which activation function should be used in the hidden or output layers?) have been submitted on the Discord server.
    waqas Ahmed 5 months ago
    1: The sigmoid activation function is used for binary classification, where the output should be either 0 or 1, while the softmax activation function is used for multi-class classification, where the output can be classified into more than two classes; softmax ensures that the probabilities across all classes sum to 1. 2: The choice of activation function depends on the nature of the problem: for hidden layers we mostly use the ReLU function, and for the output layer we use the sigmoid/logistic or softmax function.
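To make the distinction described in the answers concrete, here is a minimal NumPy sketch (the helper functions are illustrative, not part of the course materials): sigmoid squashes each score independently into (0, 1), while softmax normalizes a whole score vector into probabilities that sum to 1.

```python
import numpy as np

def sigmoid(x):
    # Each value is mapped to (0, 1) independently of the others.
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Subtracting the max is a standard numerical-stability trick;
    # the exponentials are then normalized into a distribution.
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
print(sigmoid(scores))        # independent probabilities; need not sum to 1
print(softmax(scores))        # approx. [0.659, 0.242, 0.099]
print(softmax(scores).sum())  # 1.0
```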
    Saman Fatima 5 months ago
    Sigmoid is primarily used for binary classification, while softmax is used for multi-class classification. Sigmoid produces independent probabilities for each class, making it suitable for binary problems. Softmax ensures that the probabilities across all classes sum to 1, making it suitable for problems with multiple classes. Common activation functions for hidden layers include ReLU (Rectified Linear Unit), Leaky ReLU, and variants like Parametric ReLU (PReLU). For binary classification problems, the sigmoid activation function is commonly used in the output layer. For multi-class classification problems, the softmax activation function is typically used in the output layer to generate a probability distribution across multiple classes.
    Mehak Iftikhar 5 months ago
    Assignment 2: How do we know which activation function we should use in the hidden or output layers?
    Answer:
    Hidden layers:
    - ReLU (Rectified Linear Unit): the most common choice for hidden layers in modern neural networks. It is computationally efficient, avoids vanishing gradients, and aids faster training. Use ReLU as a starting point and try other options if needed.
    - Tanh (Hyperbolic Tangent): similar to sigmoid but with a wider output range (-1 to 1). It can be useful in some cases but is generally less preferred than ReLU due to its potential for vanishing gradients.
    - Leaky ReLU: a variant of ReLU that addresses the "dying ReLU" problem by allowing a small, non-zero gradient for negative inputs. It can improve performance in some cases.
    - Parametric ReLU (PReLU): learns the slope of the negative part of the activation function during training. It can further improve performance over Leaky ReLU.
    Output layer:
    - Sigmoid: binary classification (two possible classes). Outputs a probability between 0 and 1 for each class.
    - Softmax: multi-class classification (more than two classes). Outputs a probability distribution over all classes, where the probabilities sum to 1.
    - Linear: regression tasks, where you want to predict a continuous value.
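As a hedged illustration of the rules of thumb in the answer above, here is a minimal Keras sketch (the layer widths and the 20-feature input shape are placeholder assumptions, not from the course): ReLU in the hidden layers, with the output activation chosen by task.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Binary classification: one sigmoid unit in the output layer.
binary_model = keras.Sequential([
    keras.Input(shape=(20,)),                 # hypothetical 20 input features
    layers.Dense(64, activation="relu"),      # hidden layer: ReLU
    layers.Dense(1, activation="sigmoid"),    # output: sigmoid
])

# Multi-class classification (a hypothetical 5 classes):
# softmax output producing a probability distribution.
multiclass_model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),      # hidden layer: ReLU
    layers.Dense(5, activation="softmax"),    # output: softmax
])

# Regression: a linear (identity) output for a continuous target.
regression_model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),      # hidden layer: ReLU
    layers.Dense(1),                          # output: linear
])
```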
    Mehak Iftikhar 5 months ago
    Assignment 1: What are the differences between the softmax and sigmoid functions?
    Answer:
    1. Number of outputs: Sigmoid produces a single output value between 0 and 1, often used for binary classification (e.g., predicting whether an email is spam or not). Softmax produces multiple output values that sum to 1, representing probabilities for each possible class in multi-class classification (e.g., predicting the category of an image among several options).
    2. Mathematical formulas: Sigmoid: σ(x) = 1 / (1 + e^(-x)). Softmax: σ(x_i) = e^(x_i) / ∑_j e^(x_j), where j iterates over all possible classes.
    3. Common use cases: Sigmoid: logistic regression for binary classification; the output layer of artificial neurons for binary decisions; an activation function in deep neural networks. Softmax: the output layer of multi-class classification models; language modeling, for predicting the next word in a sequence; machine translation, for generating probability distributions over possible translations.
    4. Output ranges: Sigmoid outputs a single value between 0 and 1. Softmax outputs multiple values between 0 and 1 that sum to 1, representing a probability distribution over classes.
    5. Saturation: Sigmoid saturates at 0 and 1, meaning its gradients become very small for large positive or negative inputs, potentially slowing down learning in neural networks. Softmax is less prone to saturation, making it more suitable for multi-class classification tasks.
    In summary: use sigmoid for binary classification or for modeling the probabilities of individual events; use softmax for multi-class classification or for modeling probability distributions over multiple options.
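The saturation point (5) in the answer above can be checked numerically. A small sketch, assuming the standard derivative σ'(x) = σ(x)(1 − σ(x)); the values in the comments are what this formula yields:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: sigma(x) * (1 - sigma(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

for x in [0.0, 2.0, 5.0, 10.0]:
    print(f"x = {x:5.1f}  sigmoid = {sigmoid(x):.6f}  gradient = {sigmoid_grad(x):.6f}")

# The gradient peaks at 0.25 (x = 0) and falls to roughly 0.000045 at
# x = 10: saturated units pass back almost no gradient, which is the
# slowdown in learning that the answer describes.
```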
    Aasher Kamal 5 months ago
    Differences between sigmoid and softmax:
    1. Use cases: Sigmoid is primarily used for binary classification problems, where there are two classes. Softmax is used for multi-class classification problems, where there are more than two classes.
    2. Output range: Sigmoid produces output values between 0 and 1 for each neuron independently. Softmax also produces output values between 0 and 1, but it ensures that the sum of all output values across neurons is 1, representing a probability distribution.
    Danish Ammar 5 months ago
    Done