
Mathematics of Gradient Descent

A mathematical adventure into Gradient Descent


  • Bijon Setyawan Raya

  • February 13, 2022




    Series: Gradient Descent Algorithm (6 Parts)


    Introduction

    Remember how in the Linear Regression post I mentioned that we can use the Gradient Descent algorithm to minimize the loss function of a Linear Regression model, in this case the Mean Squared Error (MSE)? Let's see how it works at a glance.

    Assume we have random data points like in the following graph.

    We then make a regression line based on the following equation.

    $$y = \beta_0 + \beta_1 x$$

    At this point, we want to find the best values for $\beta_0$ and $\beta_1$. Since we can't come up with the best answer right away, we can start by assigning random numbers to them. Let's set $\beta_0 = 0$ and $\beta_1 = -5$, which gives the regression line in the following graph.

    The problem is that we can't simply keep guessing random numbers and plugging them into $\beta_0$ and $\beta_1$ over and over again. Clearly, we need a way to automate this.

    Let's make a 2D graph of the MSE where $\beta_0 = 0$ and $-10 \leq \beta_1 \leq 13$.

    In the graph above, when $\beta_0 = 0$ and $\beta_1 = -5$, the MSE value is $217.42$, shown as a red dot.
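
    To make this concrete, here is a minimal Python sketch of how that MSE value is computed for a given pair of coefficients. The data below is synthetic and the helper names are mine, so the exact error will differ from the $217.42$ in the graph above.

```python
import numpy as np

# Synthetic data standing in for the scattered points in the graph above;
# the post uses its own data set, so the numbers here are only illustrative.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 3 + 2 * x + rng.normal(0, 2, size=50)

def predict(x, beta0, beta1):
    """The regression line f(x) = beta0 + beta1 * x."""
    return beta0 + beta1 * x

def mse(x, y, beta0, beta1):
    """Mean Squared Error of the line defined by (beta0, beta1)."""
    errors = predict(x, beta0, beta1) - y
    return np.mean(errors ** 2)

# The randomly guessed coefficients from the text: a large error, as expected.
print(mse(x, y, beta0=0.0, beta1=-5.0))
```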

    I intentionally made the range wide to show that the cost function (MSE) traces out a parabola, since it is quadratic in $\beta_1$. It's not easy to tell exactly where the minimum point is at this moment, but it appears to be somewhere around $-3 \leq \beta_1 \leq 3$. Therefore, we are not going to guess suitable values of $\beta_1$ one by one.

    The ideal scenario is to have the red dot slowly make its way down the valley by itself.

    However, we don't want the red dot to bounce around. If it does, it means that the learning rate is too high. Don't worry if you are not sure what a learning rate is, since I have a separate post explaining what it is.

    In the next section, we are going to discuss how we can help our little friend, the red dot, make its way down the valley. Brace yourself, because it is about to go down. I swear, no pun intended.

    Since we want to minimize the MSE of the Linear Regression model, we want to find the values of $\beta_0$ and $\beta_1$ that place the regression line as close to the data points as possible.

    Let's express everything we want to do in mathematical expressions.

    The cost function

    $$J(\beta_0, \beta_1) = \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i)^2$$

    where $f(x_i) = \beta_0 + \beta_1 x_i$ is the model's prediction for the $i$-th data point and $y_i$ is its actual value.

    The objective function

    $$\min_{\beta_0, \beta_1} J(\beta_0, \beta_1)$$

    The update rules

    $$\beta_j = \beta_j - \alpha \cdot \frac{\partial}{\partial \beta_j} J(\beta_0, \beta_1)$$

    where $\beta_j$ is the coefficient we want to update (here $j \in \{0, 1\}$) and $\alpha$ is the learning rate.
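
    As a tiny Python sketch of what one application of this rule looks like in code (the gradient arguments are placeholders for the partial derivatives we are about to derive):

```python
def gradient_step(beta0, beta1, grad_beta0, grad_beta1, learning_rate):
    """One gradient descent step: move each coefficient against its gradient.

    grad_beta0 and grad_beta1 stand for the partial derivatives of the
    cost function with respect to beta0 and beta1, derived below.
    """
    new_beta0 = beta0 - learning_rate * grad_beta0
    new_beta1 = beta1 - learning_rate * grad_beta1
    return new_beta0, new_beta1
```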

    Bear with me! This is going to involve a lot of Maths, especially Calculus.

    Let's start by differentiating the cost function that we want to minimize.

    $$\begin{aligned} \frac{\partial}{\partial \beta_j} J(\beta_0, \beta_1) &= \frac{\partial}{\partial \beta_j} \left( \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i)^2 \right) \\ &= \frac{1}{N} \frac{\partial}{\partial \beta_j} \sum_{i=1}^N (f(x_i) - y_i)^2 \end{aligned}$$

    Applying the chain rule (together with the power rule) to the squared term, we then have

    $$\frac{\partial}{\partial \beta_j} J(\beta_0, \beta_1) = \frac{2}{N} \sum_{i=1}^N (f(x_i) - y_i) \cdot \frac{\partial}{\partial \beta_j} (f(x_i) - y_i)$$

    Since we want to update the $\beta_0$ and $\beta_1$ coefficients, we need the partial derivatives of the cost function with respect to each of them.

    $$\begin{aligned} \frac{\partial}{\partial \beta_0} J(\beta_0, \beta_1) &= \frac{2}{N} \sum_{i=1}^N (f(x_i) - y_i) \cdot \frac{\partial}{\partial \beta_0} (\beta_0 + \beta_1 x_i - y_i) \\ &= \frac{2}{N} \sum_{i=1}^N (f(x_i) - y_i) \end{aligned}$$

    $$\frac{\partial}{\partial \beta_1} J(\beta_0, \beta_1) = \frac{2}{N} \sum_{i=1}^N (f(x_i) - y_i)\, x_i$$

    We can remove the factor of $2$ from the two equations above by dividing the cost function, in this case the MSE, by $2$. Scaling the cost function by a positive constant does not change where its minimum lies: the same $\beta_0$ and $\beta_1$ that minimize $J$ also minimize $\frac{1}{2} J$.

    This modified cost function is called the One-Half Mean Squared Error.

    $$J(\beta_0, \beta_1) = \frac{1}{2N} \sum_{i=1}^N (f(x_i) - y_i)^2$$

    Differentiating this new cost function, the factor of $2$ cancels out in the partial derivatives with respect to $\beta_0$ and $\beta_1$,

    $$\begin{aligned} \frac{\partial}{\partial \beta_0} J(\beta_0, \beta_1) &= \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i) \\ \frac{\partial}{\partial \beta_1} J(\beta_0, \beta_1) &= \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i)\, x_i \end{aligned}$$
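
    Translated into NumPy, the two partial derivatives take only a few lines. This is a sketch that assumes `x` and `y` are NumPy arrays holding the data points.

```python
import numpy as np

def gradients(x, y, beta0, beta1):
    """Partial derivatives of the one-half MSE with respect to beta0 and beta1."""
    errors = (beta0 + beta1 * x) - y       # f(x_i) - y_i for every data point
    grad_beta0 = np.mean(errors)           # (1/N) * sum(f(x_i) - y_i)
    grad_beta1 = np.mean(errors * x)       # (1/N) * sum((f(x_i) - y_i) * x_i)
    return grad_beta0, grad_beta1
```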

    Plugging each of the equations above into the update rule for the corresponding coefficient, we get

    $$\begin{aligned} \beta_0 &= \beta_0 - \alpha \cdot \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i) \\ \beta_1 &= \beta_1 - \alpha \cdot \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i)\, x_i \end{aligned}$$

    The two equations above let us approach the minimum of the cost function by updating $\beta_0$ and $\beta_1$ over time.
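
    Putting the gradients and the update rules together gives a minimal batch gradient descent loop. This is only a sketch of the idea, not the implementation coming in the next part; the learning rate, number of epochs, and starting values are arbitrary illustrative choices.

```python
import numpy as np

def batch_gradient_descent(x, y, learning_rate=0.01, epochs=1000):
    """Fit y = beta0 + beta1 * x by repeatedly applying the update rules above."""
    beta0, beta1 = 0.0, -5.0                 # the random guess from earlier
    n = len(x)
    for _ in range(epochs):
        errors = (beta0 + beta1 * x) - y     # f(x_i) - y_i for every data point
        grad_beta0 = errors.sum() / n        # partial derivative w.r.t. beta0
        grad_beta1 = (errors * x).sum() / n  # partial derivative w.r.t. beta1
        beta0 -= learning_rate * grad_beta0
        beta1 -= learning_rate * grad_beta1
    return beta0, beta1
```

    Run on the synthetic data from the first sketch, this loop pulls $\beta_1$ from $-5$ toward the true slope, and the MSE shrinks accordingly.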

    Here are the key takeaways from this post.

    1. Gradient Descent is an iterative optimization technique that improves a model by repeatedly updating its parameters in the direction that decreases the cost function.
    2. The cost function
      $$J(\beta_0, \beta_1) = \frac{1}{2N} \sum_{i=1}^N (f(x_i) - y_i)^2$$
    3. The update rule for β0\beta_0
      $$\beta_0 = \beta_0 - \alpha \cdot \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i)$$
    4. The update rule for β1\beta_1
      $$\beta_1 = \beta_1 - \alpha \cdot \frac{1}{N} \sum_{i=1}^N (f(x_i) - y_i)\, x_i$$
    5. The weights $\beta_0$ and $\beta_1$ are updated only after all the data points have been processed.
    6. The learning rate $\alpha$ is a hyperparameter that controls how large each gradient descent step is, i.e. how fast it moves down a slope.
    7. A learning rate that is too high may cause the model's MSE to bounce around or to converge to a suboptimal minimum (see the sketch after this list).
    8. A learning rate that is too low may make convergence very slow or leave the model stuck in a suboptimal minimum.
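
    As a rough illustration of points 6 to 8, the snippet below reuses the synthetic data and the `batch_gradient_descent` sketch from earlier (so the exact numbers are only indicative) to compare a too-small, a moderate, and a too-large learning rate.

```python
# Assumes x, y, mse, and batch_gradient_descent from the earlier sketches
# are already defined in this session; illustrative only.
for lr in (1e-4, 1e-2, 1e-1):
    b0, b1 = batch_gradient_descent(x, y, learning_rate=lr, epochs=200)
    print(f"lr={lr}: MSE={mse(x, y, b0, b1):.3g}")
# Expected pattern: the tiny rate barely improves the fit, the moderate
# rate converges, and the large rate makes the MSE blow up instead.
```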

    In the next part, we are going to implement Batch Gradient Descent using the mathematical equations that we have derived. See you in the next one.
