Gradient Descent Algorithms
It's about to go down! 👇
Series
A mathematical adventure into Gradient Descent
Bijon Setyawan Raya
February 13, 2022
10 mins
Introduction
Linear Regression
Mathematics of Gradient Descent
Batch Gradient Descent
Mini Batch Gradient Descent
Stochastic Gradient Descent
Remember in the Linear Regression post, I mentioned that we can use a Gradient Descent algorithm to minimize the loss function, in this case the Mean Squared Error value, of a Linear Regression model. Let's see how it works at a glance.
Assume we have random data points like in the following graph.
We then make a regression line based on the following equation.

$$\hat{y} = mx + b$$

At this point, we want to find the best values for $m$ and $b$. Assuming that we can't come up with the best answer right away, we can just assign some random numbers to them. With those arbitrary guesses for $m$ and $b$, we get the regression line in the following graph.

The problem is that we can't simply keep guessing random numbers and plugging them into $m$ and $b$ over and over again. Clearly, we need a way to automate this.
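To make that concrete, here is a minimal sketch of the manual guessing step in NumPy. The data arrays `x` and `y` and the guessed coefficients are made up purely for illustration.

```python
import numpy as np

# Hypothetical data points, standing in for the scatter in the graph above.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(0, 2, size=x.size)

# Randomly guessed coefficients for the regression line y_hat = m*x + b.
m, b = 1.0, 0.0
y_hat = m * x + b

# Mean Squared Error of this particular guess.
mse = np.mean((y - y_hat) ** 2)
print(f"MSE with m={m}, b={b}: {mse:.3f}")
```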
Let's make a 2D graph showing how the MSE changes over a range of values for $m$ and $b$.

In the graph above, the MSE value for our current guesses of $m$ and $b$ is shown as a red dot.

I intentionally made the range wide to show everyone that the cost function (MSE) is bowl-shaped, like a parabola. It's not easy to tell exactly where the minimum point is just by looking at the graph, so we are not going to guess a suitable value one by one.

The ideal scenario is to have the red dot slide down the valley slowly by itself.
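One way to see that valley numerically is to evaluate the MSE over a grid of candidate $(m, b)$ pairs. This sketch reuses the hypothetical `x` and `y` arrays from the previous snippet; the grid ranges are arbitrary.

```python
# Evaluate the MSE on a grid of candidate (m, b) pairs.
m_grid = np.linspace(-2.0, 6.0, 100)
b_grid = np.linspace(-5.0, 5.0, 100)

mse_surface = np.array([
    [np.mean((y - (m_c * x + b_c)) ** 2) for b_c in b_grid]
    for m_c in m_grid
])

# The smallest value on the grid hints at where the bottom of the valley lies.
i, j = np.unravel_index(np.argmin(mse_surface), mse_surface.shape)
print(f"Lowest grid MSE near m = {m_grid[i]:.2f}, b = {b_grid[j]:.2f}")
```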
However, we don't want the red dot to bounce here and there. If it does, it means that the learning rate is too high. Don't worry if you are not sure what the learning rate is, since I have a separate post explaining it.
In the next section, we are going to discuss how we can help our little friend, The Little Red Dot, make its way down the valley. Brace yourself, because it is about to go down. I swear, no pun intended.
Since we want to minimize the MSE value of the Linear Regression model, we want to find the best values for $m$ and $b$ so that the regression line lies as close to as many data points as possible.
Let's express everything we want to do in mathematical expressions.
The cost function:

$$J(m, b) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$

The objective function:

$$\min_{m,\,b} \; J(m, b)$$

The update rules:

$$\theta \leftarrow \theta - \alpha \frac{\partial J}{\partial \theta}$$

where $\theta$ stands for the parameters we want to update, in our case $m$ and $b$, and $\alpha$ is the learning rate.
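In code, that update rule is a one-line move against the gradient for each parameter. The sketch below is only schematic: the gradient values and the learning rate are placeholders until we derive the actual partial derivatives.

```python
def gradient_step(m, b, grad_m, grad_b, alpha=0.01):
    """Apply one gradient descent update to the coefficients m and b."""
    return m - alpha * grad_m, b - alpha * grad_b

# Example with made-up gradient values: both parameters move slightly downhill.
m, b = gradient_step(m=1.0, b=0.0, grad_m=3.2, grad_b=-0.5)
print(m, b)  # 0.968, 0.005
```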
Bear with me! This is going to involve a lot of Maths, especially Calculus.
Let's simplify the cost function that we want to minimize by substituting $\hat{y}_i = mx_i + b$.

$$J(m, b) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - (mx_i + b)\right)^2$$

Differentiating the equality above with the Power Rule (and the Chain Rule), we then have

$$\frac{\partial J}{\partial \theta} = \frac{2}{n}\sum_{i=1}^{n}\left(y_i - (mx_i + b)\right)\cdot\frac{\partial}{\partial \theta}\left(y_i - (mx_i + b)\right)$$

Since we want to update the $m$ and $b$ coefficients, we need to find the partial derivative of the cost function with respect to each of those coefficients.

$$\frac{\partial J}{\partial m} = -\frac{2}{n}\sum_{i=1}^{n} x_i\left(y_i - (mx_i + b)\right)$$

$$\frac{\partial J}{\partial b} = -\frac{2}{n}\sum_{i=1}^{n}\left(y_i - (mx_i + b)\right)$$
We can remove the scalar $2$ from the two equations above by dividing the cost function, in this case the MSE equation, by $2$. Scaling the cost function by a constant does not change where it reaches its minimum MSE value.

This new modified cost function is called One Half Mean Squared Error.

$$J(m, b) = \frac{1}{2n}\sum_{i=1}^{n}\left(y_i - (mx_i + b)\right)^2$$

Differentiating this new cost function, the scalar $2$ is cancelled out of the partial derivatives with respect to $m$ and $b$,

$$\frac{\partial J}{\partial m} = -\frac{1}{n}\sum_{i=1}^{n} x_i\left(y_i - (mx_i + b)\right)$$

$$\frac{\partial J}{\partial b} = -\frac{1}{n}\sum_{i=1}^{n}\left(y_i - (mx_i + b)\right)$$
Plugging each of the equations above into the update rules for the respective coefficients, we get

$$m \leftarrow m + \frac{\alpha}{n}\sum_{i=1}^{n} x_i\left(y_i - (mx_i + b)\right)$$

$$b \leftarrow b + \frac{\alpha}{n}\sum_{i=1}^{n}\left(y_i - (mx_i + b)\right)$$

The two equations above will let us approximate the minimum value of the cost function by updating $m$ and $b$ over time.
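Here is a minimal sketch of what those two update rules look like when applied repeatedly, assuming hypothetical NumPy arrays `x` and `y` and hand-picked values for the learning rate and the number of iterations. The proper Batch Gradient Descent implementation comes in the next post.

```python
import numpy as np

# Hypothetical data points, standing in for the scatter in the graphs above.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(0, 2, size=x.size)

m, b = 0.0, 0.0   # random starting guesses
alpha = 0.01      # learning rate
n = x.size

for _ in range(1000):
    error = y - (m * x + b)                  # y_i - (m * x_i + b)
    grad_m = -(1 / n) * np.sum(x * error)    # dJ/dm of the One Half MSE
    grad_b = -(1 / n) * np.sum(error)        # dJ/db of the One Half MSE
    m -= alpha * grad_m                      # m <- m - alpha * dJ/dm
    b -= alpha * grad_b                      # b <- b - alpha * dJ/db

print(f"Estimated m = {m:.3f}, b = {b:.3f}")
```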
Here are the takeaway key points from this entire post.

- Gradient Descent minimizes the cost function (MSE) of a Linear Regression model by repeatedly nudging $m$ and $b$ in the direction opposite to the gradient.
- Dividing the MSE by $2$ (the One Half Mean Squared Error) does not move the minimum, but it makes the partial derivatives cleaner.
- The resulting update rules push $m$ and $b$ toward the bottom of the valley step by step, with the step size controlled by the learning rate $\alpha$.
In the next part, we are going to implement Batch Gradient Descent using the mathematical equations that we have derived. See you in the next one.