Gradient Descent Algorithms
It's about to go down! 👇
Series
Linear Regression + Batch Gradient Descent in Python
Bijon Setyawan Raya
February 16, 2022
15 mins
Introduction
Linear Regression
Mathematics of Gradient Descent
Batch Gradient Descent
Mini Batch Gradient Descent
Stochastic Gradient Descent
In this example, we are going to use the Iris dataset from the UCI Machine Learning Repository, imported from scikit-learn. There are two features in the dataset that we are going to analyse, namely sepal_length and petal_width, shown in the highlighted lines.
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
features = iris.data
target = iris.target

sepal_length = np.array(features[:, 0])
petal_width = np.array(features[:, 3])

# map the numeric class labels to species names
species_names = list()
for i in target:
    if i == 0:
        species_names.append('setosa')
    elif i == 1:
        species_names.append('versicolor')
    else:
        species_names.append('virginica')
Before we implement Batch Gradient Descent in Python, we need to set a baseline to compare against our own implementation. So, we are going to train our dataset with the built-in Linear Regression model from scikit-learn. First, let's fit our dataset to the LinearRegression() model that we import from sklearn.linear_model.
from sklearn.linear_model import LinearRegression

linreg = LinearRegression()
linreg.fit(
    X = sepal_length.reshape(-1, 1),
    y = petal_width.reshape(-1, 1)
)

print("Intercept: ", linreg.intercept_[0])
# Intercept: -3.200215

print("First coefficient:", linreg.coef_[0][0])
# First coefficient: 0.75291757
Once we have the intercept and the coefficient values, let's make a regression line to see if the line is close to most data points.
import seaborn as sns
import matplotlib.pyplot as plt

# scatter plot of the data points, coloured by species
sns.scatterplot(
    x = sepal_length,
    y = petal_width,
    hue = species_names
)

# regression line built from the fitted intercept and coefficient
plt.plot(
    sepal_length,
    linreg.intercept_[0] + linreg.coef_[0][0] * sepal_length,
    color='red'
)
Clearly, the line is indeed very close to most of the data points. Now, let's look at the MSE of this regression line.
from sklearn.metrics import mean_squared_error

linreg_predictions = linreg.predict(sepal_length.reshape(-1, 1))
linreg_mse = mean_squared_error(petal_width, linreg_predictions)

print(f"The MSE is {linreg_mse}")
# The MSE is 0.19101500769427357
From the result we got from sklearn, the best regression line is

$$\hat{y} = -3.2 + 0.753x$$

with an MSE value of around $0.191$. This equation is going to be our baseline for this experiment, to determine how good our own Gradient Descent implementation is.
Remember that in the first part of this series, we customized the cost function, which is the MSE, simply by multiplying it by one half, and named it the One Half Mean Squared Error:

$$J(\beta_0, \beta_1) = \frac{1}{2m}\sum_{i=1}^{m}\left(\hat{y}_i - y_i\right)^2$$

We have also acquired two equations that are responsible for updating $\beta_0$ and $\beta_1$, namely

$$\beta_0 := \beta_0 - \alpha \cdot \frac{1}{m}\sum_{i=1}^{m}\left(\hat{y}_i - y_i\right)$$

$$\beta_1 := \beta_1 - \alpha \cdot \frac{1}{m}\sum_{i=1}^{m}\left(\hat{y}_i - y_i\right) x_i$$
Now, let's translate these three equations into Python code.
def predict(intercept, coefficient, x):
    # simple linear model: y_hat = intercept + coefficient * x
    return intercept + coefficient * x

def bgd(x, y, epochs, df, alpha = 0.01):
    intercept, coefficient = 2.0, -7.5

    # cost (One Half MSE) at the initial parameters
    predictions = predict(intercept, coefficient, x)
    sum_error = np.sum((predictions - y) ** 2) / (2 * len(x))
    df.loc[0] = [intercept, coefficient, sum_error]

    for epoch in range(1, epochs):
        predictions = predict(intercept, coefficient, x)

        # gradients of the cost with respect to the intercept and the coefficient
        b0_error = (1 / len(x)) * np.sum(predictions - y)
        b1_error = (1 / len(x)) * np.sum((predictions - y) * x)

        # parameter updates
        intercept = intercept - alpha * b0_error
        coefficient = coefficient - alpha * b1_error

        # cost for this epoch, saved into the dataframe
        sum_error = np.sum((predictions - y) ** 2) / (2 * len(x))
        df.loc[epoch] = [intercept, coefficient, sum_error]

    return df
The highlighted lines are where the parameter updates happen. Once the parameters are updated, we calculate the cost function for each iteration and save it into the dataframe we created.
import pandas as pd

bgd_loss = pd.DataFrame(columns=['intercept', 'coefficient', 'sum_error'])
bgd_loss = bgd(sepal_length, petal_width, epochs = 10_000, df = bgd_loss)
Below is a figure of what the regression lines look like at the 1,000th, the 5,000th, and the 10,000th iterations.
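The figure itself is generated in the accompanying notebook. As a rough sketch, assuming the bgd_loss dataframe from above, the lines at those iterations could be reproduced like this (the last row of the dataframe corresponds to the 10,000th iteration):

# epochs whose regression lines we want to draw
selected_epochs = [1_000, 5_000, len(bgd_loss) - 1]

# scatter plot of the data points, coloured by species
sns.scatterplot(x = sepal_length, y = petal_width, hue = species_names)

# draw the regression line stored for each selected epoch
for epoch in selected_epochs:
    intercept = bgd_loss.loc[epoch, 'intercept']
    coefficient = bgd_loss.loc[epoch, 'coefficient']
    plt.plot(
        sepal_length,
        intercept + coefficient * sepal_length,
        label = f"epoch {epoch}"
    )

plt.legend()
plt.show()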
After 10,000 iterations, the MSE value of our own Gradient Descent implementation is very close to our baseline of around $0.191$.
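As a quick sanity check, the plain MSE (not the One Half MSE that bgd logs) can be recomputed from the final row of bgd_loss. This is a minimal sketch, assuming the variables defined earlier:

# final parameters found by our batch gradient descent
final_intercept = bgd_loss['intercept'].iloc[-1]
final_coefficient = bgd_loss['coefficient'].iloc[-1]

# plain MSE of the final regression line, for a fair comparison with sklearn
bgd_predictions = final_intercept + final_coefficient * sepal_length
bgd_mse = mean_squared_error(petal_width, bgd_predictions)

print(f"Our BGD MSE is {bgd_mse}")
print(f"The sklearn baseline MSE is {linreg_mse}")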
Combining everything, here is how the regression line changes over time.
Let's animate the movement of the regression lines.
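The animation lives in the notebook; one way to build something similar is with matplotlib's FuncAnimation. This is only a sketch, and the 100-epoch stride, the axis limits, and the output file name are arbitrary choices here:

from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.scatter(sepal_length, petal_width)
ax.set_ylim(petal_width.min() - 1, petal_width.max() + 1)
line, = ax.plot([], [], color='red')

def update(frame):
    # redraw the regression line using the parameters stored at this epoch
    intercept = bgd_loss.loc[frame, 'intercept']
    coefficient = bgd_loss.loc[frame, 'coefficient']
    line.set_data(sepal_length, intercept + coefficient * sepal_length)
    return line,

# one frame every 100 epochs keeps the animation reasonably short
animation = FuncAnimation(fig, update, frames=range(0, len(bgd_loss), 100), interval=50)
animation.save('bgd_regression_line.gif', writer='pillow')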
Now, let's see how the intercept and the coefficient values move on a contour map.
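Here is a minimal sketch of such a contour map, assuming the One Half MSE cost from above and the parameter history in bgd_loss (the grid ranges are rough guesses chosen to cover the starting point and the optimum):

# grid of candidate intercept and coefficient values
intercepts = np.linspace(-10, 6, 100)
coefficients = np.linspace(-9, 3, 100)
B0, B1 = np.meshgrid(intercepts, coefficients)

# One Half MSE evaluated at every grid point
cost = np.zeros_like(B0)
for i in range(B0.shape[0]):
    for j in range(B0.shape[1]):
        predictions = B0[i, j] + B1[i, j] * sepal_length
        cost[i, j] = np.sum((predictions - petal_width) ** 2) / (2 * len(sepal_length))

plt.contourf(B0, B1, cost, levels = 50)
plt.colorbar()

# path taken by the intercept and the coefficient during training
plt.plot(bgd_loss['intercept'], bgd_loss['coefficient'], color = 'red')
plt.xlabel('intercept')
plt.ylabel('coefficient')
plt.show()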
Here are some key points for Batch Gradient Descent:
It uses the entire dataset to compute the gradients for every single parameter update.
The updates are smooth and stable, so the cost decreases steadily toward the minimum.
Each iteration is expensive on large datasets, since every data point must be processed before one update.
Since I wanted to present an easy and succinct explanation of Batch Gradient Descent, I decided not to include all the code, for the sake of simplicity. If you want to see the implementation in more detail, please click here to check the Python notebook on Kaggle.