
Gradient Descent Algorithms

It's about to go down! 👇


  • Bijon Setyawan Raya

  • January 11, 2022





    In this series, we are going to learn one of the best-known optimization algorithms: Gradient Descent. We will go through the material gradually, so that readers do not get overwhelmed by the details, especially the mathematical notation and formulas.

    In the beginning, we are going to discuss what Linear Regression is and how to use it. Even though it is a simple concept and many resources cover it, it is still a good idea to refresh your memory before delving into Gradient Descent. After that, we are going to learn the inner workings of Gradient Descent. Then, we will implement its simplest form, Batch Gradient Descent. By tweaking the formula a bit and adding complexity to the algorithm little by little, we can work our way up to Adam and many of its variations.
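
    Before any implementation, it helps to keep the core update rule in mind, since every algorithm we will cover is a refinement of the same step (this is the standard textbook formulation, not notation specific to this series):

        $$\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} J(\theta_t)$$

    where $\theta$ denotes the model parameters, $\eta$ the learning rate, and $J(\theta)$ the loss function being minimized.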

    Here is the list of algorithms that we are going to cover (a short code sketch of the first one follows the list):

    1. Batch Gradient Descent
    2. Mini Batch Gradient Descent
    3. Stochastic Gradient Descent
    4. SGD with Momentum
    5. SGD with Nesterov Momentum
    6. Many more ...
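
    To give a concrete taste of where the series starts, below is a minimal sketch of Batch Gradient Descent fitting a one-variable linear regression with mean squared error. It is an illustrative sketch only, not the exact code from the upcoming posts; the function name and hyperparameter values are my own choices.

        # A minimal illustrative sketch, not the series' final code.
        import numpy as np

        def batch_gradient_descent(x, y, lr=0.01, epochs=1000):
            """Fit y ≈ w * x + b by minimizing mean squared error."""
            w, b = 0.0, 0.0
            n = len(x)
            for _ in range(epochs):
                error = (w * x + b) - y                # prediction error per sample
                grad_w = (2.0 / n) * np.dot(error, x)  # d(MSE)/dw over the full batch
                grad_b = (2.0 / n) * np.sum(error)     # d(MSE)/db over the full batch
                w -= lr * grad_w                       # step against the gradient
                b -= lr * grad_b
            return w, b

        # Recover the slope and intercept of a noisy line y = 3x + 2.
        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 100)
        y = 3.0 * x + 2.0 + rng.normal(0.0, 0.5, 100)
        print(batch_gradient_descent(x, y))  # roughly (3.0, 2.0)

    "Batch" here means the gradient is computed over the entire dataset at every step; the later posts in the series swap this full-batch gradient for mini-batches, single samples, and momentum-based updates.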

    By the end of this series, we will be able to write optimization algorithms from scratch. Since this is my first attempt at scientific writing, please send me your feedback and suggestions via email. Also, if you notice any mistakes or typos in my posts, please do let me know.
