05 May 2024

# Adam

Combining momentum with adaptive per-parameter learning rates

04 May 2024

# RMSprop

Reducing the aggressive learning rate decay in Adagrad with the twin sibling of Adadelta

03 May 2024

# Adadelta

Reducing the aggressive learning rate decay in Adagrad

01 May 2024

# Adagrad

Parameter updates with a separate learning rate for each parameter

30 April 2024

# SGD with Nesterov

A more prescient version of Stochastic Gradient Descent with Momentum

27 April 2024

# SGD with Momentum

Accelerating the convergence of Stochastic Gradient Descent using momentum