This is a visualization of the Newton-Raphson algorithm in 2-D. It shows how Newton's method generates optimization steps from the 2-D quadratic functions used to approximate the objective function at each iteration.

The contour plots show the heights of quadratic surfaces over the plane. Red indicates positive height, blue indicates negative height, and gray represents zero (arbitrary units). Since constant terms don’t change solutions to optimization problems, they can be set to zero without loss of generality.

Moving the mouse cursor over the plot creates green and gray arrows. The green arrow shows the Newton step vector for the cursor's position, and the gray arrow shows the gradient vector. Newton's method finds the critical point of a quadratic function in a single step, and because these quadratics have their critical point at the origin, the green arrow always points to the origin.
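The one-step property can be checked numerically. The sketch below assumes, as in the visualization, a quadratic of the form f(x) = ½ xᵀHx with its critical point at the origin; the specific matrix and cursor position are illustrative, not taken from the visualization.

```python
import numpy as np

# Hypothetical symmetric positive-definite Hessian (illustrative values).
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])

x = np.array([1.5, -0.8])                # a "cursor position"
grad = H @ x                             # gradient of f(x) = 0.5 x^T H x
newton_step = -np.linalg.solve(H, grad)  # the green arrow: -H^{-1} grad

# One Newton step lands exactly on the critical point, the origin.
print(x + newton_step)
```

Since the gradient is Hx, the Newton step -H⁻¹(Hx) is exactly -x, so the arrow terminates at the origin regardless of where the cursor sits.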

The sliders control the eigenvalues of the real symmetric Hessian matrix defining the quadratic objective function. When one or both eigenvalues are negative, the green arrow can point uphill, revealing a well-known problem with applying this algorithm to nonconvex functions.
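The uphill behavior is easy to reproduce. The sketch below uses a hypothetical indefinite Hessian (eigenvalues +2 and -1, a saddle) and a point chosen along the negative-curvature axis; along that axis the Newton step has a positive inner product with the gradient, i.e. it moves toward the saddle point rather than downhill.

```python
import numpy as np

# Hypothetical indefinite Hessian: eigenvalues +2 and -1 (a saddle surface).
H = np.diag([2.0, -1.0])

x = np.array([0.0, 1.0])                 # point on the negative-curvature axis
grad = H @ x                             # gradient = (0, -1)
newton_step = -np.linalg.solve(H, grad)  # = (0, -1): same direction as grad

# A positive inner product with the gradient means the step points uphill.
print(grad @ newton_step > 0)
```

Newton's method jumps to the critical point of its quadratic model whether that point is a minimum, maximum, or saddle, which is why practical implementations modify or regularize the Hessian when its eigenvalues are not all positive.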

Please see this post on LinkedIn for a more complete description of this visualization.