3  Introduction to scipy.optimize

SciPy provides algorithms for optimization, integration, interpolation, eigenvalue problems, algebraic equations, differential equations, statistics and many other classes of problems. SciPy is a collection of mathematical algorithms and convenience functions built on NumPy. It adds significant power to Python by providing the user with high-level commands and classes for manipulating and visualizing data.

SciPy optimize provides functions for minimizing (or maximizing) objective functions, possibly subject to constraints. It includes solvers for nonlinear problems (with support for both local and global optimization algorithms), linear programming, constrained and nonlinear least-squares, root finding, and curve fitting.

In this notebook, we will learn how to use the scipy.optimize module to solve optimization problems. See: https://docs.scipy.org/doc/scipy/tutorial/optimize.html

Note
  • This content is based on information from the scipy.optimize package.
  • The scipy.optimize package provides several commonly used optimization algorithms. A detailed listing is available in scipy.optimize (can also be found by help(scipy.optimize)).

Common functions and objects, shared across different SciPy optimize solvers, are shown in Table 3.1.

Table 3.1: Common functions and objects, shared across different SciPy optimize solvers
| Function or Object | Description |
|---|---|
| show_options([solver, method, disp]) | Show documentation for additional options of optimization solvers. |
| OptimizeResult | Represents the optimization result. |
| OptimizeWarning | Warning issued by solvers. |

We will introduce unconstrained minimization of multivariate scalar functions in this chapter. The minimize function provides a common interface to unconstrained and constrained minimization algorithms for multivariate scalar functions in scipy.optimize. To demonstrate the minimization function, consider the problem of minimizing the Rosenbrock function of N variables:

\[ f(\mathbf{x}) = \sum_{i=1}^{N-1} 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \]

The minimum value of this function is 0, which is achieved when \(x_i = 1\).

Note that the Rosenbrock function and its derivatives are included in scipy.optimize. The implementations shown in the following sections provide examples of how to define an objective function as well as its Jacobian and Hessian functions. Objective functions in scipy.optimize expect a NumPy array (the variables to be optimized) as their first parameter and must return a float value. The exact calling signature must be f(x, *args), where x represents a NumPy array and args is a tuple of additional arguments supplied to the objective function.

3.1 Derivative-free Optimization Algorithms

Section 3.1.1 and Section 3.1.2 present two approaches that do not need gradient information to find the minimum; they rely only on function evaluations.

3.1.1 Nelder-Mead Simplex Algorithm

method='Nelder-Mead': In the example below, the minimize routine is used with the Nelder-Mead simplex algorithm (selected through the method parameter):

import numpy as np
from scipy.optimize import minimize

def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0 * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead',
               options={'xatol': 1e-8, 'disp': True})

print(res.x)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 339
         Function evaluations: 571
[1. 1. 1. 1. 1.]

The simplex algorithm is probably the simplest way to minimize a well-behaved function. It requires only function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum.

3.1.2 Powell’s Method

Another optimization algorithm that needs only function calls to find the minimum is Powell’s method, which can be selected by setting the method parameter to 'powell' in the minimize function.
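As a quick illustration (a minimal sketch reusing the rosen function and imports from above; iteration counts will vary by SciPy version), Powell's method is selected in the same way as Nelder-Mead:

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='powell',
               options={'xtol': 1e-8, 'disp': True})

print(res.x)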

To demonstrate how to supply additional arguments to an objective function, let’s consider minimizing the Rosenbrock function with an additional scaling factor \(a\) and an offset \(b\):

\[ f(\mathbf{x}, a, b) = \sum_{i=1}^{N-1} a (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 + b \]

You can achieve this using the minimize routine with the example parameters \(a=0.5\) and \(b=1\):

def rosen_with_args(x, a, b):
    """The Rosenbrock function with additional arguments"""
    return sum(a * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0) + b

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen_with_args, x0, method='nelder-mead',
               args=(0.5, 1.), options={'xatol': 1e-8, 'disp': True})

print(res.x)
Optimization terminated successfully.
         Current function value: 1.000000
         Iterations: 319
         Function evaluations: 525
[1.         1.         1.         1.         0.99999999]

As an alternative to using the args parameter of minimize, you can wrap the objective function in a new function that accepts only x. This approach is also useful when it is necessary to pass additional parameters to the objective function as keyword arguments.

def rosen_with_args(x, a, *, b):  # b is a keyword-only argument
    return sum(a * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0) + b

def wrapped_rosen_without_args(x):
    return rosen_with_args(x, 0.5, b=1.)  # pass in `a` and `b`

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(wrapped_rosen_without_args, x0, method='nelder-mead',
               options={'xatol': 1e-8,})

print(res.x)
[1.         1.         1.         1.         0.99999999]

Another alternative is to use functools.partial.

from functools import partial

partial_rosen = partial(rosen_with_args, a=0.5, b=1.)
res = minimize(partial_rosen, x0, method='nelder-mead',
               options={'xatol': 1e-8,})

print(res.x)
[1.         1.         1.         1.         0.99999999]

3.2 Gradient-based optimization algorithms

3.2.1 An Introductory Example: Broyden-Fletcher-Goldfarb-Shanno Algorithm (BFGS)

This section introduces an optimization algorithm that uses gradient information to find the minimum. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm (selected by setting method='BFGS') is an optimization algorithm that aims to converge quickly to the solution. This algorithm uses the gradient of the objective function. If the gradient is not provided by the user, it is estimated using first-differences. The BFGS method typically requires fewer function calls compared to the simplex algorithm, even when the gradient needs to be estimated.

Example 3.1 (BFGS) To demonstrate the BFGS algorithm, let’s use the Rosenbrock function again. The gradient of the Rosenbrock function is a vector described by the following mathematical expression:

\[\begin{align} \frac{\partial f}{\partial x_j} &= \sum_{i=1}^{N} 200(x_i - x_{i-1}^2)(\delta_{i,j} - 2x_{i-1}\delta_{i-1,j}) - 2(1 - x_{i-1})\delta_{i-1,j} \\ &= 200(x_j - x_{j-1}^2) - 400x_j(x_{j+1} - x_j^2) - 2(1 - x_j) \end{align}\]

This expression is valid for interior derivatives, but special cases are:

\[ \frac{\partial f}{\partial x_0} = -400x_0(x_1 - x_0^2) - 2(1 - x_0) \]

\[ \frac{\partial f}{\partial x_{N-1}} = 200(x_{N-1} - x_{N-2}^2) \]

Here’s a Python function that computes this gradient:

def rosen_der(x):
    """Gradient of the Rosenbrock function."""
    xm = x[1:-1]      # interior components
    xm_m1 = x[:-2]    # components shifted by -1
    xm_p1 = x[2:]     # components shifted by +1
    der = np.zeros_like(x)
    # interior derivatives
    der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
    # special cases at the boundaries
    der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
    der[-1] = 200*(x[-1]-x[-2]**2)
    return der

You can specify this gradient information in the minimize function using the jac parameter as illustrated below:

res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
               options={'disp': True})

print(res.x)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 25
         Function evaluations: 30
         Gradient evaluations: 30
[1.00000004 1.0000001  1.00000021 1.00000044 1.00000092]

3.2.2 Background and Basics for Gradient-based Optimization

3.2.3 Gradient

The gradient \(\nabla f(\mathbf{x})\) for a scalar function \(f(\mathbf{x})\) with \(n\) variables is defined by its partial derivatives:

\[ \nabla f(\mathbf{x}) = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right] \]

3.2.4 Jacobian Matrix

The Jacobian matrix \(J(\mathbf{x})\) for a vector-valued function \(F(\mathbf{x}) = [f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})]\) is defined as:

\(J(\mathbf{x}) = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \ldots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \ldots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \ldots & \frac{\partial f_m}{\partial x_n} \end{bmatrix}\)

It consists of the first-order partial derivatives and therefore collects the gradients of all components of a vector-valued function.

Example 3.2 (Jacobian matrix) Consider a vector-valued function \(f : \mathbb{R}^2 \rightarrow \mathbb{R}^3\) defined as follows: \[f(\mathbf{x}) = \begin{bmatrix} x_1^2 + 2x_2 \\ 3x_1 - \sin(x_2) \\ e^{x_1 + x_2} \end{bmatrix}\]

Let’s compute the partial derivatives and construct the Jacobian matrix:

\(\frac{\partial f_1}{\partial x_1} = 2x_1, \quad \frac{\partial f_1}{\partial x_2} = 2\)

\(\frac{\partial f_2}{\partial x_1} = 3, \quad \frac{\partial f_2}{\partial x_2} = -\cos(x_2)\)

\(\frac{\partial f_3}{\partial x_1} = e^{x_1 + x_2}, \quad \frac{\partial f_3}{\partial x_2} = e^{x_1 + x_2}\)

So, the Jacobian matrix is:

\[J(\mathbf{x}) = \begin{bmatrix} 2x_1 & 2 \\ 3 & -\cos(x_2) \\ e^{x_1 + x_2} & e^{x_1 + x_2} \end{bmatrix}\]

This Jacobian matrix provides information about how small changes in the input variables \(x_1\) and \(x_2\) affect the corresponding changes in each component of the output vector.
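To double-check hand calculations like this, the Jacobian can be computed symbolically; a small sketch, assuming SymPy is available:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.Matrix([x1**2 + 2*x2, 3*x1 - sp.sin(x2), sp.exp(x1 + x2)])
# symbolic Jacobian with respect to (x1, x2)
print(F.jacobian([x1, x2]))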

3.2.5 Hessian Matrix

The Hessian matrix \(H(\mathbf{x})\) for a scalar function \(f(\mathbf{x})\) is defined as:

\(H(\mathbf{x}) = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \ldots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \ldots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \ldots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}\)

So, the Hessian matrix consists of the second-order derivatives of the function. It provides information about the local curvature of the function with respect to changes in the input variables.

Example 3.3 (Hessian matrix) Consider a scalar-valued function: \[f(\mathbf{x}) = x_1^2 + 2x_2^2 + \sin(x_1 x_2)\]

The Hessian matrix of this scalar-valued function is the matrix of its second-order partial derivatives with respect to the input variables: \[H(\mathbf{x}) = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} \end{bmatrix}\]

Let’s compute the second-order partial derivatives and construct the Hessian matrix:

\[\begin{align} \frac{\partial^2 f}{\partial x_1^2} &= 2 - x_2^2 \sin(x_1 x_2)\\ \frac{\partial^2 f}{\partial x_1 \partial x_2} &= \cos(x_1 x_2) - x_1 x_2 \sin(x_1 x_2)\\ \frac{\partial^2 f}{\partial x_2 \partial x_1} &= \cos(x_1 x_2) - x_1 x_2 \sin(x_1 x_2)\\ \frac{\partial^2 f}{\partial x_2^2} &= 4 - x_1^2 \sin(x_1 x_2) \end{align}\]

So, the Hessian matrix is:

\[H(\mathbf{x}) = \begin{bmatrix} 2 - x_2^2 \sin(x_1 x_2) & \cos(x_1 x_2) - x_1 x_2 \sin(x_1 x_2) \\ \cos(x_1 x_2) - x_1 x_2 \sin(x_1 x_2) & 4 - x_1^2 \sin(x_1 x_2) \end{bmatrix}\]
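These entries can be verified symbolically; a small sketch, assuming SymPy is available:

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + 2*x2**2 + sp.sin(x1 * x2)
# symbolic Hessian with respect to (x1, x2)
print(sp.hessian(f, (x1, x2)))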

3.2.6 Gradient for Optimization

In optimization, the goal is to find the minimum or maximum of a function. Gradient-based optimization methods utilize information about the gradient (or derivative) of the function to guide the search for the optimal solution. This is particularly useful when dealing with complex, high-dimensional functions where an exhaustive search is impractical.

The gradient descent method can be divided into the following steps:

  • Initialize: start with an initial guess for the parameters of the function to be optimized.
  • Compute Gradient: Calculate the gradient (partial derivatives) of the function with respect to each parameter at the current point. The gradient indicates the direction of the steepest increase in the function.
  • Update Parameters: Adjust the parameters in the opposite direction of the gradient, scaled by a learning rate. This step aims to move towards the minimum of the function:
    • \(x_{k+1} = x_k - \alpha \, \nabla f(x_{k})\)
    • \(x_{k}\) is the current parameter vector or point in the parameter space.
    • \(\alpha\) is the learning rate, a positive scalar that determines the step size in each iteration.
    • \(\nabla f(x_k)\) is the gradient of the objective function at \(x_k\).
  • Iterate: Repeat the above steps until convergence or a predefined number of iterations. Convergence is typically determined when the change in the function value or parameters becomes negligible.

Example 3.4 (Gradient Descent) We consider a simple quadratic function as an example: \[ f(x, y) = x^2 + 4x + y^2 + 2y + 4. \]

We’ll use gradient descent to find the minimum of this function.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Define the quadratic function
def quadratic_function(x, y):
    return x**2 + 4*x + y**2 + 2*y + 4

# Define the gradient of the quadratic function
def gradient_quadratic_function(x, y):
    grad_x = 2*x + 4
    grad_y = 2*y + 2
    return np.array([grad_x, grad_y])

# Gradient Descent for optimization in 2D
def gradient_descent(initial_point, learning_rate, num_iterations):
    points = [np.array(initial_point)]
    for _ in range(num_iterations):
        current_point = points[-1]
        gradient = gradient_quadratic_function(*current_point)
        new_point = current_point - learning_rate * gradient
        points.append(new_point)
    return points

# Visualization of optimization process with 3D surface and consistent arrow sizes
def plot_optimization_process_3d_consistent_arrows(points):
    fig = plt.figure(figsize=(10, 8))
    ax = fig.add_subplot(111, projection='3d')

    x_vals = np.linspace(-10, 2, 100)
    y_vals = np.linspace(-10, 2, 100)
    X, Y = np.meshgrid(x_vals, y_vals)
    Z = quadratic_function(X, Y)

    ax.plot_surface(X, Y, Z, cmap='viridis', alpha=0.6)
    ax.scatter(*zip(*points), [quadratic_function(*p) for p in points], c='red', label='Optimization Trajectory')

    for i in range(len(points) - 1):  
        x, y = points[i]
        dx, dy = points[i + 1] - points[i]
        dz = quadratic_function(*(points[i + 1])) - quadratic_function(*points[i])
        gradient_length = 0.5

        ax.quiver(x, y, quadratic_function(*points[i]), dx, dy, dz, color='blue', length=gradient_length, normalize=False, arrow_length_ratio=0.1)

    ax.set_title('Gradient-Based Optimization with 2D Quadratic Function')
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('f(x, y)')
    ax.legend()
    plt.show()

# Initial guess and parameters
initial_guess = [-9.0, -9.0]
learning_rate = 0.2
num_iterations = 10

# Run gradient descent in 2D and visualize the optimization process with 3D surface and consistent arrow sizes
trajectory = gradient_descent(initial_guess, learning_rate, num_iterations)
plot_optimization_process_3d_consistent_arrows(trajectory)

3.2.7 Newton Method

Initialization: Start with an initial guess for the optimal solution: \(x_0\).

Iteration: Repeat the following three steps until convergence or a predefined stopping criterion is met:

  1. Calculate the gradient (\(\nabla\)) and the Hessian matrix (\(\nabla^2\)) of the objective function at the current point: \[\nabla f(x_k) \quad \text{and} \quad \nabla^2 f(x_k)\]

  2. Update the current solution using the Newton-Raphson update formula \[x_{k+1} = x_k - [\nabla^2 f(x_k)]^{-1} \nabla f(x_k),\] where

    • \(\nabla f(x_k)\): The gradient (first derivative) of the objective function with respect to the variable \(x\), evaluated at the current solution \(x_k\).
    • \(\nabla^2 f(x_k)\): The Hessian matrix (second derivative) of the objective function with respect to \(x\), evaluated at the current solution \(x_k\).
    • \(x_k\): The current solution or point in the optimization process.
    • \([\nabla^2 f(x_k)]^{-1}\): The inverse of the Hessian matrix at the current point, representing the curvature of the objective function.
    • \(x_{k+1}\): The updated solution or point after applying the Newton-Raphson update.
  3. Check for convergence.

Example 3.5 (Newton Method) We want to minimize the Rosenbrock function using its gradient (which, for a scalar objective function, equals the Jacobian) and its Hessian. The following code implements the classical Newton iteration directly:

def rosenbrock(x):
    return 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2

def rosenbrock_gradient(x):
    dfdx0 = -400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0])
    dfdx1 = 200 * (x[1] - x[0]**2)
    return np.array([dfdx0, dfdx1])

def rosenbrock_hessian(x):
    d2fdx0 = 1200 * x[0]**2 - 400 * x[1] + 2
    d2fdx1 = -400 * x[0]
    return np.array([[d2fdx0, d2fdx1], [d2fdx1, 200]])

def classical_newton_optimization_2d(initial_guess, tol=1e-6, max_iter=100):
    x = initial_guess.copy()

    for i in range(max_iter):
        gradient = rosenbrock_gradient(x)
        hessian = rosenbrock_hessian(x)

        # Solve the linear system H * d = -g for d
        d = np.linalg.solve(hessian, -gradient)

        # Update x
        x += d

        # Check for convergence
        if np.linalg.norm(gradient, ord=np.inf) < tol:
            break

    return x

# Initial guess
initial_guess_2d = np.array([0.0, 0.0])

# Run classical Newton optimization for the 2D Rosenbrock function
result_2d = classical_newton_optimization_2d(initial_guess_2d)

# Print the result
print("Optimal solution:", result_2d)
print("Objective value:", rosenbrock(result_2d))
Optimal solution: [1. 1.]
Objective value: 0.0

3.2.8 BFGS-Algorithm

BFGS is an optimization algorithm designed for unconstrained optimization problems. It belongs to the class of quasi-Newton methods and is known for its efficiency in finding the minimum of a smooth, unconstrained objective function.

3.2.9 Procedure:

  1. Initialization:
    • Start with an initial guess for the parameters of the objective function.
    • Initialize an approximation of the Hessian matrix, denoted by \(H\) (commonly the identity matrix; many implementations instead maintain its inverse directly).
  2. Iterative Update:
    • At each iteration, compute the gradient vector at the current point.
    • Update the parameters using the quasi-Newton step, which involves the Hessian approximation and the gradient: \[x_{k+1} = x_k - H_k^{-1} \nabla f(x_k).\]
    • Update the Hessian approximation using the BFGS update formula \[H_{k+1} = H_k + \frac{\Delta g_k \Delta g_k^T}{\Delta g_k^T \Delta x_k} - \frac{H_k \Delta x_k \Delta x_k^T H_k}{\Delta x_k^T H_k \Delta x_k},\] where:
    • \(x_k\) and \(x_{k+1}\) are the parameter vectors at the current and updated iterations, respectively.
    • \(\nabla f(x_k)\) is the gradient vector at the current iteration.
    • \(\Delta x_k = x_{k+1} - x_k\) is the change in parameter vectors.
    • \(\Delta g_k = \nabla f(x_{k+1}) - \nabla f(x_k)\) is the change in gradient vectors.
  3. Convergence:
    • Repeat the iterative update until the optimization converges. Convergence is typically determined by reaching a sufficiently low gradient or parameter change.

Example 3.6 (BFGS for Rosenbrock)  

import numpy as np
from scipy.optimize import minimize

# Define the 2D Rosenbrock function
def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Initial guess
initial_guess = np.array([0.0, 0.0])

# Minimize the Rosenbrock function using BFGS
minimize(rosenbrock, initial_guess, method='BFGS')
  message: Optimization terminated successfully.
  success: True
   status: 0
      fun: 2.843987518235081e-11
        x: [ 1.000e+00  1.000e+00]
      nit: 19
      jac: [ 3.987e-06 -2.844e-06]
 hess_inv: [[ 4.948e-01  9.896e-01]
            [ 9.896e-01  1.984e+00]]
     nfev: 72
     njev: 24

3.2.10 Visualization BFGS for Rosenbrock

A visualization of the BFGS search process on Rosenbrock’s function can be found here: https://upload.wikimedia.org/wikipedia/de/f/ff/Rosenbrock-bfgs-animation.gif

Tasks
  • In which situations is it possible to use algorithms like BFGS, but not the classical Newton method?
  • Investigate the Newton-CG method
  • Use an objective function of your choice and apply Newton-CG
  • Compare the Newton-CG method with the BFGS. What are the similarities and differences between the two algorithms?

3.3 Gradient- and Hessian-based Optimization Algorithms

Section 3.3.1, Section 3.3.2, and Section 3.3.3 present optimization algorithms that use gradient and Hessian information to find the minimum.

The methods Newton-CG, trust-ncg, and trust-krylov are suitable for large-scale problems (problems with thousands of variables). This is because the conjugate gradient algorithm approximately solves the trust-region subproblem (or inverts the Hessian) by iteration, without an explicit Hessian factorization. Since only the product of the Hessian with an arbitrary vector is needed, these algorithms are especially suited for sparse Hessians, allowing low storage requirements and significant time savings for such sparse problems.

3.3.1 Newton-Conjugate-Gradient Algorithm

Newton-Conjugate Gradient algorithm is a modified Newton’s method and uses a conjugate gradient algorithm to (approximately) invert the local Hessian.
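A minimal sketch of calling Newton-CG, using the Rosenbrock function, gradient, and Hessian helpers that SciPy itself ships in scipy.optimize:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='Newton-CG',
               jac=rosen_der, hess=rosen_hess,
               options={'xtol': 1e-8, 'disp': True})
print(res.x)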

3.3.2 Trust-Region Newton-Conjugate-Gradient Algorithm

3.3.3 Trust-Region Truncated Generalized Lanczos / Conjugate Gradient Algorithm
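Both trust-ncg (Section 3.3.2) and trust-krylov (this section) accept either a full Hessian via hess or, as emphasized above, only a Hessian-vector product via hessp. A minimal sketch using SciPy's built-in rosen_hess_prod (the two methods are called the same way; the exact iteration counts will differ):

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
for method in ('trust-ncg', 'trust-krylov'):
    res = minimize(rosen, x0, method=method,
                   jac=rosen_der, hessp=rosen_hess_prod)
    print(method, res.x, res.nit)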

3.4 Global Optimization

Global optimization aims to find the global minimum of a function within given bounds, in the presence of potentially many local minima. Typically, global minimizers efficiently search the parameter space, while using a local minimizer (e.g., minimize) under the hood.

3.4.1 Local vs Global Optimization

3.4.1.1 Local Optimizer:

  • Seeks the optimum in a specific region of the search space
  • Tends to exploit the local environment, to find solutions in the immediate area
  • Highly sensitive to initial conditions; may converge to different local optima based on the starting point
  • Often computationally efficient for low-dimensional problems but may struggle with high-dimensional or complex search spaces
  • Commonly used in situations where the objective is to refine and improve existing solutions

3.4.1.2 Global Optimizer:

  • Explores the entire search space to find the global optimum
  • Emphasizes exploration over exploitation, aiming to search broadly and mitigate the risk of premature convergence to local optima
  • Less sensitive to initial conditions, designed to navigate diverse regions of the search space
  • Equipped to handle high-dimensional and complex problems, though computational demands may vary depending on the specific algorithm
  • Preferred for applications where a comprehensive search of the solution space is crucial, such as in parameter tuning, machine learning, and complex engineering design

Local vs Global Optimum

3.4.2 Global Optimizers in SciPy

SciPy contains a number of good global optimizers. Here, we’ll use those on the same objective function, namely the (aptly named) eggholder function:

def eggholder(x):
    return (-(x[1] + 47) * np.sin(np.sqrt(abs(x[0]/2 + (x[1]  + 47))))
            -x[0] * np.sin(np.sqrt(abs(x[0] - (x[1]  + 47)))))

bounds = [(-512, 512), (-512, 512)]
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

x = np.arange(-512, 513)
y = np.arange(-512, 513)
xgrid, ygrid = np.meshgrid(x, y)
xy = np.stack([xgrid, ygrid])

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.view_init(45, -45)
ax.plot_surface(xgrid, ygrid, eggholder(xy), cmap='terrain')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('eggholder(x, y)')
plt.show()

We now use the global optimizers to obtain the minimum and the function value at the minimum. We’ll store the results in a dictionary so we can compare different optimization results later.

from scipy import optimize
results = dict()
results['shgo'] = optimize.shgo(eggholder, bounds)
results['shgo']
 message: Optimization terminated successfully.
 success: True
     fun: -935.3379515605789
    funl: [-9.353e+02]
       x: [ 4.395e+02  4.540e+02]
      xl: [[ 4.395e+02  4.540e+02]]
     nit: 1
    nfev: 45
   nlfev: 40
   nljev: 10
   nlhev: 0
results['DA'] = optimize.dual_annealing(eggholder, bounds)
results['DA']
 message: ['Maximum number of iteration reached']
 success: True
  status: 0
     fun: -959.6406627208487
       x: [ 5.120e+02  4.042e+02]
     nit: 1000
    nfev: 4079
    njev: 26
    nhev: 0

All optimizers return an OptimizeResult, which in addition to the solution contains information on the number of function evaluations, whether the optimization was successful, and more. For brevity, we won’t show the full output of the other optimizers:

results['DE'] = optimize.differential_evolution(eggholder, bounds)
results['DE']
             message: Optimization terminated successfully.
             success: True
                 fun: -959.6406627206917
                   x: [ 5.120e+02  4.042e+02]
                 nit: 50
                nfev: 1542
          population: [[ 5.120e+02  4.042e+02]
                       [ 5.120e+02  4.052e+02]
                       ...
                       [ 5.119e+02  4.050e+02]
                       [ 5.120e+02  4.044e+02]]
 population_energies: [-9.596e+02 -9.583e+02 ... -9.582e+02 -9.596e+02]
                 jac: [-3.386e+00  3.411e-05]

shgo has a second method, which returns all local minima rather than only what it thinks is the global minimum:

results['shgo_sobol'] = optimize.shgo(eggholder, bounds, n=200, iters=5,
                                      sampling_method='sobol')
results['shgo_sobol']
 message: Optimization terminated successfully.
 success: True
     fun: -959.640662720831
    funl: [-9.596e+02 -9.353e+02 ... -6.591e+01 -6.387e+01]
       x: [ 5.120e+02  4.042e+02]
      xl: [[ 5.120e+02  4.042e+02]
           [ 4.395e+02  4.540e+02]
           ...
           [ 3.165e+01 -8.523e+01]
           [ 5.865e+01 -5.441e+01]]
     nit: 5
    nfev: 3529
   nlfev: 2327
   nljev: 634
   nlhev: 0

We’ll now plot all found minima on a heatmap of the function:

fig = plt.figure()
ax = fig.add_subplot(111)
im = ax.imshow(eggholder(xy), interpolation='bilinear', origin='lower',
               cmap='gray')
ax.set_xlabel('x')
ax.set_ylabel('y')

def plot_point(res, marker='o', color=None):
    ax.plot(512+res.x[0], 512+res.x[1], marker=marker, color=color, ms=10)

plot_point(results['DE'], color='c')  # differential_evolution - cyan
plot_point(results['DA'], color='w')  # dual_annealing.        - white

# SHGO produces multiple minima, plot them all (with a smaller marker size)
plot_point(results['shgo'], color='r', marker='+')
plot_point(results['shgo_sobol'], color='r', marker='x')
for i in range(results['shgo_sobol'].xl.shape[0]):
    ax.plot(512 + results['shgo_sobol'].xl[i, 0],
            512 + results['shgo_sobol'].xl[i, 1],
            'ro', ms=2)

ax.set_xlim([-4, 514*2])
ax.set_ylim([-4, 514*2])
plt.show()

3.4.3 Dual Annealing Optimization

This function implements the Dual-Annealing optimization, which is a variant of the famous simulated annealing optimization.

Simulated Annealing is a probabilistic optimization algorithm inspired by the annealing process in metallurgy. The algorithm is designed to find a good or optimal global solution to a problem by exploring the solution space in a controlled and adaptive manner.

Annealing in Metallurgy

Simulated Annealing draws inspiration from the physical process of annealing in metallurgy. Just as metals are gradually cooled to achieve a more stable state, Simulated Annealing uses a similar approach to explore solution spaces in the digital world.

Heating Phase: In metallurgy, a metal is initially heated to a high temperature. At this elevated temperature, the atoms or molecules in the material become more energetic and chaotic, allowing the material to overcome energy barriers and defects.

Analogy Simulated Annealing (Exploration Phase): In Simulated Annealing, the algorithm starts with a high “temperature,” which encourages exploration of the solution space. At this stage, the algorithm is more likely to accept solutions that are worse than the current one, allowing it to escape local optima and explore a broader region of the solution space.

Cooling Phase: The material is then gradually cooled at a controlled rate. As the temperature decreases, the atoms or molecules start to settle into more ordered and stable arrangements. The slow cooling rate is crucial to avoid the formation of defects and to ensure the material reaches a well-organized state.

Analogy Simulated Annealing (Exploitation Phase): As the algorithm progresses, the temperature is gradually reduced over time according to a cooling schedule. This reduction simulates the cooling process in metallurgy. With lower temperatures, the algorithm becomes more selective and tends to accept only better solutions, focusing on refining and exploiting the promising regions discovered during the exploration phase.

3.4.3.1 Key Concepts

Temperature: The temperature is a parameter that controls the likelihood of accepting worse solutions. We start with a high temperature, allowing the algorithm to explore the solution space broadly. The temperature decreases over the iterations of the algorithm.

Cooling Schedule: The temperature parameter is reduced according to this schedule. The analogy to the annealing of metals: a slower cooling rate allows the material to reach a more stable state.

Neighborhood Exploration: At each iteration, the algorithm explores the neighborhood of the current solution. The neighborhood is defined by small perturbations or changes to the current solution.

Acceptance Probability: The algorithm evaluates the objective function for the new solution in the neighborhood. If the new solution is better, it is accepted. If the new solution is worse, it may still be accepted with a certain probability. This probability is determined by both the difference in objective function values and the current temperature.

For minimization: If \[ f(x_{t}) > f(x_{t+1}), \] then \[ P(\text{accept new point}) = 1. \]

If \[ f(x_{t}) < f(x_{t+1}), \] then \[ P(\text{accept new point}) = \exp\!\left(-\frac{f(x_{t+1}) - f(x_{t})}{T_t}\right). \]

Termination Criterion: The algorithm continues iterations until a termination condition is met. This could be a fixed number of iterations, reaching a specific temperature threshold, or achieving a satisfactory solution.

3.4.3.2 Steps

1. Initialization: Set an initial temperature (\(T_{0}\)) and an initial solution (\(x_{0}\)). The temperature is typically set high initially to encourage exploration.

2. Generate a Neighbor: Perturb the current solution to generate a neighboring solution. The perturbation can be random or follow a specific strategy.

3. Evaluate the Neighbor: Evaluate the objective function for the new solution in the neighborhood.

4. Accept or Reject the Neighbor:
  • If the new solution is better (lower cost for minimization problems or higher for maximization problems), accept it as the new current solution.
  • If the new solution is worse, accept it with a probability determined by the acceptance probability function mentioned above. The probability is influenced by the difference in objective function values and the current temperature.

5. Cooling: Reduce the temperature according to a cooling schedule. The cooling schedule defines how fast the temperature decreases over time. Common cooling schedules include exponential or linear decay.

6. Termination Criterion: Repeat the iterations (2-5) until a termination condition is met. This could be a fixed number of iterations, reaching a specific temperature threshold, or achieving a satisfactory solution.
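The steps above can be summarized in a short, generic sketch (plain simulated annealing with an exponential cooling schedule, not SciPy's dual annealing; the step size, cooling rate, and iteration count are illustrative):

import numpy as np

def simulated_annealing(f, x0, T0=10.0, cooling=0.95, step=0.5,
                        n_iter=2000, rng=None):
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)         # 1. initial solution and temperature
    fx, T = f(x), T0
    best_x, best_f = x.copy(), fx
    for _ in range(n_iter):
        x_new = x + rng.normal(scale=step, size=x.size)  # 2. random neighbour
        f_new = f(x_new)                                 # 3. evaluate neighbour
        # 4. Metropolis acceptance rule
        if f_new < fx or rng.random() < np.exp(-(f_new - fx) / T):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cooling                                     # 5. cooling schedule
    return best_x, best_f                                # 6. best solution found

# usage: minimize a simple quadratic bowl
print(simulated_annealing(lambda x: np.sum(x**2), x0=[4.0, -3.0], rng=42))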

3.4.3.3 Scipy Implementation of the Dual Annealing Algorithm

In SciPy, we use the dual_annealing optimizer, an extension of simulated annealing for bound-constrained continuous problems that combines classical and fast simulated annealing with a local search on accepted locations.

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import dual_annealing

def rastrigin_function(x):
    return 20 + x[0]**2 - 10 * np.cos(2 * np.pi * x[0]) + x[1]**2 - 10 * np.cos(2 * np.pi * x[1])

# Define the Rastrigin function for visualization
def rastrigin_visualization(x, y):
    return 20 + x**2 - 10 * np.cos(2 * np.pi * x) + y**2 - 10 * np.cos(2 * np.pi * y)

# Create a meshgrid for visualization
x_vals = np.linspace(-10, 10, 100)
y_vals = np.linspace(-10, 10, 100)
x_mesh, y_mesh = np.meshgrid(x_vals, y_vals)
z_mesh = rastrigin_visualization(x_mesh, y_mesh)

# Visualize the Rastrigin function
plt.figure(figsize=(10, 8))
contour = plt.contour(x_mesh, y_mesh, z_mesh, levels=50, cmap='viridis')
plt.colorbar(contour, label='Rastrigin Function Value')
plt.title('Visualization of the 2D Rastrigin Function')

# Optimize the Rastrigin function using dual annealing
result = dual_annealing(func=rastrigin_function,
                        x0=[5.0, 3.0],                      # initial guess
                        bounds=[(-10, 10), (-10, 10)],
                        initial_temp=5230,                  # initial temperature
                        restart_temp_ratio=2e-05,           # temperature ratio that triggers reannealing
                        seed=42)

# Plot the optimized point
optimal_x, optimal_y = result.x
plt.plot(optimal_x, optimal_y, 'ro', label='Optimal Point')

# Set labels and legend
plt.xlabel('X')
plt.ylabel('Y')
plt.legend()

# Show the plot
plt.show()

# Display the optimization result
print("Optimal parameters:", result.x)
print("Minimum value of the Rastrigin function:", result.fun)

Optimal parameters: [-4.60133247e-09 -4.31928660e-09]
Minimum value of the Rastrigin function: 7.105427357601002e-15
result
 message: ['Maximum number of iteration reached']
 success: True
  status: 0
     fun: 7.105427357601002e-15
       x: [-4.601e-09 -4.319e-09]
     nit: 1000
    nfev: 4088
    njev: 29
    nhev: 0

3.4.3.4 Example of Comparison Table:

| Function | Algorithm | Best Objective Function Value | Number of Iterations | Number of Evaluations | Local/Global Optimizer | Gradient-Based/Free Algorithm | Other Comments |
|---|---|---|---|---|---|---|---|
| Rastrigin | Algorithm1 | 1.23 | 1000 | 5000 | Global | Free | ? |
| Rastrigin | Algorithm2 | 0.95 | 800 | 4000 | Local | Gradient-Based | ? |
| Rastrigin | Algorithm3 | 1.45 | 1200 | 6000 | Global | Free | ? |
| Rosenbrock | Algorithm1 | 2.56 | 1500 | 7500 | Local | Gradient-Based | ? |
| Rosenbrock | Algorithm2 | 2.10 | 1200 | 6000 | Global | Free | ? |
| Rosenbrock | Algorithm3 | 2.75 | 1800 | 9000 | Local | Gradient-Based | ? |

3.4.4 Differential Evolution

Differential Evolution is an algorithm used for finding the global minimum of multivariate functions. It is stochastic in nature (does not use gradient methods), and can search large areas of candidate space, but often requires larger numbers of function evaluations than conventional gradient based techniques.

Differential Evolution (DE) is a versatile and global optimization algorithm inspired by natural selection and evolutionary processes. Introduced by Storn and Price in 1997, DE mimics the survival-of-the-fittest principle by evolving a population of candidate solutions through iterative mutation, crossover, and selection operations. This nature-inspired approach enables DE to efficiently explore complex and non-linear solution spaces, making it a widely adopted optimization technique in diverse fields such as engineering, finance, and machine learning.

3.4.5 Procedure

The procedure boils down to the following steps:

  1. Initialization:
    • Create a population of candidate solutions randomly within the specified search space.
  2. Mutation:
    • For each individual in the population, select three distinct individuals (vectors) randomly.
    • Generate a mutant vector V by combining these three vectors with a scaling factor.
  3. Crossover:
    • Perform the crossover operation between the target vector U and the mutant vector V. Information from both vectors is used to create a trial vector.

Crossover Strategies in DE

  • There are several crossover strategies in the literature. Two examples are:

Binomial Crossover:

In this strategy, each component of the trial vector is selected from the mutant vector with a probability equal to the crossover rate (\(CR\)). This means that each element of the trial vector has an independent probability of being replaced by the corresponding element of the mutant vector.

\[U'_i = \begin{cases} V_i, & \text{if a random number} \ \sim U(0, 1) \leq CR \ \text{(Crossover Rate)} \\ U_i, & \text{otherwise} \end{cases} \]

Exponential Crossover:

In exponential crossover, the trial vector is constructed by selecting a random starting point and copying elements from the mutant vector with a certain probability. The probability decreases exponentially with the distance from the starting point. This strategy introduces a correlation between neighboring elements in the trial vector.

  4. Selection:
    • Evaluate the fitness of the trial vector obtained from the crossover.
    • Replace the target vector with the trial vector if its fitness is better.
  5. Termination:
    • Repeat the mutation, crossover, and selection steps for a predefined number of generations or until convergence criteria are met.
  6. Result:
    • The algorithm returns the best-found solution after the specified number of iterations.

The key parameters in DE include the population size, crossover probability, and the scaling factor. Tweak these parameters based on the characteristics of the optimization problem for optimal performance.
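To make the mutation and binomial crossover steps concrete, here is a small illustrative sketch of how one trial vector could be produced (not SciPy's implementation; the population size, scaling factor, and crossover rate are assumed example values):

import numpy as np

rng = np.random.default_rng(0)
pop = rng.uniform(-5.12, 5.12, size=(10, 4))    # population of candidate solutions
F_scale, CR = 0.8, 0.7                          # scaling factor and crossover rate

target_idx = 0
# mutation: combine three distinct, randomly chosen individuals
r1, r2, r3 = rng.choice([i for i in range(len(pop)) if i != target_idx],
                        size=3, replace=False)
mutant = pop[r1] + F_scale * (pop[r2] - pop[r3])

# binomial crossover: take each component from the mutant with probability CR
mask = rng.random(pop.shape[1]) <= CR
trial = np.where(mask, mutant, pop[target_idx])
print(trial)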

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize

# Define the Rastrigin function
def rastrigin(x):
    A = 10
    return A * len(x) + sum([(xi**2 - A * np.cos(2 * np.pi * xi)) for xi in x])

# Create a grid for visualization
x_vals = np.linspace(-5.12, 5.12, 100)
y_vals = np.linspace(-5.12, 5.12, 100)
X, Y = np.meshgrid(x_vals, y_vals)
Z = rastrigin(np.vstack([X.ravel(), Y.ravel()]))

# Reshape Z to match the shape of X and Y
Z = Z.reshape(X.shape)

# Plot the Rastrigin function
plt.contour(X, Y, Z, levels=50, cmap='viridis', label='Rastrigin Function')

# Initial guess (starting point for the optimization)
initial_guess = (4,3,4,2)

# Define the bounds for each variable in the Rastrigin function
bounds = [(-5.12, 5.12)] * 4  # 4D problem, each variable has bounds (-5.12, 5.12)

# Run the minimize function
result = minimize(rastrigin, initial_guess, bounds=bounds, method='L-BFGS-B')

# Extract the optimal solution
optimal_solution = result.x

# Plot the optimal solution
plt.scatter(optimal_solution[0], optimal_solution[1], color='red', marker='x', label='Optimal Solution')

# Add labels and legend
plt.title('Optimization of Rastrigin Function with Minimize')
plt.xlabel('Variable 1')
plt.ylabel('Variable 2')
plt.legend()

# Show the plot
plt.show()

# Print the optimization result
print("Optimal Solution:", optimal_solution)
print("Optimal Objective Value:", result.fun)

Optimal Solution: [-2.52869119e-08 -2.07795060e-08 -2.52869119e-08 -1.62721002e-08]
Optimal Objective Value: 3.907985046680551e-13
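For comparison with the local, gradient-based L-BFGS-B run above, the same 4D Rastrigin problem can be handed to SciPy's global differential_evolution solver; a minimal sketch reusing the rastrigin function defined above (population size, mutation, recombination, and seed are illustrative choices):

from scipy.optimize import differential_evolution

bounds = [(-5.12, 5.12)] * 4
result_de = differential_evolution(rastrigin, bounds,
                                   popsize=20,            # population size multiplier
                                   mutation=(0.5, 1.0),   # scaling factor F
                                   recombination=0.7,     # crossover rate CR
                                   seed=42)
print("DE solution:", result_de.x)
print("DE objective value:", result_de.fun)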

3.4.6 DIRECT

DIviding RECTangles (DIRECT) is a deterministic global optimization algorithm capable of minimizing a black-box function, with its variables subject to lower and upper bound constraints, by sampling potential solutions in the search space.
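A minimal sketch applying DIRECT to the eggholder problem from Section 3.4.2 (it reuses the eggholder function, bounds, and results dictionary defined above; scipy.optimize.direct is assumed to be available, i.e., SciPy 1.9 or newer):

from scipy import optimize

results['DIRECT'] = optimize.direct(eggholder, bounds)
print(results['DIRECT'].x, results['DIRECT'].fun)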

3.4.7 SHGO

SHGO stands for “simplicial homology global optimization”. It is considered appropriate for solving general purpose NLP and blackbox optimization problems to global optimality (low-dimensional problems).

3.4.8 Basin-hopping

Basin-hopping is a two-phase method that combines a global stepping algorithm with local minimization at each step. Designed to mimic the natural process of energy minimization of clusters of atoms, it works well for similar problems with “funnel-like, but rugged” energy landscapes.
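A minimal sketch on the same eggholder problem, where the bounds are passed to the local minimizer through minimizer_kwargs (the starting point, step size, and iteration count are illustrative; the global random displacements themselves are not bound-constrained):

import numpy as np
from scipy import optimize

rng = np.random.default_rng(42)
x_start = rng.uniform(-512, 512, size=2)
results['BH'] = optimize.basinhopping(
    eggholder, x_start, niter=200, stepsize=100,
    minimizer_kwargs={'method': 'L-BFGS-B', 'bounds': bounds},
    seed=42)
print(results['BH'].x, results['BH'].fun)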

3.5 Project: One-Mass Oscillator Optimization

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import minimize

3.5.1 Introduction

In this project, you will apply various optimization algorithms to fit a one-mass oscillator model to real-world data. The objective is to minimize the sum of the squared residuals between the model predictions and the observed amplitudes of a one-mass oscillator system across different frequencies.

3.5.2 One-Mass Oscillator Model

The one-mass oscillator is characterized by the following equation, representing the amplitudes of the system:

\[ V(\omega) = \frac{F}{\sqrt{(1 - \nu^2)^2 + 4D^2\nu^2}} \]

Here, \(\omega\) represents the angular frequency of the system, \(\nu\) is the ratio of the excitation frequency to the natural frequency, i.e., \[ \nu = \frac{\omega_{\text{err}}}{\omega_{\text{eig}}}, \] \(D\) is the damping ratio, and \(F\) is the force applied to the system.

The goal of the project is to determine the optimal values for the parameters \(\omega_{\text{eig}}\), \(D\), and \(F\) that result in the best fit of the one-mass oscillator model to the observed amplitudes.

3.5.3 The Real-World Data

There are two different measurements. J represents the measured frequencies, and N represents the measured amplitudes.

df1 = pd.read_pickle("./data/Hcf.d/df1.pkl")
df2 = pd.read_pickle("./data/Hcf.d/df2.pkl")
df1.describe()
                 J          N
count    33.000000  33.000000
mean   8148.750252  10.430887
std       6.870023   2.846469
min    8137.649210   4.698761
25%    8143.799766   8.319253
50%    8146.942295  10.152119
75%    8153.934051  13.407260
max    8162.504002  14.382749
df1.head()
                 J         N
14999  8162.504002  5.527511
15011  8156.384831  7.359789
15016  8159.199238  6.532958
15020  8159.200889  5.895933
15025  8153.934051  9.326749
# plot the data, i.e., the measured amplitudes as a function of the measured frequencies
plt.scatter(df1["J"], df1["N"], color="black", label="Spektralpunkte", zorder=5, s=10)
plt.xlabel("Frequency [Hz]")
plt.ylabel("Amplitude")
plt.show()

Note: Low amplitudes distort the fit and are negligible; therefore, we define a lower threshold for N.

threshold = 0.4
df1.sort_values("N")
max_N = max(df1["N"])
df1 = df1[df1["N"]>=threshold*max_N]

We extract the frequency value at the maximum amplitude. This serves as the initial value for one decision variable.

df_max=df1[df1["N"]==max(df1["N"])]
initial_Oeig = df_max["J"].values[0]
max_N = df_max["N"].values[0]

We also have to define the other two initial guesses for the damping ratio and the force, e.g.,

initial_D = 0.006
initial_F = 0.120
initial_values = [initial_Oeig, initial_D, initial_F]

Additionally, we define the bounds for the decision variables:

min_Oerr = min(df1["J"])
max_Oerr = max(df1["J"])
bounds = [(min_Oerr, max_Oerr), (0, 0.03), (0, 1)]

3.5.4 Objective Function

Then we define the objective function:

def one_mass_oscillator(params, Oerr) -> np.ndarray:
    # Model of a one-mass oscillator: returns the amplitudes of the system
    Oeig, D, F = params
    nue = Oerr / Oeig
    V = F / (np.sqrt((1 - nue**2) ** 2 + (4 * D**2 * nue**2)))
    return V

def objective_function(params, Oerr, amplitudes) -> float:
    # Objective function: sum of squared residuals between measured and modeled amplitudes
    return np.sum((amplitudes - one_mass_oscillator(params, Oerr)) ** 2)

We define the options for the optimizer and start the optimization process. Note that some of these options (e.g., stepmx and eta) belong to other solvers such as TNC; Nelder-Mead itself mainly uses xatol, fatol, maxiter, maxfev, and disp:

options = {
    "maxfun": 100000,
    "ftol": 1e-9,
    "xtol": 1e-9,
    "stepmx": 10,
    "eta": 0.25,
    "gtol": 1e-5}
J = np.array(df1["J"]) # measured frequency
N = np.array(df1["N"]) # measured amplitude
result = minimize(
    objective_function,
    initial_values,
    args=(J, N),
    method='Nelder-Mead',
    bounds=bounds,
    options=options)

3.5.5 Results

We can observe the results:

# map optimized values to variables
resonant_frequency = result.x[0]
D = result.x[1]
F = result.x[2]
# predict the resonant amplitude with the fitted one mass oscillator.
X_pred = np.linspace(min_Oerr, max_Oerr, 1000)
ypred_one_mass_oscillator = one_mass_oscillator(result.x, X_pred)
resonant_amplitude = max(ypred_one_mass_oscillator)
print(f"result: {result}")
result:        message: Optimization terminated successfully.
       success: True
        status: 0
           fun: 53.54144061205875
             x: [ 8.148e+03  7.435e-04  2.153e-02]
           nit: 93
          nfev: 169
 final_simplex: (array([[ 8.148e+03,  7.435e-04,  2.153e-02],
                       [ 8.148e+03,  7.435e-04,  2.153e-02],
                       [ 8.148e+03,  7.435e-04,  2.153e-02],
                       [ 8.148e+03,  7.435e-04,  2.153e-02]]), array([ 5.354e+01,  5.354e+01,  5.354e+01,  5.354e+01]))

Finally, we can plot the optimized fit and the real values:

plt.scatter(
    df1["J"],
    df1["N"],
    color="black",
    label="Spektralpunkte filtered",
    zorder=5,
    s=10,
)
# color the max amplitude point red
plt.scatter(
    initial_Oeig,
    max_N,
    color="red",
    label="Max Amplitude",
    zorder=5,
    s=10,
)

plt.plot(
        X_pred,
        ypred_one_mass_oscillator,
        label="Alpha",
        color="blue",
        linewidth=1,
    )
plt.scatter(
    resonant_frequency,
    resonant_amplitude,
    color="blue",
    label="Max Curve Fit",
    zorder=10,
    s=20,
)

3.5.6 Tasks

  • Investigate the trust region Conjugate Gradient method
  • Investigate the Differential Evolution Algorithm
  • Compare all the methods we introduced for the Rosenbrock (4d) and Rastrigin (2d) function

3.5.7 Task for the Project Work

Task
  • Experiment with various optimizers to identify the optimal parameter setup for the one-mass oscillator model and both data frames.
  • Please explain your thoughts and ideas, and subsequently compare the results obtained from different optimizers.

3.6 Jupyter Notebook

Note