5  Benchmarking SpotOptim with Sklearn Kriging (Matern Kernel) on 6D Rosenbrock and 10D Michalewicz Functions

Note

These test functions were used during the Dagstuhl Seminar 25451 "Bayesian Optimisation" (Nov 02 – Nov 07, 2025).

This notebook demonstrates the use of SpotOptim with sklearn’s Gaussian Process Regressor as a surrogate model.

5.1 SpotOptim with Sklearn Kriging in 6 Dimensions: Rosenbrock Function

This section demonstrates how to use the SpotOptim class with sklearn’s Gaussian Process Regressor (using a Matern kernel) as a surrogate on the 6-dimensional Rosenbrock function. We use a maximum of 100 function evaluations.

import warnings
warnings.filterwarnings("ignore")  # suppress warnings, e.g. from GP hyperparameter fitting
import json  # for optionally saving results to disk
import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import rosenbrock

5.1.1 Define the 6D Rosenbrock Function
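For reference, the d-dimensional Rosenbrock function is

f(x) = \sum_{i=1}^{d-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right],

with global minimum f(x^*) = 0 at x^* = (1, \dots, 1). Its long, gently sloping valley makes it a standard stress test for optimizers.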

dim = 6
lower = np.full(dim, -2.0)
upper = np.full(dim, 2.0)
bounds = list(zip(lower, upper))  # scipy-style list of (lower, upper) pairs
fun = rosenbrock
max_iter = 100  # total budget of function evaluations

5.1.2 Set up SpotOptim Parameters

n_initial = dim  # size of the initial design: one point per dimension
seed = 321       # random seed for reproducibility

5.1.3 Sklearn Gaussian Process Regressor as Surrogate
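The Matern kernel with smoothness parameter \nu = 2.5, scaled by a constant (signal variance) kernel as in the code below, has the form

k(r) = \sigma^2 \left( 1 + \frac{\sqrt{5}\,r}{\ell} + \frac{5 r^2}{3 \ell^2} \right) \exp\!\left( -\frac{\sqrt{5}\,r}{\ell} \right), \quad r = \lVert x - x' \rVert,

where \ell is the length scale and \sigma^2 the constant factor. It assumes less smoothness than the RBF kernel and is a common default for Bayesian optimization.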

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

# Use a Matern kernel instead of the standard RBF kernel
kernel = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
    length_scale=1.0, 
    length_scale_bounds=(1e-4, 1e2), 
    nu=2.5
)
surrogate = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=100)

# Create SpotOptim instance with sklearn surrogate
opt_rosen = SpotOptim(
    fun=fun,
    bounds=bounds,
    n_initial=n_initial,
    max_iter=max_iter,
    surrogate=surrogate,
    seed=seed,
    verbose=1
)

# Run optimization
result_rosen = opt_rosen.optimize()
TensorBoard logging disabled
Initial best: f(x) = 321.834153
Iter 4 | Best: 179.356120 | Rate: 0.25 | Evals: 10.0%
Iter 5 | Best: 147.216512 | Rate: 0.40 | Evals: 11.0%
Iter 6 | Best: 126.879058 | Rate: 0.50 | Evals: 12.0%
Iter 7 | Best: 106.910445 | Rate: 0.57 | Evals: 13.0%
Iter 8 | Best: 77.690090 | Rate: 0.62 | Evals: 14.0%
Iter 9 | Best: 67.650765 | Rate: 0.67 | Evals: 15.0%
Iter 11 | Best: 66.959383 | Rate: 0.64 | Evals: 17.0%
Iter 12 | Best: 66.886607 | Rate: 0.67 | Evals: 18.0%
Iter 13 | Best: 63.396091 | Rate: 0.69 | Evals: 19.0%
Iter 14 | Best: 53.830939 | Rate: 0.71 | Evals: 20.0%
Iter 15 | Best: 53.480172 | Rate: 0.73 | Evals: 21.0%
Iter 16 | Best: 52.741893 | Rate: 0.75 | Evals: 22.0%
Iter 17 | Best: 51.637049 | Rate: 0.76 | Evals: 23.0%
Iter 18 | Best: 48.385433 | Rate: 0.78 | Evals: 24.0%
Iter 20 | Best: 48.057586 | Rate: 0.75 | Evals: 26.0%
Iter 22 | Best: 47.104651 | Rate: 0.73 | Evals: 28.0%
Iter 23 | Best: 45.850796 | Rate: 0.74 | Evals: 29.0%
Iter 24 | Best: 45.062114 | Rate: 0.75 | Evals: 30.0%
Iter 25 | Best: 43.538121 | Rate: 0.76 | Evals: 31.0%
Iter 26 | Best: 43.468547 | Rate: 0.77 | Evals: 32.0%
Iter 28 | Best: 39.919635 | Rate: 0.75 | Evals: 34.0%
Iter 29 | Best: 39.496653 | Rate: 0.76 | Evals: 35.0%
Iter 30 | Best: 39.003451 | Rate: 0.77 | Evals: 36.0%
Iter 32 | Best: 37.363045 | Rate: 0.75 | Evals: 38.0%
Iter 33 | Best: 29.875362 | Rate: 0.76 | Evals: 39.0%
Iter 34 | Best: 28.627645 | Rate: 0.76 | Evals: 40.0%
Iter 35 | Best: 26.549126 | Rate: 0.77 | Evals: 41.0%
Iter 36 | Best: 26.451449 | Rate: 0.78 | Evals: 42.0%
Iter 37 | Best: 26.352199 | Rate: 0.78 | Evals: 43.0%
Iter 39 | Best: 22.726275 | Rate: 0.77 | Evals: 45.0%
Iter 40 | Best: 22.075592 | Rate: 0.78 | Evals: 46.0%
Iter 41 | Best: 17.693068 | Rate: 0.78 | Evals: 47.0%
Iter 42 | Best: 15.583451 | Rate: 0.79 | Evals: 48.0%
Iter 44 | Best: 15.127598 | Rate: 0.77 | Evals: 50.0%
Iter 45 | Best: 14.524074 | Rate: 0.78 | Evals: 51.0%
Iter 46 | Best: 13.422546 | Rate: 0.78 | Evals: 52.0%
Iter 47 | Best: 13.018001 | Rate: 0.79 | Evals: 53.0%
Iter 48 | Best: 11.472323 | Rate: 0.79 | Evals: 54.0%
Iter 49 | Best: 6.578769 | Rate: 0.80 | Evals: 55.0%
Iter 50 | Best: 6.463796 | Rate: 0.80 | Evals: 56.0%
Iter 51 | Best: 6.330959 | Rate: 0.80 | Evals: 57.0%
Iter 52 | Best: 6.187917 | Rate: 0.81 | Evals: 58.0%
Iter 54 | Best: 6.181832 | Rate: 0.80 | Evals: 60.0%
Iter 55 | Best: 6.040090 | Rate: 0.80 | Evals: 61.0%
Iter 58 | Best: 5.772204 | Rate: 0.78 | Evals: 64.0%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 61 | Best: 5.714682 | Rate: 0.75 | Evals: 67.0%
Iter 63 | Best: 5.687604 | Rate: 0.75 | Evals: 69.0%
Iter 64 | Best: 5.653711 | Rate: 0.75 | Evals: 70.0%
Iter 65 | Best: 5.638085 | Rate: 0.75 | Evals: 71.0%
Iter 66 | Best: 5.636186 | Rate: 0.76 | Evals: 72.0%
Iter 67 | Best: 5.627385 | Rate: 0.76 | Evals: 73.0%
Iter 68 | Best: 5.620197 | Rate: 0.76 | Evals: 74.0%
Iter 69 | Best: 5.611676 | Rate: 0.77 | Evals: 75.0%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 72 | Best: 5.592104 | Rate: 0.75 | Evals: 78.0%
Iter 74 | Best: 5.590547 | Rate: 0.74 | Evals: 80.0%
Iter 75 | Best: 5.587248 | Rate: 0.75 | Evals: 81.0%
Iter 78 | Best: 5.585965 | Rate: 0.73 | Evals: 84.0%
Iter 79 | Best: 5.576774 | Rate: 0.73 | Evals: 85.0%
Iter 80 | Best: 5.564790 | Rate: 0.74 | Evals: 86.0%
Iter 81 | Best: 5.562889 | Rate: 0.74 | Evals: 87.0%
Iter 84 | Best: 5.555881 | Rate: 0.73 | Evals: 90.0%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 87 | Best: 5.540122 | Rate: 0.71 | Evals: 93.0%
Iter 88 | Best: 5.520320 | Rate: 0.72 | Evals: 94.0%
Iter 90 | Best: 5.512031 | Rate: 0.71 | Evals: 96.0%
Iter 91 | Best: 5.511462 | Rate: 0.71 | Evals: 97.0%
Iter 92 | Best: 5.479503 | Rate: 0.72 | Evals: 98.0%
print(f"[6D] Sklearn Kriging: min y = {result_rosen.fun:.4f} at x = {result_rosen.x}")
print(f"Number of function evaluations: {result_rosen.nfev}")
print(f"Number of iterations: {result_rosen.nit}")
[6D] Sklearn Kriging: min y = 5.4795 at x = [-3.08468017e-01  1.10645468e-01  3.08535031e-02  1.15996006e-02
  7.87231128e-03 -3.12567231e-05]
Number of function evaluations: 100
Number of iterations: 94
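The global minimum of the Rosenbrock function is f(x^*) = 0 at x^* = (1, \dots, 1), so the run above has stalled on the flat valley floor near the origin (where f \approx dim - 1 = 5) rather than at the optimum. A quick check of the distance to the optimum, reusing the objects defined above:

# Distance of the returned point to the known optimum (1, ..., 1)
x_star = np.ones(dim)
print(f"Distance to optimum: {np.linalg.norm(result_rosen.x - x_star):.4f}")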

5.1.4 Visualize Optimization Progress

import matplotlib.pyplot as plt

# Plot the optimization progress
plt.figure(figsize=(10, 6))
plt.semilogy(np.minimum.accumulate(opt_rosen.y_), 'b-', linewidth=2)  # running best; log scale suits the positive Rosenbrock values
plt.xlabel('Function Evaluations', fontsize=12)
plt.ylabel('Best Objective Value (log scale)', fontsize=12)
plt.title('6D Rosenbrock: Sklearn Kriging Progress', fontsize=14)
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

5.1.5 Evaluation of Multiple Repeats

To perform 30 repeats and collect statistics:

# Perform 30 independent runs
n_repeats = 30
results = []

print(f"Running {n_repeats} independent optimizations...")
for i in range(n_repeats):
    kernel_i = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
        length_scale=1.0, 
        length_scale_bounds=(1e-4, 1e2), 
        nu=2.5
    )
    surrogate_i = GaussianProcessRegressor(kernel=kernel_i, n_restarts_optimizer=100)
    
    opt_i = SpotOptim(
        fun=fun,
        bounds=bounds,
        n_initial=n_initial,
        max_iter=max_iter,
        surrogate=surrogate_i,
        seed=seed + i,  # Different seed for each run
        verbose=0
    )
    
    result_i = opt_i.optimize()
    results.append(result_i.fun)
    
    if (i + 1) % 10 == 0:
        print(f"  Completed {i + 1}/{n_repeats} runs")

# Compute statistics
mean_result = np.mean(results)
std_result = np.std(results)
min_result = np.min(results)
max_result = np.max(results)

print(f"\nResults over {n_repeats} runs:")
print(f"  Mean of best values: {mean_result:.6f}")
print(f"  Std of best values:  {std_result:.6f}")
print(f"  Min of best values:  {min_result:.6f}")
print(f"  Max of best values:  {max_result:.6f}")

5.2 SpotOptim with Sklearn Kriging in 10 Dimensions: Michalewicz Function

This section demonstrates how to use the SpotOptim class with sklearn’s Gaussian Process Regressor (using a Matern kernel) as a surrogate on the 10-dimensional Michalewicz function. We use a maximum of 300 function evaluations.

5.2.1 Define the 10D Michalewicz Function
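The Michalewicz function is

f(x) = -\sum_{i=1}^{d} \sin(x_i)\,\sin^{2m}\!\left( \frac{i\,x_i^2}{\pi} \right), \quad x_i \in [0, \pi],

with steepness parameter m = 10 in the standard setting. It is highly multimodal with steep, narrow valleys; for d = 10 the best known value is approximately -9.66.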

from spotoptim.function import michalewicz

dim = 10
lower = np.full(dim, 0.0)
upper = np.full(dim, np.pi)  # Michalewicz is defined on [0, pi]^dim
bounds = list(zip(lower, upper))
fun = michalewicz
max_iter = 300  # total budget of function evaluations

5.2.2 Set up SpotOptim Parameters

n_initial = dim  # size of the initial design: one point per dimension
seed = 321       # random seed for reproducibility

5.2.3 Sklearn Gaussian Process Regressor as Surrogate

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

# Use a Matern kernel instead of the standard RBF kernel
kernel = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
    length_scale=1.0, 
    length_scale_bounds=(1e-4, 1e2), 
    nu=2.5
)
surrogate = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=100)

# Create SpotOptim instance with sklearn surrogate
opt_micha = SpotOptim(
    fun=fun,
    bounds=bounds,
    n_initial=n_initial,
    max_iter=max_iter,
    surrogate=surrogate,
    seed=seed,
    verbose=1
)

# Run optimization
result_micha = opt_micha.optimize()
TensorBoard logging disabled
Initial best: f(x) = -1.909129
Iter 2 | Best: -2.778262 | Rate: 0.50 | Evals: 4.0%
Iter 5 | Best: -3.211991 | Rate: 0.40 | Evals: 5.0%
Iter 8 | Best: -3.350611 | Rate: 0.38 | Evals: 6.0%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 11 | Best: -3.420176 | Rate: 0.36 | Evals: 7.0%
Iter 12 | Best: -3.477838 | Rate: 0.42 | Evals: 7.3%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 16 | Best: -3.541510 | Rate: 0.38 | Evals: 8.7%
Iter 22 | Best: -3.623121 | Rate: 0.32 | Evals: 10.7%
Iter 24 | Best: -3.645627 | Rate: 0.33 | Evals: 11.3%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 26 | Best: -3.879398 | Rate: 0.35 | Evals: 12.0%
Iter 27 | Best: -4.717449 | Rate: 0.37 | Evals: 12.3%
Iter 29 | Best: -4.755992 | Rate: 0.38 | Evals: 13.0%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 32 | Best: -4.760523 | Rate: 0.38 | Evals: 14.0%
Iter 34 | Best: -4.855653 | Rate: 0.38 | Evals: 14.7%
Iter 35 | Best: -4.909507 | Rate: 0.40 | Evals: 15.0%
Iter 36 | Best: -4.915291 | Rate: 0.42 | Evals: 15.3%
Iter 38 | Best: -4.959107 | Rate: 0.42 | Evals: 16.0%
Iter 41 | Best: -4.959408 | Rate: 0.41 | Evals: 17.0%
Iter 43 | Best: -4.989346 | Rate: 0.42 | Evals: 17.7%
Iter 44 | Best: -4.996114 | Rate: 0.43 | Evals: 18.0%
Iter 45 | Best: -5.008915 | Rate: 0.44 | Evals: 18.3%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 48 | Best: -5.136332 | Rate: 0.44 | Evals: 19.3%
Iter 50 | Best: -5.383622 | Rate: 0.44 | Evals: 20.0%
Iter 54 | Best: -5.783182 | Rate: 0.43 | Evals: 21.3%
Iter 56 | Best: -5.796548 | Rate: 0.43 | Evals: 22.0%
Iter 57 | Best: -5.844872 | Rate: 0.44 | Evals: 22.3%
Iter 58 | Best: -5.891558 | Rate: 0.45 | Evals: 22.7%
Iter 60 | Best: -5.953402 | Rate: 0.45 | Evals: 23.3%
Iter 61 | Best: -5.954577 | Rate: 0.46 | Evals: 23.7%
Iter 66 | Best: -6.039084 | Rate: 0.44 | Evals: 25.3%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 71 | Best: -6.046120 | Rate: 0.42 | Evals: 27.0%
Iter 74 | Best: -6.066862 | Rate: 0.42 | Evals: 28.0%
Iter 75 | Best: -6.090692 | Rate: 0.43 | Evals: 28.3%
Iter 80 | Best: -6.145364 | Rate: 0.41 | Evals: 30.0%
Iter 81 | Best: -6.168414 | Rate: 0.42 | Evals: 30.3%
Iter 85 | Best: -6.228245 | Rate: 0.41 | Evals: 31.7%
Iter 86 | Best: -6.236635 | Rate: 0.42 | Evals: 32.0%
Iter 90 | Best: -6.308253 | Rate: 0.41 | Evals: 33.3%
Iter 91 | Best: -6.319316 | Rate: 0.42 | Evals: 33.7%
Iter 97 | Best: -6.322203 | Rate: 0.40 | Evals: 35.7%
Iter 104 | Best: -6.337577 | Rate: 0.39 | Evals: 38.0%
Iter 109 | Best: -6.338079 | Rate: 0.38 | Evals: 39.7%
Iter 111 | Best: -6.345416 | Rate: 0.38 | Evals: 40.3%
Iter 114 | Best: -6.349389 | Rate: 0.38 | Evals: 41.3%
Iter 118 | Best: -6.351055 | Rate: 0.38 | Evals: 42.7%
Iter 121 | Best: -6.354034 | Rate: 0.39 | Evals: 43.7%
Iter 129 | Best: -6.356517 | Rate: 0.35 | Evals: 46.3%
Iter 137 | Best: -6.359396 | Rate: 0.32 | Evals: 49.0%
Iter 139 | Best: -6.360180 | Rate: 0.32 | Evals: 49.7%
Iter 142 | Best: -6.362200 | Rate: 0.32 | Evals: 50.7%
Iter 146 | Best: -6.364515 | Rate: 0.30 | Evals: 52.0%
Iter 153 | Best: -6.368800 | Rate: 0.29 | Evals: 54.3%
Iter 157 | Best: -6.368866 | Rate: 0.27 | Evals: 55.7%
Iter 161 | Best: -6.369591 | Rate: 0.25 | Evals: 57.0%
Iter 164 | Best: -6.369619 | Rate: 0.26 | Evals: 58.0%
Iter 170 | Best: -6.370081 | Rate: 0.26 | Evals: 60.0%
Iter 171 | Best: -6.370742 | Rate: 0.26 | Evals: 60.3%
Iter 172 | Best: -6.371327 | Rate: 0.27 | Evals: 60.7%
Iter 182 | Best: -6.371413 | Rate: 0.24 | Evals: 64.0%
Iter 183 | Best: -6.371597 | Rate: 0.25 | Evals: 64.3%
Iter 186 | Best: -6.373072 | Rate: 0.24 | Evals: 65.3%
Iter 188 | Best: -6.375814 | Rate: 0.25 | Evals: 66.0%
Iter 190 | Best: -6.375829 | Rate: 0.25 | Evals: 66.7%
Iter 195 | Best: -6.376123 | Rate: 0.25 | Evals: 68.3%
Iter 196 | Best: -6.376579 | Rate: 0.26 | Evals: 68.7%
Iter 199 | Best: -6.377063 | Rate: 0.26 | Evals: 69.7%
Iter 200 | Best: -6.378141 | Rate: 0.27 | Evals: 70.0%
Iter 202 | Best: -6.378195 | Rate: 0.28 | Evals: 70.7%
Iter 210 | Best: -6.379411 | Rate: 0.27 | Evals: 73.3%
Iter 213 | Best: -6.390198 | Rate: 0.27 | Evals: 74.3%
Iter 217 | Best: -6.394848 | Rate: 0.27 | Evals: 75.7%
Iter 218 | Best: -6.396393 | Rate: 0.27 | Evals: 76.0%
Iter 220 | Best: -6.397107 | Rate: 0.28 | Evals: 76.7%
Iter 222 | Best: -6.399157 | Rate: 0.28 | Evals: 77.3%
Iter 227 | Best: -6.402794 | Rate: 0.29 | Evals: 79.0%
Iter 228 | Best: -6.404218 | Rate: 0.30 | Evals: 79.3%
Iter 230 | Best: -6.415582 | Rate: 0.30 | Evals: 80.0%
Iter 231 | Best: -6.432140 | Rate: 0.31 | Evals: 80.3%
Iter 232 | Best: -6.443829 | Rate: 0.32 | Evals: 80.7%
Iter 233 | Best: -6.444079 | Rate: 0.33 | Evals: 81.0%
Iter 234 | Best: -6.453086 | Rate: 0.34 | Evals: 81.3%
Iter 236 | Best: -6.461187 | Rate: 0.35 | Evals: 82.0%
Iter 241 | Best: -6.461401 | Rate: 0.34 | Evals: 83.7%
Iter 243 | Best: -6.470061 | Rate: 0.34 | Evals: 84.3%
Iter 244 | Best: -6.481072 | Rate: 0.35 | Evals: 84.7%
Iter 245 | Best: -6.494963 | Rate: 0.36 | Evals: 85.0%
Iter 248 | Best: -6.496383 | Rate: 0.36 | Evals: 86.0%
Iter 250 | Best: -6.540514 | Rate: 0.37 | Evals: 86.7%
Iter 252 | Best: -6.551032 | Rate: 0.38 | Evals: 87.3%
Iter 253 | Best: -6.577740 | Rate: 0.38 | Evals: 87.7%
Iter 254 | Best: -6.615227 | Rate: 0.39 | Evals: 88.0%
Iter 255 | Best: -6.616279 | Rate: 0.40 | Evals: 88.3%
Iter 256 | Best: -6.617377 | Rate: 0.41 | Evals: 88.7%
Iter 257 | Best: -6.627917 | Rate: 0.41 | Evals: 89.0%
Iter 258 | Best: -6.633127 | Rate: 0.42 | Evals: 89.3%
Iter 259 | Best: -6.633249 | Rate: 0.43 | Evals: 89.7%
Iter 262 | Best: -6.699553 | Rate: 0.43 | Evals: 90.7%
Iter 268 | Best: -6.743867 | Rate: 0.43 | Evals: 92.7%
Iter 275 | Best: -6.763458 | Rate: 0.41 | Evals: 95.0%
Iter 276 | Best: -6.769384 | Rate: 0.42 | Evals: 95.3%
Iter 277 | Best: -6.795373 | Rate: 0.43 | Evals: 95.7%
Iter 282 | Best: -6.795771 | Rate: 0.43 | Evals: 97.3%
Iter 287 | Best: -6.804354 | Rate: 0.42 | Evals: 99.0%
print(f"[10D] Sklearn Kriging: min y = {result_micha.fun:.4f} at x = {result_micha.x}")
print(f"Number of function evaluations: {result_micha.nfev}")
print(f"Number of iterations: {result_micha.nit}")
[10D] Sklearn Kriging: min y = -6.8044 at x = [2.2149783  2.72651221 2.21733048 2.48637772 2.62638677 1.95663912
 2.2205716  1.36304701 1.28275548 1.21655994]
Number of function evaluations: 300
Number of iterations: 290
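Relative to the best known value of roughly -9.66 for the 10-dimensional Michalewicz function, the run recovers much of the attainable range without locating the global optimum. A quick gap calculation (a minimal sketch; the reference value is taken from the literature, not computed here):

# Gap between this run and the best known 10D Michalewicz value
best_known = -9.66015  # literature value
print(f"Gap to best known value: {result_micha.fun - best_known:.4f}")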

5.2.4 Visualize Optimization Progress

import matplotlib.pyplot as plt

# Plot the optimization progress
plt.figure(figsize=(10, 6))
plt.plot(np.minimum.accumulate(opt_micha.y_), 'b-', linewidth=2)  # running best; linear scale, since the values are negative
plt.xlabel('Function Evaluations', fontsize=12)
plt.ylabel('Best Objective Value', fontsize=12)
plt.title('10D Michalewicz: Sklearn Kriging Progress', fontsize=14)
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()

5.2.5 Evaluation of Multiple Repeats

To perform 30 repeats and collect statistics:

# Perform 30 independent runs
n_repeats = 30
results = []

print(f"Running {n_repeats} independent optimizations...")
for i in range(n_repeats):
    kernel_i = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
        length_scale=1.0, 
        length_scale_bounds=(1e-4, 1e2), 
        nu=2.5
    )
    surrogate_i = GaussianProcessRegressor(kernel=kernel_i, n_restarts_optimizer=100)
    
    opt_i = SpotOptim(
        fun=fun,
        bounds=bounds,
        n_initial=n_initial,
        max_iter=max_iter,
        surrogate=surrogate_i,
        seed=seed + i,  # Different seed for each run
        verbose=0
    )
    
    result_i = opt_i.optimize()
    results.append(result_i.fun)
    
    if (i + 1) % 10 == 0:
        print(f"  Completed {i + 1}/{n_repeats} runs")

# Compute statistics
mean_result = np.mean(results)
std_result = np.std(results)
min_result = np.min(results)
max_result = np.max(results)

print(f"\nResults over {n_repeats} runs:")
print(f"  Mean of best values: {mean_result:.6f}")
print(f"  Std of best values:  {std_result:.6f}")
print(f"  Min of best values:  {min_result:.6f}")
print(f"  Max of best values:  {max_result:.6f}")

5.3 Comparison: SpotOptim vs SpotPython

The SpotOptim package provides a scipy-compatible interface for Bayesian optimization with the following key features:

  1. Scipy-compatible API: Returns OptimizeResult objects that work seamlessly with scipy’s optimization ecosystem
  2. Custom Surrogates: Supports any sklearn-compatible surrogate model (as demonstrated with GaussianProcessRegressor)
  3. Flexible Interface: Simplified parameter specification with bounds, n_initial, and max_iter
  4. Analytical Test Functions: Built-in test functions (rosenbrock, ackley, michalewicz) for benchmarking; see the quick check after this list
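As a quick check of the built-in test functions, rosenbrock can be evaluated at its known optimum x = (1, ..., 1), where f = 0. This sketch assumes the built-in functions accept a 2D array of shape (n_points, dim) and return one value per row, matching how SpotOptim evaluates candidate batches; consult the spotoptim documentation if the call signature differs:

import numpy as np
from spotoptim.function import rosenbrock

# Assumed batch signature: (n_points, dim) -> (n_points,)
X_opt = np.ones((1, 6))
print(rosenbrock(X_opt))  # expected: [0.]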

The main differences from spotpython are:

  • SpotOptim: Uses bounds, n_initial, max_iter parameters with scipy-style interface
  • SpotPython: Uses fun_control, design_control, surrogate_control with more complex configuration

Both packages support custom surrogates and provide powerful Bayesian optimization capabilities.
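For orientation, the two call styles look roughly as follows; the SpotOptim pattern is the one used throughout this notebook, while the spotpython lines are an illustrative sketch whose parameter names should be checked against the spotpython documentation:

# SpotOptim: scipy-style interface (pattern only; running this starts a new optimization)
opt = SpotOptim(fun=fun, bounds=bounds, n_initial=n_initial, max_iter=max_iter)
result = opt.optimize()

# spotpython: configuration-object style (illustrative, not executed here)
# from spotpython.utils.init import fun_control_init
# from spotpython.spot import Spot
# fun_control = fun_control_init(lower=lower, upper=upper, fun_evals=max_iter)
# spot_tuner = Spot(fun=fun, fun_control=fun_control)
# spot_tuner.run()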

5.4 Summary

This notebook demonstrated how to:

  1. Use SpotOptim with sklearn’s Gaussian Process Regressor (Matern kernel) as a surrogate
  2. Optimize the 6D Rosenbrock function with 100 function evaluations
  3. Optimize the 10D Michalewicz function with 300 function evaluations
  4. Visualize optimization progress
  5. Perform multiple independent runs for statistical analysis

The results show that SpotOptim with sklearn surrogates provides effective Bayesian optimization for these challenging benchmarks: the 6D Rosenbrock value dropped from 321.83 to 5.48 within 100 evaluations, and the 10D Michalewicz function reached -6.80 within 300 evaluations.

5.5 Jupyter Notebook
