import warnings
warnings.filterwarnings("ignore")
import json
import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import rosenbrock
3 Benchmarking SpotOptim with Sklearn Kriging (Matern Kernel) on 6D Rosenbrock and 10D Michalewicz Functions
These test functions were used during the Dagstuhl Seminar 25451 Bayesian Optimisation (Nov 02 – Nov 07, 2025).
This notebook demonstrates the use of SpotOptim with sklearn’s Gaussian Process Regressor as a surrogate model.
3.1 SpotOptim with Sklearn Kriging in 6 Dimensions: Rosenbrock Function
This section demonstrates how to use the SpotOptim class with sklearn’s Gaussian Process Regressor (using Matern kernel) as a surrogate on the 6-dimensional Rosenbrock function. We use a maximum of 100 function evaluations.
3.1.1 Define the 6D Rosenbrock Function
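The objective itself is imported from spotoptim below. For reference, the generalized d-dimensional Rosenbrock function is usually written as the sum over consecutive coordinate pairs; the following is a standalone sketch of that textbook definition, not necessarily spotoptim's exact implementation:

```python
import numpy as np

def rosenbrock_ref(x):
    """Reference d-dimensional Rosenbrock: sum_i 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))
```

The global minimum is 0 at x = (1, ..., 1), which lies inside the [-2, 2]^6 box used here; the function's long, curved valley makes it a standard hard case for optimizers.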
dim = 6
lower = np.full(dim, -2.0)
upper = np.full(dim, 2.0)
bounds = list(zip(lower, upper))
fun = rosenbrock
max_iter = 100
3.1.2 Set up SpotOptim Parameters
n_initial = dim
seed = 321
3.1.3 Sklearn Gaussian Process Regressor as Surrogate
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel
# Use a Matern kernel instead of the standard RBF kernel
kernel = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
length_scale=1.0,
length_scale_bounds=(1e-4, 1e2),
nu=2.5
)
surrogate = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=100)
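SpotOptim only requires the surrogate to expose the sklearn fit/predict interface. As a quick standalone sanity check (assuming nothing beyond the standard sklearn API; the data here is random and `n_restarts_optimizer` is reduced to 5 just to keep it fast), the GP can be fit on a handful of points and queried with predictive uncertainty:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(12, 6))  # 12 random points in the [-2, 2]^6 box
# Rosenbrock values at the sample points (textbook definition)
y = np.sum(100.0 * (X[:, 1:] - X[:, :-1] ** 2) ** 2 + (1.0 - X[:, :-1]) ** 2, axis=1)

kernel = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
    length_scale=1.0, length_scale_bounds=(1e-4, 1e2), nu=2.5
)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gp.fit(X, y)
mean, std = gp.predict(X[:3], return_std=True)  # posterior mean and std at query points
```

`return_std=True` is what makes the GP usable for acquisition functions such as expected improvement, which trade off the posterior mean against the posterior uncertainty.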
# Create SpotOptim instance with sklearn surrogate
opt_rosen = SpotOptim(
fun=fun,
bounds=bounds,
n_initial=n_initial,
max_iter=max_iter,
surrogate=surrogate,
seed=seed,
verbose=1
)
# Run optimization
result_rosen = opt_rosen.optimize()
TensorBoard logging disabled
Initial best: f(x) = 321.834153
Iteration 1: f(x) = 3523.035425
Iteration 2: f(x) = 1535.228517
Iteration 3: f(x) = 326.971112
Iteration 4: New best f(x) = 179.355789
Iteration 5: New best f(x) = 147.216335
Iteration 6: New best f(x) = 126.871631
Iteration 7: New best f(x) = 106.906387
Iteration 8: New best f(x) = 77.691981
Iteration 9: New best f(x) = 67.644842
Iteration 10: f(x) = 70.987451
Iteration 11: New best f(x) = 66.965514
Iteration 12: New best f(x) = 66.887652
Iteration 13: New best f(x) = 63.401612
Iteration 14: New best f(x) = 53.809466
Iteration 15: New best f(x) = 53.460560
Iteration 16: New best f(x) = 52.728344
Iteration 17: New best f(x) = 51.522574
Iteration 18: New best f(x) = 48.437550
Iteration 19: f(x) = 48.537948
Iteration 20: f(x) = 48.514781
Iteration 21: New best f(x) = 47.018582
Iteration 22: New best f(x) = 44.754208
Iteration 23: New best f(x) = 44.386191
Iteration 24: f(x) = 45.873518
Iteration 25: New best f(x) = 39.579355
Iteration 26: New best f(x) = 39.004678
Iteration 27: f(x) = 39.206442
Iteration 28: New best f(x) = 37.440635
Iteration 29: New best f(x) = 36.554744
Iteration 30: New best f(x) = 35.869808
Iteration 31: New best f(x) = 31.735313
Iteration 32: New best f(x) = 29.623885
Iteration 33: New best f(x) = 25.865943
Iteration 34: New best f(x) = 20.834654
Iteration 35: New best f(x) = 15.808128
Iteration 36: New best f(x) = 15.600024
Iteration 37: New best f(x) = 15.063367
Iteration 38: New best f(x) = 13.840863
Iteration 39: New best f(x) = 13.504484
Iteration 40: f(x) = 13.568332
Iteration 41: f(x) = 13.698873
Iteration 42: New best f(x) = 12.903883
Iteration 43: New best f(x) = 10.322010
Iteration 44: New best f(x) = 9.052717
Iteration 45: New best f(x) = 8.373291
Iteration 46: New best f(x) = 8.338980
Iteration 47: f(x) = 8.398416
Iteration 48: New best f(x) = 7.676804
Iteration 49: f(x) = 7.679488
Iteration 50: f(x) = 7.820387
Iteration 51: f(x) = 7.686720
Iteration 52: New best f(x) = 6.963112
Iteration 53: New best f(x) = 6.321993
Iteration 54: f(x) = 6.401167
Iteration 55: New best f(x) = 6.106900
Iteration 56: f(x) = 6.177208
Iteration 57: New best f(x) = 5.778451
Iteration 58: f(x) = 5.787909
Iteration 59: f(x) = 5.917825
Iteration 60: New best f(x) = 5.698457
Iteration 61: New best f(x) = 5.687317
Iteration 62: New best f(x) = 5.410187
Iteration 63: New best f(x) = 5.216013
Iteration 64: New best f(x) = 5.010958
Iteration 65: f(x) = 5.033852
Iteration 66: f(x) = 5.058485
Iteration 67: f(x) = 5.012669
Iteration 68: New best f(x) = 4.997682
Iteration 69: New best f(x) = 4.982265
Iteration 70: New best f(x) = 4.975478
Iteration 71: f(x) = 4.987280
Iteration 72: New best f(x) = 4.952667
Iteration 73: New best f(x) = 4.920744
Iteration 74: New best f(x) = 4.884347
Iteration 75: New best f(x) = 4.802916
Iteration 76: New best f(x) = 4.797966
Iteration 77: New best f(x) = 4.797966
Iteration 78: New best f(x) = 4.791954
Iteration 79: New best f(x) = 4.763491
Iteration 80: f(x) = 4.856343
Iteration 81: New best f(x) = 4.676197
Iteration 82: New best f(x) = 4.647480
Iteration 83: New best f(x) = 4.635803
Iteration 84: New best f(x) = 4.588024
Iteration 85: New best f(x) = 4.560769
Iteration 86: New best f(x) = 4.559662
Iteration 87: f(x) = 4.618137
Iteration 88: New best f(x) = 4.478985
Iteration 89: New best f(x) = 4.393477
Iteration 90: New best f(x) = 4.376783
Iteration 91: f(x) = 4.395855
Iteration 92: f(x) = 4.402738
Iteration 93: New best f(x) = 4.318500
Iteration 94: New best f(x) = 4.222778
print(f"[6D] Sklearn Kriging: min y = {result_rosen.fun:.4f} at x = {result_rosen.x}")
print(f"Number of function evaluations: {result_rosen.nfev}")
print(f"Number of iterations: {result_rosen.nit}")
[6D] Sklearn Kriging: min y = 4.2228 at x = [ 3.24505006e-01 1.09235158e-01 1.02732764e-02 1.12750960e-02
-7.51537444e-04 -2.90449492e-04]
Number of function evaluations: 100
Number of iterations: 94
3.1.4 Visualize Optimization Progress
import matplotlib.pyplot as plt
# Plot the optimization progress
plt.figure(figsize=(10, 6))
plt.semilogy(np.minimum.accumulate(opt_rosen.y_), 'b-', linewidth=2)
plt.xlabel('Function Evaluations', fontsize=12)
plt.ylabel('Best Objective Value (log scale)', fontsize=12)
plt.title('6D Rosenbrock: Sklearn Kriging Progress', fontsize=14)
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
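The progress curve above relies on `np.minimum.accumulate`, which turns the raw evaluation history into a monotone best-so-far trace; a minimal illustration with made-up values:

```python
import numpy as np

history = np.array([321.8, 3523.0, 1535.2, 179.4, 147.2, 190.0])
best_so_far = np.minimum.accumulate(history)  # running minimum over the evaluations
print(best_so_far)  # [321.8 321.8 321.8 179.4 147.2 147.2]
```

Plotting this trace on a log scale (as done above via `semilogy`) makes the late, small improvements visible that a linear axis would flatten out.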
3.1.5 Evaluation of Multiple Repeats
To perform 30 repeats and collect statistics:
# Perform 30 independent runs
n_repeats = 30
results = []
print(f"Running {n_repeats} independent optimizations...")
for i in range(n_repeats):
kernel_i = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
length_scale=1.0,
length_scale_bounds=(1e-4, 1e2),
nu=2.5
)
surrogate_i = GaussianProcessRegressor(kernel=kernel_i, n_restarts_optimizer=100)
opt_i = SpotOptim(
fun=fun,
bounds=bounds,
n_initial=n_initial,
max_iter=max_iter,
surrogate=surrogate_i,
seed=seed + i, # Different seed for each run
verbose=0
)
result_i = opt_i.optimize()
results.append(result_i.fun)
if (i + 1) % 10 == 0:
print(f" Completed {i + 1}/{n_repeats} runs")
# Compute statistics
mean_result = np.mean(results)
std_result = np.std(results)
min_result = np.min(results)
max_result = np.max(results)
print(f"\nResults over {n_repeats} runs:")
print(f" Mean of best values: {mean_result:.6f}")
print(f" Std of best values: {std_result:.6f}")
print(f" Min of best values: {min_result:.6f}")
print(f"  Max of best values: {max_result:.6f}")
3.2 SpotOptim with Sklearn Kriging in 10 Dimensions: Michalewicz Function
This section demonstrates how to use the SpotOptim class with sklearn’s Gaussian Process Regressor (using Matern kernel) as a surrogate on the 10-dimensional Michalewicz function. We use a maximum of 300 function evaluations.
3.2.1 Define the 10D Michalewicz Function
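The michalewicz objective is imported from spotoptim below. For reference, the Michalewicz function with the usual steepness parameter m = 10 is commonly written as follows; this is a sketch of the textbook definition, not necessarily spotoptim's exact implementation:

```python
import numpy as np

def michalewicz_ref(x, m=10):
    """Reference Michalewicz: -sum_i sin(x_i) * sin(i * x_i^2 / pi)^(2m), i = 1..d."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return float(-np.sum(np.sin(x) * np.sin(i * x ** 2 / np.pi) ** (2 * m)))
```

The large exponent 2m = 20 creates narrow, steep valleys separated by nearly flat plateaus, which is what makes this function difficult in 10 dimensions. Commonly cited reference values are a 2D minimum of about -1.8013 at (2.20, 1.57) and a 10D minimum of roughly -9.66.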
from spotoptim.function import michalewicz
dim = 10
lower = np.full(dim, 0.0)
upper = np.full(dim, np.pi)
bounds = list(zip(lower, upper))
fun = michalewicz
max_iter = 300
3.2.2 Set up SpotOptim Parameters
n_initial = dim
seed = 321
3.2.3 Sklearn Gaussian Process Regressor as Surrogate
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel
# Use a Matern kernel instead of the standard RBF kernel
kernel = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
length_scale=1.0,
length_scale_bounds=(1e-4, 1e2),
nu=2.5
)
surrogate = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=100)
# Create SpotOptim instance with sklearn surrogate
opt_micha = SpotOptim(
fun=fun,
bounds=bounds,
n_initial=n_initial,
max_iter=max_iter,
surrogate=surrogate,
seed=seed,
verbose=1
)
# Run optimization
result_micha = opt_micha.optimize()
TensorBoard logging disabled
Initial best: f(x) = -1.909129
Iteration 1: f(x) = -0.472894
Iteration 2: New best f(x) = -2.778175
Iteration 3: f(x) = -1.573171
Iteration 4: f(x) = -1.702335
Iteration 5: New best f(x) = -3.210187
Iteration 6: f(x) = -2.825227
Iteration 7: f(x) = -2.981091
Iteration 8: New best f(x) = -3.370768
Iteration 9: f(x) = -3.197512
Iteration 10: New best f(x) = -3.435585
Iteration 11: New best f(x) = -3.535777
Iteration 12: f(x) = -3.515957
Iteration 13: New best f(x) = -3.565795
Iteration 14: f(x) = -3.555973
Iteration 15: f(x) = -3.559112
Iteration 16: New best f(x) = -3.599575
Iteration 17: New best f(x) = -3.604897
Iteration 18: New best f(x) = -3.614352
Iteration 19: New best f(x) = -3.629860
Iteration 20: New best f(x) = -3.637273
Iteration 21: New best f(x) = -3.952062
Iteration 22: New best f(x) = -4.171054
Iteration 23: New best f(x) = -4.189829
Iteration 24: New best f(x) = -4.203839
Iteration 25: New best f(x) = -4.214087
Iteration 26: f(x) = -1.910095
Iteration 27: New best f(x) = -4.225941
Iteration 28: New best f(x) = -4.234162
Iteration 29: New best f(x) = -4.250291
Iteration 30: f(x) = -4.217819
Iteration 31: New best f(x) = -4.801168
Iteration 32: f(x) = -4.754788
Iteration 33: f(x) = -1.931731
Iteration 34: New best f(x) = -4.938163
Iteration 35: New best f(x) = -5.312599
Iteration 36: New best f(x) = -5.514364
Iteration 37: New best f(x) = -5.529772
Iteration 38: f(x) = -5.482236
Iteration 39: New best f(x) = -5.584472
Iteration 40: f(x) = -5.566771
Iteration 41: New best f(x) = -5.646320
Iteration 42: New best f(x) = -5.646606
Iteration 43: f(x) = -5.645369
Iteration 44: New best f(x) = -5.657200
Iteration 45: New best f(x) = -5.675147
Iteration 46: New best f(x) = -5.675207
Iteration 47: New best f(x) = -5.699566
Iteration 48: f(x) = -1.941617
Iteration 49: New best f(x) = -5.764010
Iteration 50: New best f(x) = -5.764365
Iteration 51: f(x) = -5.752112
Iteration 52: New best f(x) = -5.835976
Iteration 53: f(x) = -5.830759
Iteration 54: New best f(x) = -5.844836
Iteration 55: New best f(x) = -5.856099
Iteration 56: New best f(x) = -5.859355
Iteration 57: f(x) = -5.853655
Iteration 58: New best f(x) = -5.933364
Iteration 59: New best f(x) = -6.064317
Iteration 60: New best f(x) = -6.140370
Iteration 61: New best f(x) = -6.141126
Iteration 62: New best f(x) = -6.222756
Iteration 63: New best f(x) = -6.225205
Iteration 64: f(x) = -6.223755
Iteration 65: f(x) = -6.221461
Iteration 66: New best f(x) = -6.239932
Iteration 67: New best f(x) = -6.249427
Iteration 68: New best f(x) = -6.250131
Iteration 69: New best f(x) = -6.255039
Iteration 70: New best f(x) = -6.264061
Iteration 71: New best f(x) = -6.382018
Iteration 72: f(x) = -6.310706
Iteration 73: New best f(x) = -6.395777
Iteration 74: f(x) = -6.395736
Iteration 75: New best f(x) = -6.418288
Iteration 76: New best f(x) = -6.443822
Iteration 77: New best f(x) = -6.448178
Iteration 78: New best f(x) = -6.450720
Iteration 79: New best f(x) = -6.455011
Iteration 80: f(x) = -6.454946
Iteration 81: f(x) = -6.452428
Iteration 82: New best f(x) = -6.463368
Iteration 83: New best f(x) = -6.468139
Iteration 84: New best f(x) = -6.485411
Iteration 85: New best f(x) = -6.490563
Iteration 86: New best f(x) = -6.493886
Iteration 87: f(x) = -6.491808
Iteration 88: f(x) = -6.491872
Iteration 89: New best f(x) = -6.509027
Iteration 90: f(x) = -6.505648
Iteration 91: New best f(x) = -6.520518
Iteration 92: New best f(x) = -6.547986
Iteration 93: New best f(x) = -6.578156
Iteration 94: New best f(x) = -6.612669
Iteration 95: New best f(x) = -6.628944
Iteration 96: New best f(x) = -6.671321
Iteration 97: New best f(x) = -6.699631
Iteration 98: New best f(x) = -6.701666
Iteration 99: New best f(x) = -6.707297
Iteration 100: New best f(x) = -6.709748
Iteration 101: New best f(x) = -6.713291
Iteration 102: New best f(x) = -6.727486
Iteration 103: f(x) = -6.721449
Iteration 104: f(x) = -6.726763
Iteration 105: New best f(x) = -6.752141
Iteration 106: New best f(x) = -6.753371
Iteration 107: f(x) = -6.745192
Iteration 108: New best f(x) = -6.833193
Iteration 109: f(x) = -6.822328
Iteration 110: New best f(x) = -6.851481
Iteration 111: New best f(x) = -6.855171
Iteration 112: f(x) = -6.843422
Iteration 113: New best f(x) = -7.048401
Iteration 114: New best f(x) = -7.509051
Iteration 115: f(x) = -7.415935
Iteration 116: f(x) = -7.480376
Iteration 117: New best f(x) = -7.524448
Iteration 118: f(x) = -7.523788
Iteration 119: f(x) = -7.504695
Iteration 120: New best f(x) = -7.567424
Iteration 121: f(x) = -7.566794
Iteration 122: New best f(x) = -7.577133
Iteration 123: New best f(x) = -7.604995
Iteration 124: New best f(x) = -7.606821
Iteration 125: New best f(x) = -7.640504
Iteration 126: New best f(x) = -7.640620
Iteration 127: New best f(x) = -7.643365
Iteration 128: New best f(x) = -7.647676
Iteration 129: f(x) = -7.645318
Iteration 130: New best f(x) = -7.656076
Iteration 131: f(x) = -7.652039
Iteration 132: New best f(x) = -7.667303
Iteration 133: New best f(x) = -7.669073
Iteration 134: New best f(x) = -7.670949
Iteration 135: New best f(x) = -7.672332
Iteration 136: New best f(x) = -7.672619
Iteration 137: f(x) = -7.672100
Iteration 138: f(x) = -7.671564
Iteration 139: New best f(x) = -7.673694
Iteration 140: f(x) = -7.673454
Iteration 141: New best f(x) = -7.674771
Iteration 142: f(x) = -7.674679
Iteration 143: f(x) = -7.674536
Iteration 144: New best f(x) = -7.675900
Iteration 145: New best f(x) = -7.675931
Iteration 146: f(x) = -7.675706
Iteration 147: f(x) = -7.675830
Iteration 148: New best f(x) = -7.676121
Iteration 149: f(x) = -7.676018
Iteration 150: New best f(x) = -7.676288
Iteration 151: New best f(x) = -7.676312
Iteration 152: New best f(x) = -7.676317
Iteration 153: New best f(x) = -7.676328
Iteration 154: New best f(x) = -7.676380
Iteration 155: New best f(x) = -7.676630
Iteration 156: New best f(x) = -7.677188
Iteration 157: New best f(x) = -7.677315
Iteration 158: New best f(x) = -7.677360
Iteration 159: New best f(x) = -7.678104
Iteration 160: New best f(x) = -7.678976
Iteration 161: New best f(x) = -7.679012
Iteration 162: f(x) = -7.678909
Iteration 163: New best f(x) = -7.679537
Iteration 164: f(x) = -7.679520
Iteration 165: New best f(x) = -7.679637
Iteration 166: New best f(x) = -7.679680
Iteration 167: New best f(x) = -7.680459
Iteration 168: New best f(x) = -7.680527
Iteration 169: New best f(x) = -7.680752
Iteration 170: New best f(x) = -7.681085
Iteration 171: New best f(x) = -7.681313
Iteration 172: New best f(x) = -7.681419
Iteration 173: New best f(x) = -7.681420
Iteration 174: New best f(x) = -7.681423
Iteration 175: New best f(x) = -7.681428
Iteration 176: New best f(x) = -7.681439
Iteration 177: New best f(x) = -7.681466
Iteration 178: New best f(x) = -7.681500
Iteration 179: f(x) = -7.681483
Iteration 180: New best f(x) = -7.681545
Iteration 181: f(x) = -7.681544
Iteration 182: New best f(x) = -7.681572
Iteration 183: New best f(x) = -7.681647
Iteration 184: New best f(x) = -7.681650
Iteration 185: f(x) = -7.681647
Iteration 186: New best f(x) = -7.681654
Iteration 187: New best f(x) = -7.681655
Iteration 188: f(x) = -7.681653
Iteration 189: New best f(x) = -7.681657
Iteration 190: New best f(x) = -7.681658
Iteration 191: New best f(x) = -7.681662
Iteration 192: New best f(x) = -7.681677
Iteration 193: New best f(x) = -7.681695
Iteration 194: New best f(x) = -7.681697
Iteration 195: f(x) = -7.681695
Iteration 196: f(x) = -7.681694
Iteration 197: f(x) = -7.681696
Iteration 198: New best f(x) = -7.681701
Iteration 199: New best f(x) = -7.681704
Iteration 200: New best f(x) = -7.681716
Iteration 201: New best f(x) = -7.681736
Iteration 202: New best f(x) = -7.681748
Iteration 203: f(x) = -7.681735
Iteration 204: New best f(x) = -7.681767
Iteration 205: New best f(x) = -7.681799
Iteration 206: New best f(x) = -7.681799
Iteration 207: New best f(x) = -7.681807
Iteration 208: New best f(x) = -7.681813
Iteration 209: New best f(x) = -7.681836
Iteration 210: New best f(x) = -7.681867
Iteration 211: New best f(x) = -7.681870
Iteration 212: New best f(x) = -7.681872
Iteration 213: New best f(x) = -7.681875
Iteration 214: New best f(x) = -7.681882
Iteration 215: New best f(x) = -7.681896
Iteration 216: New best f(x) = -7.681904
Iteration 217: New best f(x) = -7.681905
Iteration 218: f(x) = -7.681902
Iteration 219: New best f(x) = -7.681906
Iteration 220: New best f(x) = -7.681907
Iteration 221: f(x) = -7.681904
Iteration 222: f(x) = -7.681902
Iteration 223: New best f(x) = -7.681908
Iteration 224: New best f(x) = -7.681909
Iteration 225: f(x) = -7.681903
Iteration 226: f(x) = -7.681908
Iteration 227: New best f(x) = -7.681910
Iteration 228: f(x) = -7.681909
Iteration 229: f(x) = -7.681907
Iteration 230: New best f(x) = -7.681910
Iteration 231: New best f(x) = -7.681914
Iteration 232: New best f(x) = -7.681919
Iteration 233: New best f(x) = -7.681933
Iteration 234: New best f(x) = -7.681946
Iteration 235: New best f(x) = -7.681947
Iteration 236: f(x) = -7.681942
Iteration 237: New best f(x) = -7.681953
Iteration 238: New best f(x) = -7.681958
Iteration 239: New best f(x) = -7.681963
Iteration 240: New best f(x) = -7.681972
Iteration 241: f(x) = -7.637223
Iteration 242: New best f(x) = -7.681976
Iteration 243: New best f(x) = -7.681980
Iteration 244: New best f(x) = -7.681983
Iteration 245: New best f(x) = -7.681985
Iteration 246: f(x) = -7.681981
Iteration 247: New best f(x) = -7.681985
Iteration 248: f(x) = -7.681984
Iteration 249: f(x) = -7.681985
Iteration 250: f(x) = -7.681984
Iteration 251: f(x) = -7.681983
Iteration 252: f(x) = -7.681985
Iteration 253: f(x) = -7.681984
Iteration 254: New best f(x) = -7.681988
Iteration 255: f(x) = -7.681988
Iteration 256: New best f(x) = -7.681989
Iteration 257: New best f(x) = -7.681990
Iteration 258: f(x) = -7.681989
Iteration 259: f(x) = -7.681990
Iteration 260: f(x) = -7.681989
Iteration 261: New best f(x) = -7.681991
Iteration 262: f(x) = -7.681990
Iteration 263: f(x) = -7.681987
Iteration 264: New best f(x) = -7.681991
Iteration 265: New best f(x) = -7.681992
Iteration 266: New best f(x) = -7.681992
Iteration 267: f(x) = -7.681989
Iteration 268: New best f(x) = -7.681992
Iteration 269: New best f(x) = -7.681995
Iteration 270: New best f(x) = -7.681995
Iteration 271: New best f(x) = -7.681999
Iteration 272: New best f(x) = -7.681999
Iteration 273: New best f(x) = -7.682000
Iteration 274: f(x) = -7.681999
Iteration 275: f(x) = -7.681999
Iteration 276: New best f(x) = -7.682001
Iteration 277: f(x) = -7.681998
Iteration 278: f(x) = -7.681998
Iteration 279: f(x) = -7.681999
Iteration 280: f(x) = -7.682000
Iteration 281: f(x) = -7.682000
Iteration 282: f(x) = -7.681999
Iteration 283: f(x) = -7.682000
Iteration 284: f(x) = -7.682000
Iteration 285: New best f(x) = -7.682001
Iteration 286: f(x) = -7.682000
Iteration 287: f(x) = -7.682000
Iteration 288: f(x) = -7.682001
Iteration 289: f(x) = -7.681999
Iteration 290: f(x) = -7.681999
print(f"[10D] Sklearn Kriging: min y = {result_micha.fun:.4f} at x = {result_micha.x}")
print(f"Number of function evaluations: {result_micha.nfev}")
print(f"Number of iterations: {result_micha.nit}")
[10D] Sklearn Kriging: min y = -7.6820 at x = [2.20306576 2.71165168 2.21924592 2.48200604 2.62723929 2.02746899
2.22105713 1.36055049 1.28281231 1.21703564]
Number of function evaluations: 300
Number of iterations: 290
3.2.4 Visualize Optimization Progress
import matplotlib.pyplot as plt
# Plot the optimization progress
plt.figure(figsize=(10, 6))
plt.plot(np.minimum.accumulate(opt_micha.y_), 'b-', linewidth=2)
plt.xlabel('Function Evaluations', fontsize=12)
plt.ylabel('Best Objective Value', fontsize=12)
plt.title('10D Michalewicz: Sklearn Kriging Progress', fontsize=14)
plt.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
3.2.5 Evaluation of Multiple Repeats
To perform 30 repeats and collect statistics:
# Perform 30 independent runs
n_repeats = 30
results = []
print(f"Running {n_repeats} independent optimizations...")
for i in range(n_repeats):
kernel_i = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
length_scale=1.0,
length_scale_bounds=(1e-4, 1e2),
nu=2.5
)
surrogate_i = GaussianProcessRegressor(kernel=kernel_i, n_restarts_optimizer=100)
opt_i = SpotOptim(
fun=fun,
bounds=bounds,
n_initial=n_initial,
max_iter=max_iter,
surrogate=surrogate_i,
seed=seed + i, # Different seed for each run
verbose=0
)
result_i = opt_i.optimize()
results.append(result_i.fun)
if (i + 1) % 10 == 0:
print(f" Completed {i + 1}/{n_repeats} runs")
# Compute statistics
mean_result = np.mean(results)
std_result = np.std(results)
min_result = np.min(results)
max_result = np.max(results)
print(f"\nResults over {n_repeats} runs:")
print(f" Mean of best values: {mean_result:.6f}")
print(f" Std of best values: {std_result:.6f}")
print(f" Min of best values: {min_result:.6f}")
print(f"  Max of best values: {max_result:.6f}")
3.3 Comparison: SpotOptim vs SpotPython
The SpotOptim package provides a scipy-compatible interface for Bayesian optimization with the following key features:
- Scipy-compatible API: Returns OptimizeResult objects that work seamlessly with scipy’s optimization ecosystem
- Custom Surrogates: Supports any sklearn-compatible surrogate model (as demonstrated with GaussianProcessRegressor)
- Flexible Interface: Simplified parameter specification with bounds, n_initial, and max_iter
- Analytical Test Functions: Built-in test functions (rosenbrock, ackley, michalewicz) for benchmarking
The main differences from spotpython are:
- SpotOptim: Uses bounds, n_initial, and max_iter parameters with a scipy-style interface
- SpotPython: Uses fun_control, design_control, and surrogate_control with a more complex configuration
Both packages support custom surrogates and provide powerful Bayesian optimization capabilities.
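Because scipy's OptimizeResult is a dict subclass with attribute access, results can be inspected and post-processed like any scipy result. A minimal sketch using scipy directly (the result here is constructed by hand purely to illustrate the fields accessed in this notebook: x, fun, nfev, nit):

```python
import numpy as np
from scipy.optimize import OptimizeResult

# Hand-built result object, mimicking the fields read from SpotOptim results above
res = OptimizeResult(x=np.array([1.0, 1.0]), fun=0.0, nfev=100, nit=94, success=True)

print(res.fun)      # attribute access, as used throughout this notebook
print(res["nfev"])  # dict-style access also works, since OptimizeResult subclasses dict
```

This dual access style is what makes SpotOptim results interchangeable with those returned by scipy.optimize.minimize in downstream analysis code.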
3.4 Summary
This notebook demonstrated how to:
- Use SpotOptim with sklearn’s Gaussian Process Regressor (Matern kernel) as a surrogate
- Optimize the 6D Rosenbrock function with 100 evaluations
- Optimize the 10D Michalewicz function with 300 evaluations
- Visualize optimization progress
- Perform multiple independent runs for statistical analysis
The results show that SpotOptim with sklearn surrogates provides effective Bayesian optimization for challenging benchmark functions.
3.5 Jupyter Notebook
- The Jupyter notebook for this chapter is available on GitHub in the Sequential Parameter Optimization Cookbook repository