SpotOptim includes a built-in restart mechanism designed to help the optimizer escape local optima or recover from stagnation. This feature is particularly useful for difficult landscapes where the optimizer might get stuck in a suboptimal region.
8.1 Key Concepts
The restart mechanism monitors the optimization progress and triggers a complete reset (restart) of the optimization run if no improvement is observed for a specified number of iterations.
8.1.1 Parameters
Two key parameters control this behavior:
restart_after_n (int, default=100): The number of consecutive iterations with a success rate of 0.0 (no improvement) required to trigger a restart.
restart_inject_best (bool, default=True): If True, the best solution found in all previous runs is injected into the initial design of the new restart run. This ensures that the global search does not lose the best-known solution while exploring new regions.
8.2 How it Works
Monitoring: During optimization, SpotOptim tracks the success_rate (percentage of valid and improved points in the current window).
Triggering: If the success rate drops to 0.0 and stays there for restart_after_n consecutive iterations, the current run is terminated.
Restarting: A new optimization run is initialized.
A new random seed is generated (if running sequentially) to ensure a different random start.
A new initial design (LHS) is created.
Injection: If restart_inject_best=True, the overall best point found so far is added to this new initial design.
Aggregation: When the global max_iter or max_time is reached, results from all runs are aggregated. The final returned result corresponds to the best run found.
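The steps above can be sketched as a simplified control loop. The following is illustrative plain Python, not SpotOptim's actual internals: `toy_objective` and the random-walk proposal stand in for the surrogate-based search, and the stagnation counter plays the role of the success-rate window.

```python
import numpy as np

def toy_objective(x):
    """Multimodal 1D toy function with its global minimum at x = 0."""
    return x**2 + 10 * np.sin(x)**2

def restart_loop(max_evals=60, restart_after_n=5, inject_best=True, seed=0):
    """Illustrative restart control loop (not SpotOptim's implementation)."""
    rng = np.random.default_rng(seed)
    global_best_x, global_best_f = None, np.inf
    runs = []   # Aggregation: per-run results collected here
    evals = 0
    while evals < max_evals:
        # Restarting: a fresh random start for the new run ...
        x_best = rng.uniform(-5, 5)
        # Injection: ... unless we seed it with the best-known point.
        if inject_best and global_best_x is not None:
            x_best = global_best_x
        f_best = toy_objective(x_best)
        evals += 1
        no_improve = 0   # Monitoring: consecutive iterations without improvement
        while evals < max_evals:
            x = x_best + rng.normal(scale=0.5)   # stand-in for a surrogate proposal
            f = toy_objective(x)
            evals += 1
            if f < f_best:
                x_best, f_best, no_improve = x, f, 0
            else:
                no_improve += 1
            # Triggering: terminate this run after restart_after_n stagnant steps.
            if no_improve >= restart_after_n:
                break
        runs.append((x_best, f_best))
        if f_best < global_best_f:
            global_best_x, global_best_f = x_best, f_best
    # Aggregation: the returned result is the best over all runs.
    return global_best_x, global_best_f, runs
```

More than one entry in `runs` means at least one restart was triggered within the budget.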
8.3 Example: Triggering Restarts
In this example, we set restart_after_n to a very small value (5) to intentionally force restarts and demonstrate the mechanism. We use a multimodal function where getting stuck is possible.
```python
import numpy as np
from spotoptim import SpotOptim

def multimodal_function(X):
    """A simple 2D multimodal function (Ackley-like structure)"""
    X = np.atleast_2d(X)
    return (-20 * np.exp(-0.2 * np.sqrt(0.5 * np.sum(X**2, axis=1)))
            - np.exp(0.5 * np.sum(np.cos(2 * np.pi * X), axis=1))
            + 20 + np.exp(1))

# Configure optimizer with an aggressive restart strategy
optimizer = SpotOptim(
    fun=multimodal_function,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=50,          # Total global budget
    n_initial=5,
    restart_after_n=5,    # Restart after only 5 iterations of no improvement
    restart_inject_best=True,
    seed=42,
    verbose=True,         # Verbose output shows restart messages
)
result = optimizer.optimize()
```
You can access the results of each individual restart run via the restarts_results_ attribute.
```python
print(f"Total global evaluations: {result.nfev}")
print(f"Number of restarts performed: {len(optimizer.restarts_results_) - 1}")
print(f"Best value found globally: {result.fun:.6f}")
print("\nBreakdown by run:")
for i, res in enumerate(optimizer.restarts_results_):
    print(f"  Run {i+1}: {res.nfev} evals, Best: {res.fun:.6f}, Status: {res.message}")
```
```
Total global evaluations: 50
Number of restarts performed: 0
Best value found globally: 0.000725

Breakdown by run:
  Run 1: 50 evals, Best: 0.000725, Status: Optimization terminated: maximum evaluations (50) reached
Current function value: 0.000725
Iterations: 45
Function evaluations: 50
```
8.4 Example: Effect of restart_inject_best
The restart_inject_best parameter is crucial for efficiency. It ensures that “knowledge” is transferred between restarts.
True: The new run starts with the best point found so far included in its initial set. This allows the surrogate model to immediately be aware of the high-quality region, potentially refining it further or using it as a baseline to explore elsewhere.
False: Each restart is completely independent. This is equivalent to running the optimizer multiple times in parallel with different seeds and taking the best result.
Running the optimizer with `restart_inject_best=True` and verbose output produces a log like the one below. Note the "Restart injection" messages: each new run starts from the best point found so far.

```
TensorBoard logging disabled
Initial best: f(x) = 6.220297
Iter  1 | Best: 6.219898 | Rate: 1.00 | Evals: 15.0%
Iter  2 | Best: 6.073745 | Rate: 1.00 | Evals: 17.5%
Iter  3 | Best: 4.573169 | Rate: 1.00 | Evals: 20.0%
Iter  4 | Best: 3.122909 | Rate: 1.00 | Evals: 22.5%
Iter  5 | Best: 2.198455 | Rate: 1.00 | Evals: 25.0%
Iter  6 | Best: 2.198455 | Curr: 4.140366 | Rate: 0.67 | Evals: 27.5%
Iter  7 | Best: 1.314004 | Rate: 0.67 | Evals: 30.0%
Iter  8 | Best: 1.314004 | Curr: 3.943588 | Rate: 0.33 | Evals: 32.5%
Iter  9 | Best: 1.314004 | Curr: 1.322184 | Rate: 0.33 | Evals: 35.0%
Iter 10 | Best: 1.314004 | Curr: 1.456022 | Rate: 0.00 | Evals: 37.5%
Iter 11 | Best: 0.051453 | Rate: 0.33 | Evals: 40.0%
Iter 12 | Best: 0.051453 | Curr: 0.730353 | Rate: 0.33 | Evals: 42.5%
Iter 13 | Best: 0.049606 | Rate: 0.67 | Evals: 45.0%
Iter 14 | Best: 0.012631 | Rate: 0.67 | Evals: 47.5%
Iter 15 | Best: 0.012631 | Curr: 0.108246 | Rate: 0.67 | Evals: 50.0%
Iter 16 | Best: 0.009947 | Rate: 0.67 | Evals: 52.5%
Iter 17 | Best: 0.002686 | Rate: 0.67 | Evals: 55.0%
Iter 18 | Best: 0.002686 | Curr: 0.436242 | Rate: 0.67 | Evals: 57.5%
Iter 19 | Best: 0.002686 | Curr: 0.458714 | Rate: 0.33 | Evals: 60.0%
Iter 20 | Best: 0.002686 | Curr: 0.120530 | Rate: 0.00 | Evals: 62.5%
Iter 21 | Best: 0.002686 | Curr: 0.028871 | Rate: 0.00 | Evals: 65.0%
Restarting optimization: success_rate 0 for 3 iterations.
Starting point x0 validated and processed successfully.
  Original scale: [-0.00083556  0.00043378]
  Internal scale: [-0.00083556  0.00043378]
Restart injection: Using best found point so far as starting point (f(x)=0.002686).
Including 1 starting points from x0 in initial design.
Skipping re-evaluation of injected best point.
Initial best: f(x) = 0.002686
Iter  1 | Best: 0.002686 | Curr: 0.010840 | Rate: 0.00 | Evals: 80.0%
Iter  2 | Best: 0.002686 | Curr: 2.949009 | Rate: 0.00 | Evals: 82.5%
Restarting optimization: success_rate 0 for 3 iterations.
Starting point x0 validated and processed successfully.
  Original scale: [-0.00083556  0.00043378]
  Internal scale: [-0.00083556  0.00043378]
Restart injection: Using best found point so far as starting point (f(x)=0.002686).
Including 1 starting points from x0 in initial design.
Skipping re-evaluation of injected best point.
Initial best: f(x) = 0.002686
Iter  1 | Best: 0.002686 | Curr: 0.006207 | Rate: 0.00 | Evals: 97.5%
Iter  2 | Best: 0.002686 | Curr: 2.246743 | Rate: 0.00 | Evals: 100.0%
```
```python
print(f"Best without injection: {res_no_inject.fun:.6f}")
print(f"Best with injection: {res_inject.fun:.6f}")
```
```
Best without injection: 0.002686
Best with injection: 0.002686
```
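The knowledge-transfer effect can also be illustrated outside SpotOptim with a toy restart scheme. This is a self-contained sketch: `rastrigin_1d` and the local random search are stand-ins for the real surrogate-based optimizer, and `run_with_restarts` is a hypothetical helper, not a library function.

```python
import numpy as np

def rastrigin_1d(x):
    """Multimodal test function, nonnegative, with global minimum f(0) = 0."""
    return x**2 + 10 - 10 * np.cos(2 * np.pi * x)

def run_with_restarts(inject_best, n_runs=4, steps_per_run=30, seed=1):
    """Toy restart scheme: local random search, restarted n_runs times."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    for _ in range(n_runs):
        # New run: random start, or the injected global best (knowledge transfer).
        x = best_x if (inject_best and best_x is not None) else rng.uniform(-5, 5)
        f = rastrigin_1d(x)
        for _ in range(steps_per_run):
            cand = x + rng.normal(scale=0.3)
            fc = rastrigin_1d(cand)
            if fc < f:           # greedy local search within the run
                x, f = cand, fc
        if f < best_f:
            best_x, best_f = x, f
    return best_f

f_inject = run_with_restarts(inject_best=True)
f_indep = run_with_restarts(inject_best=False)
print(f"with injection: {f_inject:.6f}, independent runs: {f_indep:.6f}")
```

With injection, the global best is non-increasing across runs, so later runs can only refine it; without injection, each run gambles on landing in a good basin by itself.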
8.5 When to Use Restarts?
Complex Landscapes: When the objective function has many local optima.
Stagnation: When the optimizer tends to “flatline” early but max_iter is large.
Exploration vs. Exploitation: Restarts favor exploration (by jumping to a new random initial design) when exploitation (local improvement) has seemingly exhausted the current basin of attraction.
Setting restart_after_n depends on your problem:
Low values (e.g., 10-20): Aggressive restarts. Good if function evaluation is cheap and you want to explore many basins.
High values (e.g., 100+): Conservative. Gives the optimizer plenty of time to refine the solution in the current basin before giving up.
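The global budget also caps how many runs are possible. The helper below is illustrative arithmetic (`max_restarts` is a hypothetical name, not a SpotOptim function), under the rough assumption that each run costs at least `n_initial` evaluations plus `restart_after_n` stagnant iterations before a restart can trigger.

```python
def max_restarts(max_iter, n_initial, restart_after_n):
    """Rough upper bound on the number of runs a global budget can support,
    assuming each run needs at least n_initial + restart_after_n evaluations."""
    cost_per_run = n_initial + restart_after_n
    return max(1, max_iter // cost_per_run)

# Aggressive setting from the example above: at most 50 // (5 + 5) = 5 runs.
print(max_restarts(max_iter=50, n_initial=5, restart_after_n=5))      # -> 5
# Conservative setting: a 100-iteration patience leaves room for only one run.
print(max_restarts(max_iter=200, n_initial=10, restart_after_n=100))  # -> 1
```

If this bound comes out as 1, restarts can never fire and `restart_after_n` is effectively disabled for that budget.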