SpotOptim includes a built-in restart mechanism designed to help the optimizer escape local optima or recover from stagnation. This feature is particularly useful for difficult landscapes where the optimizer might get stuck in a suboptimal region.
8.1 Key Concepts
The restart mechanism monitors the optimization progress and triggers a complete reset (restart) of the optimization run if no improvement is observed for a specified number of iterations.
8.1.1 Parameters
Two key parameters control this behavior:
restart_after_n (int, default=100): The number of consecutive iterations with a success rate of 0.0 (no improvement) required to trigger a restart.
restart_inject_best (bool, default=True): If True, the best solution found in all previous runs is injected into the initial design of the new restart run. This ensures that the global search does not lose the best-known solution while exploring new regions.
8.2 How it Works
Monitoring: During optimization, SpotOptim tracks the success_rate (the percentage of valid and improved points in the current window).
Triggering: If the success rate drops to 0.0 and stays there for restart_after_n consecutive iterations, the current run is terminated.
Restarting: A new optimization run is initialized.
A new random seed is generated (if running sequentially) to ensure a different random start.
A new initial design (LHS) is created.
Injection: If restart_inject_best=True, the overall best point found so far is added to this new initial design.
Aggregation: When the global max_iter or max_time is reached, results from all runs are aggregated. The final returned result corresponds to the best run found.
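The steps above can be sketched as a plain random-search loop. This is a toy illustration of the monitor/trigger/inject/aggregate logic, not SpotOptim's actual implementation; all names here (noisy_objective, optimize_with_restarts) are hypothetical.

```python
import random

def noisy_objective(x):
    """Toy 1-D objective with its minimum at x = 0."""
    return x * x

def optimize_with_restarts(budget=60, restart_after_n=5, inject_best=True, seed=0):
    """Toy random-search loop mimicking the restart logic described above."""
    rng = random.Random(seed)
    global_best_x, global_best_f = None, float("inf")
    evals = 0
    while evals < budget:                      # global budget governs all runs
        # New "initial design": a fresh random start ...
        x = rng.uniform(-5, 5)
        if inject_best and global_best_x is not None:
            x = global_best_x                  # ... plus the injected best-known point
        best_f = noisy_objective(x)
        evals += 1
        no_improve = 0
        while no_improve < restart_after_n and evals < budget:
            cand = x + rng.gauss(0, 0.5)       # local proposal
            f = noisy_objective(cand)
            evals += 1
            if f < best_f:                     # improvement: reset the failure counter
                x, best_f, no_improve = cand, f, 0
            else:                              # no improvement: count toward restart
                no_improve += 1
        if best_f < global_best_f:             # aggregation: keep the best run overall
            global_best_x, global_best_f = x, best_f
    return global_best_x, global_best_f
```

Because the run-level best only ever decreases and the global best aggregates across runs, the returned value is monotone in the budget.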
8.3 Example: Triggering Restarts
In this example, we set restart_after_n to a very small value (5) to intentionally force restarts and demonstrate the mechanism. We use a multimodal function where getting stuck is possible.
```python
import numpy as np
from spotoptim import SpotOptim

def multimodal_function(X):
    """A simple 2D multimodal function (Ackley-like structure)."""
    X = np.atleast_2d(X)
    return -20 * np.exp(-0.2 * np.sqrt(0.5 * np.sum(X**2, axis=1))) - \
        np.exp(0.5 * np.sum(np.cos(2 * np.pi * X), axis=1)) + 20 + np.exp(1)

# Configure optimizer with aggressive restart strategy
optimizer = SpotOptim(
    fun=multimodal_function,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=50,          # Total global budget
    n_initial=5,
    restart_after_n=5,    # Restart after only 5 iterations of no improvement
    restart_inject_best=True,
    seed=42,
    verbose=True,         # Verbose output shows restart messages
)
result = optimizer.optimize()
```
TensorBoard logging disabled
Initial best: f(x) = 6.344720
Iter 1 | Best: 5.666775 | Rate: 1.00 | Evals: 12.0%
Iter 2 | Best: 4.435415 | Rate: 1.00 | Evals: 14.0%
Iter 3 | Best: 3.255984 | Rate: 1.00 | Evals: 16.0%
Iter 4 | Best: 1.314889 | Rate: 1.00 | Evals: 18.0%
Iter 6 | Best: 0.624721 | Rate: 0.80 | Evals: 22.0%
Iter 8 | Best: 0.065870 | Rate: 0.60 | Evals: 26.0%
Iter 12 | Best: 0.017807 | Rate: 0.40 | Evals: 34.0%
Iter 13 | Best: 0.007839 | Rate: 0.40 | Evals: 36.0%
Iter 16 | Best: 0.002594 | Rate: 0.60 | Evals: 42.0%
Iter 23 | Best: 0.000775 | Rate: 0.20 | Evals: 56.0%
Restarting optimization: success_rate 0 for 5 iterations.
Starting point x0 validated and processed successfully.
Original scale: [0.00012203 0.00024454]
Internal scale: [0.00012203 0.00024454]
Restart injection: Using best found point so far as starting point (f(x)=0.000775).
Including 1 starting points from x0 in initial design.
Skipping re-evaluation of injected best point.
Initial best: f(x) = 0.000775
Restarting optimization: success_rate 0 for 5 iterations.
Starting point x0 validated and processed successfully.
Original scale: [0.00012203 0.00024454]
Internal scale: [0.00012203 0.00024454]
Restart injection: Using best found point so far as starting point (f(x)=0.000775).
Including 1 starting points from x0 in initial design.
Skipping re-evaluation of injected best point.
Initial best: f(x) = 0.000775
8.3.1 Analyzing Restart Results
You can access the results of each individual restart run via the restarts_results_ attribute.
```python
print(f"Total global evaluations: {result.nfev}")
print(f"Number of restarts performed: {len(optimizer.restarts_results_) - 1}")
print(f"Best value found globally: {result.fun:.6f}")
print("\nBreakdown by run:")
for i, res in enumerate(optimizer.restarts_results_):
    print(f"  Run {i+1}: {res.nfev} evals, Best: {res.fun:.6f}, Status: {res.message}")
```
Total global evaluations: 36
Number of restarts performed: 2
Best value found globally: 0.000775
Breakdown by run:
Run 1: 36 evals, Best: 0.000775, Status: Restart triggered due to lack of improvement.
Run 2: 9 evals, Best: 0.000775, Status: Restart triggered due to lack of improvement.
Run 3: 5 evals, Best: 0.000775, Status: Optimization finished successfully
Current function value: 0.000775
Iterations: 0
Function evaluations: 5
8.4 Example: Effect of restart_inject_best
The restart_inject_best parameter is crucial for efficiency. It ensures that “knowledge” is transferred between restarts.
True: The new run starts with the best point found so far included in its initial set. This allows the surrogate model to immediately be aware of the high-quality region, potentially refining it further or using it as a baseline to explore elsewhere.
False: Each restart is completely independent. This is equivalent to running the optimizer multiple times in parallel with different seeds and taking the best result.
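The effect of injection can be illustrated with a toy local search (hypothetical f and run helpers, not SpotOptim's API): with injection, the second run continues from the first run's best point, so its result can never be worse than that point.

```python
import random

def f(x):
    """Toy objective with its minimum at 0."""
    return x * x

def run(start, steps, rng):
    """One toy local-search run: returns the best point it reaches."""
    best = start
    for _ in range(steps):
        cand = best + rng.gauss(0, 0.3)
        if f(cand) < f(best):
            best = cand
    return best

rng = random.Random(1)
first = run(rng.uniform(-5, 5), 10, rng)

# restart_inject_best=False: the second run starts from scratch
independent = run(rng.uniform(-5, 5), 10, rng)

# restart_inject_best=True: the second run continues from the best-known point
injected = run(first, 10, rng)

# Injection can never lose ground: improvement from the injected
# point is monotone, so f(injected) <= f(first) always holds.
assert f(injected) <= f(first)
```

The independent run, by contrast, may land in a worse basin and has no such guarantee.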
TensorBoard logging disabled
Initial best: f(x) = 6.344720
Iter 1 | Best: 5.666775 | Rate: 1.00 | Evals: 15.0%
Iter 2 | Best: 4.435415 | Rate: 1.00 | Evals: 17.5%
Iter 3 | Best: 3.255984 | Rate: 1.00 | Evals: 20.0%
Iter 4 | Best: 1.314889 | Rate: 1.00 | Evals: 22.5%
Iter 6 | Best: 0.624721 | Rate: 0.67 | Evals: 27.5%
Iter 8 | Best: 0.065870 | Rate: 0.67 | Evals: 32.5%
Iter 12 | Best: 0.017807 | Rate: 0.33 | Evals: 42.5%
Iter 13 | Best: 0.007839 | Rate: 0.67 | Evals: 45.0%
Iter 16 | Best: 0.002594 | Rate: 0.33 | Evals: 52.5%
Restarting optimization: success_rate 0 for 3 iterations.
Starting point x0 validated and processed successfully.
Original scale: [0.0004105 0.00081136]
Internal scale: [0.0004105 0.00081136]
Restart injection: Using best found point so far as starting point (f(x)=0.002594).
Including 1 starting points from x0 in initial design.
Skipping re-evaluation of injected best point.
Initial best: f(x) = 0.002594
Restarting optimization: success_rate 0 for 3 iterations.
Starting point x0 validated and processed successfully.
Original scale: [0.0004105 0.00081136]
Internal scale: [0.0004105 0.00081136]
Restart injection: Using best found point so far as starting point (f(x)=0.002594).
Including 1 starting points from x0 in initial design.
Skipping re-evaluation of injected best point.
Initial best: f(x) = 0.002594
Optimizer candidate 1/3 was duplicate/invalid.
Restarting optimization: success_rate 0 for 3 iterations.
Starting point x0 validated and processed successfully.
Original scale: [0.0004105 0.00081136]
Internal scale: [0.0004105 0.00081136]
Restart injection: Using best found point so far as starting point (f(x)=0.002594).
Global budget exhausted. Stopping restarts.
```python
print(f"Best without injection: {res_no_inject.fun:.6f}")
print(f"Best with injection: {res_inject.fun:.6f}")
```
Best without injection: 0.002594
Best with injection: 0.002594
8.5 When to Use Restarts?
Complex Landscapes: When the objective function has many local optima.
Stagnation: When the optimizer tends to “flatline” early but max_iter is large.
Exploration vs. Exploitation: Restarts favor exploration (by jumping to a new random initial design) when exploitation (local improvement) has seemingly exhausted the current basin of attraction.
Setting restart_after_n depends on your problem:
- Low values (e.g., 10-20): Aggressive restarts. Good if function evaluation is cheap and you want to explore many basins.
- High values (e.g., 100+): Conservative. Gives the optimizer plenty of time to refine the solution in the current basin before giving up.