SpotOptim provides fallback strategies for handling acquisition function failures during optimization. They keep the search robust even when the surrogate model struggles to suggest useful new points.
13.1 What is Acquisition Failure?
During surrogate-based optimization, the acquisition function suggests new points to evaluate. However, sometimes the suggested point is too close to existing points (within tolerance_x distance), which would provide little new information. When this happens, SpotOptim uses a fallback strategy to propose an alternative point.
13.2 Fallback Strategies
SpotOptim supports two fallback strategies, controlled by the acquisition_failure_strategy parameter:
13.2.1 1. Random Space-Filling Design (Default)
Strategy name: "random"
This strategy uses Latin Hypercube Sampling (LHS) to generate a new space-filling point. LHS ensures good coverage of the search space by dividing each dimension into equal-probability intervals.
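As a rough illustration (not SpotOptim's internal code), a single LHS point can be drawn and scaled to the search bounds with SciPy's qmc module:

```python
# Illustration only: drawing one LHS point with SciPy and scaling it to the
# search bounds. SpotOptim's internal sampler may differ in detail.
import numpy as np
from scipy.stats import qmc

bounds = [(-2.0, 2.0), (-2.0, 2.0)]
lower, upper = np.array(bounds).T          # per-dimension lower/upper bounds

sampler = qmc.LatinHypercube(d=len(bounds), seed=0)
point01 = sampler.random(n=1)              # one point in the unit hypercube
point = qmc.scale(point01, lower, upper)   # rescale to the search bounds
print(point)
```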
When to use:
- General-purpose optimization
- When you want simplicity and good space-filling properties
13.2.2 2. Morris-Mitchell Design
Strategy name: "mm"
This strategy finds a point that maximizes the minimum distance to all existing points. It evaluates 100 candidate points and selects the one with the largest minimum distance to the already-evaluated points, providing excellent space-filling properties.
When to use:
- When you want to ensure maximum exploration
- For problems where avoiding clustering of points is critical
- When the search space has been heavily sampled in some regions
Example:
```python
from spotoptim import SpotOptim
import numpy as np

def rosenbrock(X):
    x = X[:, 0]
    y = X[:, 1]
    return (1 - x)**2 + 100 * (y - x**2)**2

optimizer = SpotOptim(
    fun=rosenbrock,
    bounds=[(-2, 2), (-2, 2)],
    max_iter=100,
    n_initial=20,
    acquisition_failure_strategy="mm",  # Morris-Mitchell
    verbose=True,
)
result = optimizer.optimize()
```
TensorBoard logging disabled
Initial best: f(x) = 0.440548
Iteration 1: f(x) = 1.235225
Iteration 2: f(x) = 4.371481
Iteration 3: f(x) = 1.667697
Iteration 4: f(x) = 1.065327
Iteration 5: f(x) = 2.901391
Iteration 6: f(x) = 0.511068
Iteration 7: New best f(x) = 0.429923
Iteration 8: New best f(x) = 0.052073
Iteration 9: f(x) = 0.098584
Iteration 10: f(x) = 0.075653
Iteration 11: f(x) = 0.065722
Iteration 12: f(x) = 0.064649
Iteration 13: f(x) = 0.062015
Iteration 14: f(x) = 0.058424
Iteration 15: f(x) = 0.053189
Iteration 16: f(x) = 3.875663
Iteration 17: New best f(x) = 0.046849
Iteration 18: New best f(x) = 0.033155
Iteration 19: New best f(x) = 0.025345
Iteration 20: New best f(x) = 0.020750
Iteration 21: New best f(x) = 0.017991
Iteration 22: New best f(x) = 0.016077
Iteration 23: New best f(x) = 0.014782
Iteration 24: New best f(x) = 0.013644
Iteration 25: New best f(x) = 0.013327
Iteration 26: f(x) = 0.013471
Iteration 27: f(x) = 0.052578
Iteration 28: New best f(x) = 0.011582
Iteration 29: New best f(x) = 0.008352
Iteration 30: New best f(x) = 0.007337
Iteration 31: New best f(x) = 0.006013
Iteration 32: New best f(x) = 0.005583
Iteration 33: New best f(x) = 0.005229
Iteration 34: New best f(x) = 0.004778
Iteration 35: New best f(x) = 0.004531
Iteration 36: New best f(x) = 0.004393
Iteration 37: New best f(x) = 0.004204
Iteration 38: New best f(x) = 0.004137
Iteration 39: New best f(x) = 0.004003
Iteration 40: New best f(x) = 0.003920
Iteration 41: f(x) = 2.494902
Iteration 42: New best f(x) = 0.003881
Iteration 43: New best f(x) = 0.003868
Iteration 44: New best f(x) = 0.003787
Iteration 45: f(x) = 0.003821
Iteration 46: New best f(x) = 0.003776
Iteration 47: New best f(x) = 0.003758
Iteration 48: f(x) = 0.003762
Iteration 49: f(x) = 0.003758
Iteration 50: New best f(x) = 0.003702
Iteration 51: f(x) = 0.003737
Iteration 52: f(x) = 0.003703
Iteration 53: New best f(x) = 0.003616
Iteration 54: f(x) = 0.003664
Iteration 55: f(x) = 0.003626
Iteration 56: New best f(x) = 0.003612
Iteration 57: New best f(x) = 0.003546
Iteration 58: f(x) = 1.922605
Iteration 59: New best f(x) = 0.003438
Iteration 60: New best f(x) = 0.003399
Iteration 61: New best f(x) = 0.003320
Iteration 62: New best f(x) = 0.003222
Iteration 63: f(x) = 0.003320
Iteration 64: f(x) = 0.003248
Iteration 65: New best f(x) = 0.003188
Iteration 66: New best f(x) = 0.003176
Iteration 67: New best f(x) = 0.003164
Iteration 68: New best f(x) = 0.003137
Iteration 69: New best f(x) = 0.003022
Iteration 70: New best f(x) = 0.002911
Iteration 71: New best f(x) = 0.002690
Iteration 72: New best f(x) = 0.002585
Iteration 73: New best f(x) = 0.002500
Iteration 74: f(x) = 0.002612
Iteration 75: New best f(x) = 0.002421
Iteration 76: New best f(x) = 0.002386
Iteration 77: New best f(x) = 0.002306
Iteration 78: New best f(x) = 0.002040
Iteration 79: New best f(x) = 0.001531
Iteration 80: New best f(x) = 0.000928
13.3 How It Works
The acquisition failure handling is integrated into the optimization process:
1. Acquisition optimization: SpotOptim uses differential evolution to optimize the acquisition function
2. Distance check: the proposed point is checked against existing points using tolerance_x (sketched below)
3. Fallback activation: if the point is too close, _handle_acquisition_failure() is called
4. Strategy execution: the configured fallback strategy generates a new point
5. Evaluation: the fallback point is evaluated and added to the dataset
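A minimal sketch of the distance check in step 2 (illustrative only; SpotOptim's actual implementation may differ):

```python
# Sketch of the step-2 distance check, not SpotOptim's actual code:
# reject the candidate if it lies within tolerance_x of any evaluated point.
import numpy as np

def too_close(x_new, X_evaluated, tolerance_x):
    """True if x_new is within tolerance_x of an already-evaluated point."""
    distances = np.linalg.norm(X_evaluated - x_new, axis=1)
    return bool(np.any(distances < tolerance_x))

X = np.array([[0.0, 0.0], [1.0, 1.0]])
print(too_close(np.array([0.05, 0.0]), X, tolerance_x=0.1))  # True  -> fallback
print(too_close(np.array([0.5, 0.5]), X, tolerance_x=0.1))   # False -> accept
```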
13.4 Comparison of Strategies
| Aspect | Random (LHS) | Morris-Mitchell |
|---|---|---|
| Computation | Very fast | Moderate (100 candidates) |
| Space-filling | Good | Excellent |
| Exploration | Balanced | Maximum distance |
| Clustering avoidance | Good | Best |
| Recommended for | General use | Heavily sampled spaces |
13.5 Complete Example: Comparing Strategies
```python
import numpy as np
from spotoptim import SpotOptim

def ackley(X):
    """Ackley function - multimodal test function"""
    a = 20
    b = 0.2
    c = 2 * np.pi
    n = X.shape[1]
    sum_sq = np.sum(X**2, axis=1)
    sum_cos = np.sum(np.cos(c * X), axis=1)
    return -a * np.exp(-b * np.sqrt(sum_sq / n)) - np.exp(sum_cos / n) + a + np.e

# Test with random strategy
print("=" * 60)
print("Testing with Random Space-Filling Strategy")
print("=" * 60)
opt_random = SpotOptim(
    fun=ackley,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=50,
    n_initial=15,
    acquisition_failure_strategy="random",
    tolerance_x=0.1,  # Relatively large tolerance to trigger failures
    seed=42,
    verbose=True,
)
result_random = opt_random.optimize()
print("\nRandom Strategy Results:")
print(f"  Best value: {result_random.fun:.6f}")
print(f"  Best point: {result_random.x}")
print(f"  Total evaluations: {result_random.nfev}")

# Test with Morris-Mitchell strategy
print("\n" + "=" * 60)
print("Testing with Morris-Mitchell Strategy")
print("=" * 60)
opt_mm = SpotOptim(
    fun=ackley,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=50,
    n_initial=15,
    acquisition_failure_strategy="mm",
    tolerance_x=0.1,  # Same tolerance
    seed=42,
    verbose=True,
)
result_mm = opt_mm.optimize()
print("\nMorris-Mitchell Strategy Results:")
print(f"  Best value: {result_mm.fun:.6f}")
print(f"  Best point: {result_mm.x}")
print(f"  Total evaluations: {result_mm.nfev}")

# Compare
print("\n" + "=" * 60)
print("Comparison")
print("=" * 60)
print(f"Random strategy:          {result_random.fun:.6f}")
print(f"Morris-Mitchell strategy: {result_mm.fun:.6f}")
if result_random.fun < result_mm.fun:
    print("→ Random strategy found better solution")
else:
    print("→ Morris-Mitchell strategy found better solution")
```
============================================================
Testing with Random Space-Filling Strategy
============================================================
TensorBoard logging disabled
Initial best: f(x) = 7.177375
Iteration 1: New best f(x) = 5.975822
Iteration 2: New best f(x) = 4.585872
Iteration 3: New best f(x) = 3.624193
Iteration 4: f(x) = 3.775836
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 5: f(x) = 7.428910
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 6: f(x) = 11.307695
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 7: f(x) = 11.588702
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 8: f(x) = 7.741328
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 9: f(x) = 8.547707
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 10: f(x) = 10.258140
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 11: f(x) = 10.575833
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 12: f(x) = 3.973820
Iteration 13: New best f(x) = 3.254219
Iteration 14: f(x) = 3.726104
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 15: f(x) = 9.023210
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 16: f(x) = 4.449845
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 17: f(x) = 11.616761
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 18: f(x) = 8.521779
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 19: f(x) = 9.840296
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 20: f(x) = 12.575946
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 21: f(x) = 7.719827
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 22: f(x) = 6.588107
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 23: f(x) = 9.186450
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 24: f(x) = 7.957337
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 25: f(x) = 13.092857
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 26: f(x) = 10.747619
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 27: f(x) = 9.894215
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 28: f(x) = 9.482454
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 29: f(x) = 6.720219
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 30: f(x) = 6.321851
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 31: f(x) = 6.841321
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 32: f(x) = 11.918429
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 33: f(x) = 13.292524
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 34: f(x) = 8.518795
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 35: f(x) = 12.534822
Random Strategy Results:
Best value: 3.254219
Best point: [0.03533184 0.66379523]
Total evaluations: 50
============================================================
Testing with Morris-Mitchell Strategy
============================================================
TensorBoard logging disabled
Initial best: f(x) = 7.177375
Iteration 1: New best f(x) = 5.975822
Iteration 2: New best f(x) = 4.585872
Iteration 3: New best f(x) = 3.624193
Iteration 4: f(x) = 3.775836
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 5: f(x) = 12.759495
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 6: f(x) = 13.023769
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 7: f(x) = 10.937206
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 8: f(x) = 7.780549
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 9: f(x) = 12.396463
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 10: f(x) = 12.719725
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 11: f(x) = 13.078957
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 12: f(x) = 12.813177
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 13: f(x) = 11.674342
Iteration 14: f(x) = 3.967415
Iteration 15: f(x) = 3.756184
Iteration 16: New best f(x) = 3.159408
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 17: f(x) = 11.221948
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 18: f(x) = 8.775229
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 19: f(x) = 11.789243
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 20: f(x) = 4.971384
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 21: f(x) = 9.983363
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 22: f(x) = 10.021573
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 23: f(x) = 6.575245
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 24: f(x) = 11.527738
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 25: f(x) = 12.133804
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 26: f(x) = 10.457280
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 27: f(x) = 11.003827
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 28: f(x) = 13.704437
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 29: f(x) = 10.955961
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 30: f(x) = 12.121347
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 31: f(x) = 7.325542
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 32: f(x) = 4.492325
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 33: f(x) = 6.456039
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 34: f(x) = 10.870934
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using Morris-Mitchell minimizing point as fallback.
Iteration 35: f(x) = 10.677536
Morris-Mitchell Strategy Results:
Best value: 3.159408
Best point: [-0.00800104 0.71750603]
Total evaluations: 50
============================================================
Comparison
============================================================
Random strategy: 3.254219
Morris-Mitchell strategy: 3.159408
→ Morris-Mitchell strategy found better solution
13.6 Advanced Usage: Setting Tolerance
The tolerance_x parameter controls when the fallback strategy is triggered. With a larger tolerance, a proposed point must be farther from existing points to be accepted, so the fallback fires more often:
```python
import numpy as np
from spotoptim import SpotOptim

def simple_objective(X):
    """Simple quadratic function for demonstration"""
    return np.sum(X**2, axis=1)

bounds_demo = [(-5, 5), (-5, 5)]

# Strict tolerance (smaller value) - fewer fallbacks
optimizer_strict = SpotOptim(
    fun=simple_objective,
    bounds=bounds_demo,
    tolerance_x=1e-6,  # Very small - almost never triggers fallback
    acquisition_failure_strategy="mm",
    max_iter=20,
    seed=42,
)

# Relaxed tolerance (larger value) - more fallbacks
optimizer_relaxed = SpotOptim(
    fun=simple_objective,
    bounds=bounds_demo,
    tolerance_x=0.5,  # Larger - triggers fallback more often
    acquisition_failure_strategy="mm",
    max_iter=20,
    seed=42,
)

print("Strict tolerance setup complete")
print("Relaxed tolerance setup complete")
```
13.7.4 4. Adjust Tolerance Based on Problem Scale
For problems with small search spaces, use a smaller tolerance_x; for large search spaces, scale it up accordingly:
```python
import numpy as np
from spotoptim import SpotOptim

def scale_objective(X):
    return np.sum(X**2, axis=1)

# Small search space
optimizer_small = SpotOptim(
    fun=scale_objective,
    bounds=[(-1, 1), (-1, 1)],
    tolerance_x=0.01,  # Small tolerance for small space
    acquisition_failure_strategy="random",
    max_iter=20,
    seed=42,
)

# Large search space
optimizer_large = SpotOptim(
    fun=scale_objective,
    bounds=[(-100, 100), (-100, 100)],
    tolerance_x=1.0,  # Larger tolerance for large space
    acquisition_failure_strategy="mm",
    max_iter=20,
    seed=42,
)

print("Small space optimizer created (bounds: [-1, 1])")
print("Large space optimizer created (bounds: [-100, 100])")
```
Small space optimizer created (bounds: [-1, 1])
Large space optimizer created (bounds: [-100, 100])
13.8 Technical Details
13.8.1 Morris-Mitchell Implementation
The Morris-Mitchell strategy:
1. Generates 100 candidate points using Latin Hypercube Sampling
2. Calculates, for each candidate, the minimum distance to all existing points
3. Selects the candidate with the maximum minimum distance

This places the new point as far as possible from all evaluated points, steering it away from densely sampled regions.
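The selection rule can be sketched as a maximin search over LHS candidates; the function below is illustrative, with hypothetical names, not SpotOptim's internals:

```python
# Sketch of the maximin selection described above; SpotOptim's internal
# implementation may differ in detail.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import qmc

def maximin_fallback(X_evaluated, lower, upper, n_candidates=100, seed=0):
    """Return the LHS candidate farthest from all evaluated points."""
    d = len(lower)
    candidates01 = qmc.LatinHypercube(d=d, seed=seed).random(n_candidates)
    candidates = qmc.scale(candidates01, lower, upper)
    # Minimum distance from each candidate to the evaluated set ...
    min_dist = cdist(candidates, X_evaluated).min(axis=1)
    # ... then keep the candidate that maximizes this minimum distance.
    return candidates[np.argmax(min_dist)]

X = np.random.default_rng(1).uniform(-5, 5, size=(20, 2))
print(maximin_fallback(X, lower=np.array([-5.0, -5.0]),
                       upper=np.array([5.0, 5.0])))
```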
13.8.2 Random Strategy Implementation
The random strategy:
1. Generates a single point using Latin Hypercube Sampling
2. Ensures the point is within bounds
3. Applies variable-type repairs (rounding for int/factor variables)

This is computationally efficient while maintaining good space-filling properties.
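A corresponding sketch of the random fallback, under the same assumptions; the int_dims argument is hypothetical, standing in for SpotOptim's variable-type handling:

```python
# Sketch of the random fallback: one LHS point scaled into the bounds,
# with integer-typed dimensions rounded ("repaired"). Illustrative only.
import numpy as np
from scipy.stats import qmc

def random_fallback(lower, upper, int_dims=(), seed=0):
    """Draw a single LHS point and round any integer-typed dimensions."""
    point01 = qmc.LatinHypercube(d=len(lower), seed=seed).random(1)
    point = qmc.scale(point01, lower, upper)[0]
    for i in int_dims:                     # variable-type repair
        point[i] = np.round(point[i])
    return point

print(random_fallback(np.array([-5.0, 0.0]), np.array([5.0, 10.0]), int_dims=(1,)))
```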
13.9 Summary
- Default strategy ("random"): fast, good space-filling, suitable for most problems
- Trigger: activated when the acquisition-proposed point is too close to existing points (within tolerance_x)
- Control: set via the acquisition_failure_strategy parameter
- Monitoring: enable verbose=True to see when fallbacks occur
Choose the strategy that best matches your optimization goals:
- Use "random" for general-purpose optimization
- Use "mm" when you want maximum exploration and have a generous function evaluation budget