15  Acquisition Failure Handling in SpotOptim

SpotOptim provides a fallback strategy for handling acquisition function failures during optimization. This keeps the optimization robust even when the surrogate-guided search struggles to suggest useful new points.

15.1 What is Acquisition Failure?

During surrogate-based optimization, the acquisition function suggests new points to evaluate. Sometimes, however, the suggested point lies within a distance of tolerance_x of an already-evaluated point and would therefore provide little new information. When this happens, SpotOptim uses a fallback strategy to propose an alternative point.
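
Conceptually, the rejection test measures the distance from the proposed point to every already-evaluated point. Here is a minimal sketch of such a check; the names are illustrative, not SpotOptim's actual code:

import numpy as np

def too_close(proposed, X_evaluated, tolerance_x):
    """True if `proposed` lies within tolerance_x of any evaluated point."""
    distances = np.linalg.norm(X_evaluated - proposed, axis=1)
    return bool(distances.min() < tolerance_x)

X_evaluated = np.array([[0.0, 0.0], [1.0, 1.0]])
print(too_close(np.array([0.0, 1e-7]), X_evaluated, tolerance_x=1e-6))  # True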

15.2 Fallback Strategies

When a proposed point is rejected, SpotOptim falls back to an alternative sampling strategy. The acquisition_failure_strategy parameter controls this behavior and defaults to "random".

15.2.1 Random Space-Filling Design (Default)

Strategy name: "random"

This strategy uses Latin Hypercube Sampling (LHS) to generate a new space-filling point. LHS ensures good coverage of the search space by dividing each dimension into equal-probability intervals.
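
For illustration, a single LHS point can be drawn with SciPy's quasi-Monte Carlo utilities. This sketches the idea only and is not SpotOptim's internal code:

from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
unit_sample = sampler.random(n=1)                  # one point in [0, 1)^2
point = qmc.scale(unit_sample, [-5, -5], [5, 5])   # map onto the bounds
print(point)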

When to use:

  • General-purpose optimization
  • When you want simplicity and good space-filling properties
  • Default choice for most problems

Example:

from spotoptim import SpotOptim
import numpy as np

def sphere(X):
    return np.sum(X**2, axis=1)

optimizer = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=50,
    n_initial=10,
    acquisition_failure_strategy="random",  # Default
    verbose=True
)

result = optimizer.optimize()
TensorBoard logging disabled
Initial best: f(x) = 2.041862
Iter 1 | Best: 0.063869 | Rate: 1.00 | Evals: 22.0%
Iter 2 | Best: 0.000303 | Rate: 1.00 | Evals: 24.0%
Iter 3 | Best: 0.000078 | Rate: 1.00 | Evals: 26.0%
Iter 4 | Best: 0.000006 | Rate: 1.00 | Evals: 28.0%
Iter 5 | Best: 0.000000 | Rate: 1.00 | Evals: 30.0%
Iter 6 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.83 | Evals: 32.0%
Iter 7 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.71 | Evals: 34.0%
Iter 8 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.62 | Evals: 36.0%
Iter 9 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.56 | Evals: 38.0%
Iter 10 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.50 | Evals: 40.0%
Iter 11 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.45 | Evals: 42.0%
Iter 12 | Best: 0.000000 | Rate: 0.50 | Evals: 44.0%
Iter 13 | Best: 0.000000 | Rate: 0.54 | Evals: 46.0%
Iter 14 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.50 | Evals: 48.0%
Iter 15 | Best: 0.000000 | Rate: 0.53 | Evals: 50.0%
Iter 16 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.50 | Evals: 52.0%
Iter 17 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.47 | Evals: 54.0%
Iter 18 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.44 | Evals: 56.0%
Iter 19 | Best: 0.000000 | Rate: 0.47 | Evals: 58.0%
Iter 20 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.45 | Evals: 60.0%
Iter 21 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.43 | Evals: 62.0%
Iter 22 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.41 | Evals: 64.0%
Iter 23 | Best: 0.000000 | Rate: 0.43 | Evals: 66.0%
Iter 24 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.42 | Evals: 68.0%
Iter 25 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.40 | Evals: 70.0%
Iter 26 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.38 | Evals: 72.0%
Iter 27 | Best: 0.000000 | Curr: 0.000001 | Rate: 0.37 | Evals: 74.0%
Iter 28 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.36 | Evals: 76.0%
Iter 29 | Best: 0.000000 | Curr: 0.000004 | Rate: 0.34 | Evals: 78.0%
Iter 30 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.33 | Evals: 80.0%
Iter 31 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.32 | Evals: 82.0%
Iter 32 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.31 | Evals: 84.0%
Iter 33 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.30 | Evals: 86.0%
Iter 34 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.29 | Evals: 88.0%
Iter 35 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.29 | Evals: 90.0%
Iter 36 | Best: 0.000000 | Rate: 0.31 | Evals: 92.0%
Iter 37 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.30 | Evals: 94.0%
Iter 38 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.29 | Evals: 96.0%
Iter 39 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.28 | Evals: 98.0%
Iter 40 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.28 | Evals: 100.0%
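
The returned result can then be inspected. The attribute names below are an assumption, following the SciPy-style OptimizeResult convention (x for the best point, fun for the best value); consult the SpotOptim API reference for the exact fields:

print("best x:", result.x)       # assumed attribute: best point found
print("best f(x):", result.fun)  # assumed attribute: best objective value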

15.3 How It Works

The acquisition failure handling is integrated into the optimization process (a simplified sketch follows the list):

  1. Acquisition optimization: SpotOptim uses differential evolution to optimize the acquisition function
  2. Distance check: The proposed point is checked against existing points using tolerance_x
  3. Fallback activation: If the point is too close, _handle_acquisition_failure() is called
  4. Strategy execution: The configured fallback strategy generates a new point
  5. Evaluation: The fallback point is evaluated and added to the dataset
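
A simplified sketch of steps 2-4 is shown below; the function and variable names are illustrative, not SpotOptim's internals:

import numpy as np
from scipy.stats import qmc

def propose_next(acq_optimum, X_evaluated, bounds, tolerance_x, seed=None):
    # Step 2: distance check against all evaluated points
    distances = np.linalg.norm(X_evaluated - acq_optimum, axis=1)
    if distances.min() >= tolerance_x:
        return acq_optimum  # acquisition point is informative enough
    # Steps 3-4: fallback draws one space-filling (LHS) point instead
    lower = [lo for lo, hi in bounds]
    upper = [hi for lo, hi in bounds]
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    return qmc.scale(sampler.random(n=1), lower, upper)[0]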

15.4 Advanced Usage: Setting Tolerance

The tolerance_x parameter controls when the fallback strategy is triggered. A larger tolerance means points need to be farther apart, triggering the fallback more often:

def simple_objective(X):
    """Simple quadratic function for demonstration"""
    return np.sum(X**2, axis=1)

bounds_demo = [(-5, 5), (-5, 5)]

# Strict tolerance (smaller value) - fewer fallbacks
optimizer_strict = SpotOptim(
    fun=simple_objective,
    bounds=bounds_demo,
    tolerance_x=1e-6,  # Very small - almost never triggers fallback
    max_iter=20,
    seed=42
)

# Relaxed tolerance (larger value) - more fallbacks
optimizer_relaxed = SpotOptim(
    fun=simple_objective,
    bounds=bounds_demo,
    tolerance_x=0.5,  # Larger - triggers fallback more often
    max_iter=20,
    seed=42
)

print(f"Strict tolerance setup complete")
print(f"Relaxed tolerance setup complete")
Strict tolerance setup complete
Relaxed tolerance setup complete

15.5 Best Practices

15.5.1 Monitor Fallback Activations

Enable verbose mode to see when fallbacks are triggered:

def test_objective(X):
    return np.sum(X**2, axis=1)

optimizer = SpotOptim(
    fun=test_objective,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=20,
    verbose=True,  # Shows fallback messages
    seed=42
)
print("Optimizer with verbose mode created")
TensorBoard logging disabled
Optimizer with verbose mode created

15.5.2 Adjust Tolerance Based on Problem Scale

Scale tolerance_x with the extent of the search space: small search spaces call for a smaller tolerance, large ones for a larger tolerance. The examples below use roughly 0.5% of each bound's range:

def scale_objective(X):
    return np.sum(X**2, axis=1)

# Small search space
optimizer_small = SpotOptim(
    fun=scale_objective,
    bounds=[(-1, 1), (-1, 1)],
    tolerance_x=0.01,  # Small tolerance for small space
    max_iter=20,
    seed=42
)

# Large search space
optimizer_large = SpotOptim(
    fun=scale_objective,
    bounds=[(-100, 100), (-100, 100)],
    tolerance_x=1.0,  # Larger tolerance for large space
    max_iter=20,
    seed=42
)

print(f"Small space optimizer created (bounds: [-1, 1])")
print(f"Large space optimizer created (bounds: [-100, 100])")
Small space optimizer created (bounds: [-1, 1])
Large space optimizer created (bounds: [-100, 100])

15.6 Technical Details

15.6.1 Random Strategy Implementation

The fallback strategy:

  1. Generates a single point using Latin Hypercube Sampling
  2. Ensures the point is within bounds
  3. Applies variable type repairs (rounding for int/factor variables)

This is computationally efficient while maintaining good space-filling properties.
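
A hedged sketch of these three steps follows; the var_types argument and the repair logic are illustrative assumptions, and SpotOptim's actual implementation may differ:

import numpy as np
from scipy.stats import qmc

def random_fallback(bounds, var_types=None, seed=None):
    # Step 1: one Latin Hypercube point, scaled to the search space
    lower = np.array([lo for lo, hi in bounds], dtype=float)
    upper = np.array([hi for lo, hi in bounds], dtype=float)
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    point = qmc.scale(sampler.random(n=1), lower, upper)[0]
    # Step 2: make sure the point respects the bounds
    point = np.clip(point, lower, upper)
    # Step 3: round integer/factor dimensions (hypothetical var_types)
    if var_types is not None:
        for i, t in enumerate(var_types):
            if t in ("int", "factor"):
                point[i] = round(point[i])
    return point

print(random_fallback([(-5, 5), (0, 10)], var_types=["float", "int"], seed=0))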

15.7 Summary

  • Strategy: SpotOptim uses a random space-filling strategy (LHS) when acquisition fails.
  • Trigger: Activated when the acquisition-proposed point is too close to an existing point (within tolerance_x).
  • Monitoring: Enable verbose=True to see when fallbacks occur.

15.8 Jupyter Notebook
