17  Acquisition Failure Handling in SpotOptim

SpotOptim provides fallback strategies for handling acquisition function failures during optimization. This keeps the search robust even when the surrogate model struggles to suggest informative new points.

17.1 What is Acquisition Failure?

During surrogate-based optimization, the acquisition function suggests new points to evaluate. However, sometimes the suggested point is too close to existing points (within tolerance_x distance), which would provide little new information. When this happens, SpotOptim uses a fallback strategy to propose an alternative point.
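The closeness check can be illustrated with a few lines of NumPy. This is a minimal sketch, not SpotOptim's internal code; in particular, the use of Euclidean distance is an assumption:

import numpy as np

def too_close(x_new, X_existing, tolerance_x):
    # Illustrative check (Euclidean distance is an assumption): True if
    # x_new lies within tolerance_x of any already-evaluated point.
    dists = np.linalg.norm(np.asarray(X_existing) - x_new, axis=1)
    return bool(dists.min() < tolerance_x)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
print(too_close(np.array([0.0, 1e-7]), X, tolerance_x=1e-6))  # True -> fallback
print(too_close(np.array([0.5, 0.5]), X, tolerance_x=1e-6))   # False -> accept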

17.2 Fallback Strategies

When an acquisition failure occurs, the acquisition_failure_strategy parameter controls which fallback is used to propose an alternative point; it defaults to "random".

17.2.1 Random Space-Filling Design (Default)

Strategy name: "random"

This strategy uses Latin Hypercube Sampling (LHS) to generate a new space-filling point. LHS ensures good coverage of the search space by dividing each dimension into equal-probability intervals.
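The stratification property is easy to see with SciPy's LHS sampler (a minimal illustration; SpotOptim's internal sampler may differ):

import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
sample = sampler.random(n=5)  # 5 points in the unit square
# Each dimension is split into 5 equal-probability strata, and every stratum
# is hit exactly once, so each column below contains the indices 0..4 once.
print(np.sort(np.floor(sample * 5), axis=0))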

When to use:

  • General-purpose optimization
  • When you want simplicity and good space-filling properties
  • Default choice for most problems

Example:

from spotoptim import SpotOptim
import numpy as np

def sphere(X):
    return np.sum(X**2, axis=1)

optimizer = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=50,
    n_initial=10,
    acquisition_failure_strategy="random",  # Default
    verbose=True
)

result = optimizer.optimize()
TensorBoard logging disabled
Initial best: f(x) = 0.916782
Iter 1 | Best: 0.146862 | Rate: 1.00 | Evals: 22.0%
Iter 2 | Best: 0.016814 | Rate: 1.00 | Evals: 24.0%
Iter 3 | Best: 0.000716 | Rate: 1.00 | Evals: 26.0%
Iter 4 | Best: 0.000008 | Rate: 1.00 | Evals: 28.0%
Iter 5 | Best: 0.000000 | Rate: 1.00 | Evals: 30.0%
Iter 7 | Best: 0.000000 | Rate: 0.86 | Evals: 34.0%
Iter 12 | Best: 0.000000 | Rate: 0.58 | Evals: 44.0%
Iter 16 | Best: 0.000000 | Rate: 0.50 | Evals: 52.0%
Iter 24 | Best: 0.000000 | Rate: 0.38 | Evals: 68.0%
Iter 29 | Best: 0.000000 | Rate: 0.34 | Evals: 78.0%
Iter 31 | Best: 0.000000 | Rate: 0.35 | Evals: 82.0%

17.3 How It Works

The acquisition failure handling is integrated into the optimization process (a sketch follows the steps below):

  1. Acquisition optimization: SpotOptim uses differential evolution to optimize the acquisition function
  2. Distance check: The proposed point is checked against existing points using tolerance_x
  3. Fallback activation: If the point is too close, _handle_acquisition_failure() is called
  4. Strategy execution: The configured fallback strategy generates a new point
  5. Evaluation: The fallback point is evaluated and added to the dataset
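A minimal sketch of this flow, with hypothetical helpers standing in for SpotOptim's internals (propose_by_acquisition and lhs_fallback are assumptions, not part of the actual API):

import numpy as np

def suggest_next(X, propose_by_acquisition, lhs_fallback, tolerance_x):
    # Step 1: differential evolution proposes a point (hypothetical callable).
    x_new = propose_by_acquisition(X)
    # Step 2: distance check against all evaluated points.
    dists = np.linalg.norm(np.asarray(X) - x_new, axis=1)
    if dists.min() < tolerance_x:
        # Steps 3-4: the fallback strategy generates a replacement point.
        x_new = lhs_fallback()
    # Step 5: the caller evaluates x_new and appends it to the dataset.
    return x_new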

17.4 Advanced Usage: Setting Tolerance

The tolerance_x parameter controls when the fallback strategy is triggered. A larger tolerance means points need to be farther apart, triggering the fallback more often:

def simple_objective(X):
    """Simple quadratic function for demonstration"""
    return np.sum(X**2, axis=1)

bounds_demo = [(-5, 5), (-5, 5)]

# Strict tolerance (smaller value) - fewer fallbacks
optimizer_strict = SpotOptim(
    fun=simple_objective,
    bounds=bounds_demo,
    tolerance_x=1e-6,  # Very small - almost never triggers fallback
    max_iter=20,
    seed=42
)

# Relaxed tolerance (larger value) - more fallbacks
optimizer_relaxed = SpotOptim(
    fun=simple_objective,
    bounds=bounds_demo,
    tolerance_x=0.5,  # Larger - triggers fallback more often
    max_iter=20,
    seed=42
)

print(f"Strict tolerance setup complete")
print(f"Relaxed tolerance setup complete")
Strict tolerance setup complete
Relaxed tolerance setup complete

17.5 Best Practices

17.5.1 Monitor Fallback Activations

Enable verbose mode to see when fallbacks are triggered:

def test_objective(X):
    return np.sum(X**2, axis=1)

optimizer = SpotOptim(
    fun=test_objective,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=20,
    verbose=True,  # Shows fallback messages
    seed=42
)
print("Optimizer with verbose mode created")
TensorBoard logging disabled
Optimizer with verbose mode created

17.5.2 Adjust Tolerance Based on Problem Scale

For problems with small search spaces, use smaller tolerance:

def scale_objective(X):
    return np.sum(X**2, axis=1)

# Small search space
optimizer_small = SpotOptim(
    fun=scale_objective,
    bounds=[(-1, 1), (-1, 1)],
    tolerance_x=0.01,  # Small tolerance for small space
    max_iter=20,
    seed=42
)

# Large search space
optimizer_large = SpotOptim(
    fun=scale_objective,
    bounds=[(-100, 100), (-100, 100)],
    tolerance_x=1.0,  # Larger tolerance for large space
    max_iter=20,
    seed=42
)

print(f"Small space optimizer created (bounds: [-1, 1])")
print(f"Large space optimizer created (bounds: [-100, 100])")
Small space optimizer created (bounds: [-1, 1])
Large space optimizer created (bounds: [-100, 100])
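Rather than picking tolerance_x by hand, one workable heuristic (an illustration, not part of SpotOptim) is to tie it to a fixed fraction of the narrowest dimension's range:

import numpy as np

def tolerance_for(bounds, fraction=0.005):
    # Heuristic (assumption): tolerance_x as a fraction of the smallest range.
    ranges = np.array([hi - lo for lo, hi in bounds])
    return float(fraction * ranges.min())

print(tolerance_for([(-1, 1), (-1, 1)]))          # 0.01, as in the small space
print(tolerance_for([(-100, 100), (-100, 100)]))  # 1.0, as in the large space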

17.6 Technical Details

17.6.1 Random Strategy Implementation

The fallback strategy:

  1. Generates a single point using Latin Hypercube Sampling
  2. Ensures the point is within bounds
  3. Applies variable type repairs (rounding for int/factor variables)

This is computationally efficient while maintaining good space-filling properties.
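Putting these three steps together, a minimal sketch of such a fallback might look as follows. It assumes SciPy's qmc module and hand-rolled type repair; SpotOptim's actual implementation may differ:

import numpy as np
from scipy.stats import qmc

def lhs_fallback(bounds, int_dims=(), seed=None):
    # Step 1: draw one LHS point in the unit hypercube.
    point = qmc.LatinHypercube(d=len(bounds), seed=seed).random(n=1)[0]
    # Step 2: scale it into the search bounds.
    lower = np.array([lo for lo, _ in bounds], dtype=float)
    upper = np.array([hi for _, hi in bounds], dtype=float)
    point = lower + point * (upper - lower)
    # Step 3: repair integer/factor dimensions by rounding (assumption).
    for i in int_dims:
        point[i] = round(point[i])
    return point

print(lhs_fallback([(-5, 5), (0, 10)], int_dims=(1,), seed=0))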

17.7 Summary

  • Strategy: When an acquisition proposal fails, SpotOptim falls back to a random space-filling point generated by Latin Hypercube Sampling.
  • Trigger: The fallback activates when the point proposed by the acquisition function lies within tolerance_x of an existing point.
  • Monitoring: Enable verbose=True to see when fallbacks occur.

17.8 Jupyter Notebook
