TensorBoard logging disabled
Initial best: f(x) = 4.333328
Iteration 1: f(x) = 11.153361
Iteration 2: New best f(x) = 3.565790
Iteration 3: New best f(x) = 0.645419
Iteration 4: New best f(x) = 0.020688
Iteration 5: f(x) = 0.037801
Iteration 6: New best f(x) = 0.000706
Iteration 7: New best f(x) = 0.000030
Iteration 8: New best f(x) = 0.000000
Iteration 9: New best f(x) = 0.000000
Iteration 10: New best f(x) = 0.000000
Iteration 11: New best f(x) = 0.000000
Iteration 12: f(x) = 0.000000
Iteration 13: f(x) = 0.000000
Iteration 14: New best f(x) = 0.000000
Iteration 15: f(x) = 0.000000
Best point found: [1.14507043e-04 9.21127653e-05]
Best value: 0.000000
Total evaluations: 20
Sequential iterations: 15
Success: True
Message: Optimization terminated: maximum evaluations (20) reached
1.3 2. Initial Design Methods
These methods handle the creation and validation of the initial design of experiments.
1.3.1 2.1 get_initial_design()
Purpose: Generate or process initial design points
Used by: optimize() method at the start
Calls: _generate_initial_design() if X0 is None
This method handles three scenarios:
1. Generate an LHS design when X0=None
2. Include the starting point x0 if provided
3. Transform a user-provided X0
# Example 1: Generate default LHS design
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10,
    seed=42
)
X0 = opt.get_initial_design()
print(f"Generated LHS design shape: {X0.shape}")
print(f"First 3 points:\n{X0[:3]}")

# Example 2: With starting point x0
opt_x0 = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10,
    x0=[0.0, 0.0],
    seed=42
)
X0_with_x0 = opt_x0.get_initial_design()
print(f"\nDesign with x0, first point: {X0_with_x0[0]}")

# Example 3: Provide custom initial design
X0_custom = np.array([[0, 0], [1, 1], [2, 2]])
X0_processed = opt.get_initial_design(X0_custom)
print(f"\nCustom design shape: {X0_processed.shape}")
Generated LHS design shape: (10, 2)
First 3 points:
[[-2.77395605 1.56112156]
[ 4.14140208 -1.69736803]
[-4.09417735 3.02437765]]
Design with x0, first point: [0. 0.]
Custom design shape: (3, 2)
1.3.2 2.2 _curate_initial_design()
Purpose: Remove duplicates and ensure sufficient unique points
Used by: optimize() after get_initial_design()
Handles: Duplicate removal, point generation, repetition for noisy functions
Original points: 5
After removing duplicates: 10
Original points: 5
After repeating (3x): 15
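The code cell for this step is not shown above; the following is a minimal NumPy sketch of the duplicate-removal idea only, not SpotOptim's implementation. As the output above indicates, SpotOptim additionally generates replacement points when duplicates are removed and, for noisy functions, repeats each point.

# Duplicate removal with plain NumPy (illustration only)
import numpy as np

X0 = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [2.0, 2.0], [1.0, 1.0]])
X0_unique = np.unique(X0, axis=0)   # keeps one copy of each row
print(f"Original points: {len(X0)}")         # 5
print(f"Unique points:   {len(X0_unique)}")  # 3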
1.3.3 2.3 _handle_NA_initial_design()
Purpose: Remove NaN/inf values from initial design evaluations
Used by: optimize() after evaluating the initial design
Returns: Cleaned arrays and the original count
1.3.4 2.4 _check_size_initial_design()
Purpose: Validate that there are sufficient points for surrogate fitting
Used by: optimize() after handling NaN values
Raises: ValueError if there are insufficient points
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10,
    seed=42
)

# Example 1: Sufficient points - no error
y0_sufficient = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
try:
    opt._check_size_initial_design(y0_sufficient, n_evaluated=10)
    print("✓ Sufficient points - validation passed")
except ValueError as e:
    print(f"✗ Error: {e}")

# Example 2: Insufficient points - raises error
y0_insufficient = np.array([1.0])  # Only 1 point, need at least 3 for 2D
try:
    opt._check_size_initial_design(y0_insufficient, n_evaluated=10)
    print("✓ Validation passed")
except ValueError as e:
    print(f"✗ Expected error: {e}")
✓ Sufficient points - validation passed
✗ Expected error: Insufficient valid initial design points: only 1 finite value(s) out of 10 evaluated. Need at least 3 points to fit surrogate model. Please check your objective function or increase n_initial.
1.3.5 2.5 _get_best_xy_initial_design()
Purpose: Determine and store the best point from the initial design
Used by: optimize() after initial design evaluation
Updates: self.best_x_ and self.best_y_ attributes
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    verbose=True,
    seed=42
)

# Simulate initial design (normally done in optimize())
opt.X_ = np.array([[1, 2], [0, 0], [2, 1]])
opt.y_ = np.array([5.0, 0.0, 5.0])

opt._get_best_xy_initial_design()
print(f"\nBest x from initial design: {opt.best_x_}")
print(f"Best y from initial design: {opt.best_y_}")
TensorBoard logging disabled
Initial best: f(x) = 0.000000
Best x from initial design: [0 0]
Best y from initial design: 0.0
1.4 3. Surrogate Model Methods
These methods handle surrogate model operations during the optimization loop.
1.4.1 3.1 _fit_surrogate()
Purpose: Fit the surrogate model to data
Used by: optimize() in the main loop
Calls: _selection_dispatcher() if max_surrogate_points is exceeded
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_surrogate_points=10,
    seed=42
)

# Generate some training data
X = np.random.rand(50, 2) * 10 - 5  # 50 points in [-5, 5]
y = np.sum(X**2, axis=1)

# Fit surrogate (will select 10 best points)
opt._fit_surrogate(X, y)
print("Surrogate fitted successfully!")
print(f"Surrogate model: {type(opt.surrogate).__name__}")
1.4.2 3.2 _predict_with_uncertainty()
Purpose: Predict with uncertainty estimates, handling surrogates without return_std
Used by: _acquisition_function() and plot_surrogate()
Returns: Predictions and standard deviations
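The example cell for this method is not shown here. The snippet below is a generic sketch of the usual return_std fallback pattern in plain scikit-learn, which presumably mirrors what _predict_with_uncertainty() does internally; the helper name is illustrative, not SpotOptim's code.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor

def predict_with_uncertainty(model, X):
    """Return (mean, std); fall back to zero std if the model cannot provide one."""
    try:
        return model.predict(X, return_std=True)   # e.g. GaussianProcessRegressor
    except TypeError:
        mu = model.predict(X)                      # e.g. RandomForestRegressor
        return mu, np.zeros_like(mu)               # no uncertainty available

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([0.0, 2.0, 8.0])
for model in (GaussianProcessRegressor(), RandomForestRegressor(n_estimators=10)):
    model.fit(X, y)
    mu, std = predict_with_uncertainty(model, np.array([[1.5, 1.5]]))
    print(type(model).__name__, mu, std)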
1.4.3 3.3 _acquisition_function()
Purpose: Compute the acquisition function value
Used by: _suggest_next_point() for optimization
Calls: _predict_with_uncertainty()
Supports: Expected Improvement (EI), Probability of Improvement (PI), and mean prediction
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    surrogate=GaussianProcessRegressor(),
    acquisition='ei',
    seed=42
)

# Setup
X_train = np.array([[0, 0], [1, 1], [2, 2]])
y_train = np.array([0, 2, 8])
opt._fit_surrogate(X_train, y_train)
opt.y_ = y_train  # Needed for acquisition function

# Evaluate acquisition function
x_eval = np.array([1.5, 1.5])
acq_value = opt._acquisition_function(x_eval)
print(f"Point to evaluate: {x_eval}")
print(f"Acquisition function value (EI): {acq_value:.6f}")
print("(Lower is better for minimization)")
Point to evaluate: [1.5 1.5]
Acquisition function value (EI): -0.000000
(Lower is better for minimization)
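For context, the standard Expected Improvement formula for minimization can be evaluated by hand from a surrogate mean and standard deviation. The sketch below uses illustrative numbers and is not SpotOptim's internal code; SpotOptim's acquisition value is minimized, which is consistent with the "(Lower is better)" note in the output above.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best, xi=0.0):
    """Standard EI for minimization; larger EI means a more promising point."""
    if sigma <= 0:
        return 0.0
    z = (y_best - mu - xi) / sigma
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Surrogate predicts mu = 4.5 +/- 0.8 at the candidate; best observed value is 0.0
ei = expected_improvement(mu=4.5, sigma=0.8, y_best=0.0)
print(f"EI = {ei:.6f}")  # ~0: the candidate is very unlikely to improve on the best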
1.4.4 3.4 _suggest_next_point()
Purpose: Suggest the next point to evaluate using acquisition function optimization
Used by: optimize() in the main loop
Calls: _acquisition_function(), _handle_acquisition_failure() if needed
Handles: Integer/factor rounding, duplicate avoidance
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    surrogate=GaussianProcessRegressor(),
    acquisition='ei',
    seed=42
)

# Setup
X_train = np.array([[0, 0], [1, 1], [2, 2]])
y_train = np.array([0, 2, 8])
opt._fit_surrogate(X_train, y_train)
opt.X_ = X_train
opt.y_ = y_train

# Suggest next point
x_next = opt._suggest_next_point()
print(f"Next point to evaluate: {x_next}")
print("Expected to be between known points or in unexplored regions")
Next point to evaluate: [-1.6288683 1.99062025]
Expected to be between known points or in unexplored regions
Purpose: Apply OCBA for noisy functions to determine which points to re-evaluate
Used by: optimize() in the main loop when noise=True and ocba_delta > 0
Returns: Points to re-evaluate, or None
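For background, the sketch below shows the classical OCBA (Optimal Computing Budget Allocation) rule for splitting an extra replication budget among already-evaluated points. The function name and the exact splitting here are illustrative assumptions, not SpotOptim's implementation.

import numpy as np

def ocba_allocation(means, stds, delta):
    """Split an extra budget of `delta` replications among designs (minimization)."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    b = int(np.argmin(means))                  # current best design
    gaps = means - means[b]
    mask = np.arange(len(means)) != b
    ratios = np.zeros_like(means)
    ratios[mask] = (stds[mask] / gaps[mask]) ** 2                     # N_i ~ (s_i/gap_i)^2
    ratios[b] = stds[b] * np.sqrt(np.sum((ratios[mask] / stds[mask]) ** 2))
    return np.floor(delta * ratios / ratios.sum()).astype(int)

print(ocba_allocation(means=[0.2, 1.0, 1.5], stds=[0.3, 0.4, 0.5], delta=10))  # e.g. [3 4 2]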
Purpose: Replace NaN and infinite values with a penalty plus random noise
Used by: _handle_NA_new_points() and indirectly by optimize()
Algorithm: penalty = max(finite_y) + 3 * std(finite_y) + noise
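A minimal NumPy illustration of the penalty formula stated above; this is a standalone sketch, not SpotOptim's code, and the noise scale is an assumption.

import numpy as np

rng = np.random.default_rng(42)
y = np.array([1.0, 2.0, np.nan, 3.0, np.inf])

finite = np.isfinite(y)
penalty = y[finite].max() + 3 * y[finite].std()   # max + 3*std of the finite values
y_clean = y.copy()
y_clean[~finite] = penalty + rng.normal(scale=1e-6, size=(~finite).sum())
print(y_clean)   # NaN/inf replaced by ~5.449 (= 3.0 + 3*0.816) plus tiny noise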
Original X shape: (3, 2)
Clean X shape: (1, 2)
Clean X:
[[1 2]]
Clean y: [1.]
1.6.3 5.3 _handle_NA_new_points()
Purpose: Handle NaN/inf values in new evaluation points during the main loop
Used by: optimize() after evaluating new points
Calls: _apply_penalty_NA() and _remove_nan()
Returns: None, None if all evaluations are invalid (the iteration is skipped)
TensorBoard logging disabled
Case 1: Some valid evaluations
Warning: Found 1 NaN/inf value(s), replacing with adaptive penalty (max + 3*std = 6.0000)
Valid points remaining: 3
Case 2: All invalid evaluations
Warning: Found 2 NaN/inf value(s), replacing with adaptive penalty (max + 3*std = 6.0000)
Result: Valid points
1.7 6. Main Loop Update Methods
1.7.1 6.1 _update_best_main_loop()
Purpose: Update the best solution found during the main optimization loop
Used by: optimize() after each iteration
Updates: self.best_x_ and self.best_y_ if an improvement is found
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    verbose=True,
    seed=42
)

# Simulate optimization state
opt.n_iter_ = 1
opt.best_x_ = np.array([1.0, 1.0])
opt.best_y_ = 2.0

# Case 1: New best found
print("Case 1: New best found")
x_new = np.array([[0.1, 0.1], [0.5, 0.5]])
y_new = np.array([0.02, 0.5])
opt._update_best_main_loop(x_new, y_new)
print(f"Updated best_y: {opt.best_y_}\n")

# Case 2: No improvement
print("Case 2: No improvement")
opt.n_iter_ = 2
x_no_improve = np.array([[1.5, 1.5]])
y_no_improve = np.array([4.5])
opt._update_best_main_loop(x_no_improve, y_no_improve)
print(f"Best_y unchanged: {opt.best_y_}")
TensorBoard logging disabled
Case 1: New best found
Iteration 1: New best f(x) = 0.020000
Updated best_y: 0.02
Case 2: No improvement
Iteration 2: f(x) = 4.500000
Best_y unchanged: 0.02
1.8 7. Termination Method
1.8.1 7.1 _determine_termination()
Purpose: Determine the termination reason for the optimization
Used by: optimize() at the end
Checks: Max iterations, time limit, or successful completion
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=20,
    max_time=10.0,
    seed=42
)

# Case 1: Maximum evaluations reached
print("Case 1: Maximum evaluations reached")
opt.y_ = np.zeros(20)
start_time = time.time()
msg = opt._determine_termination(start_time)
print(f"Message: {msg}\n")

# Case 2: Time limit exceeded (simulated)
print("Case 2: Time limit exceeded")
opt.y_ = np.zeros(10)
start_time = time.time() - 700  # 11.67 minutes ago
msg = opt._determine_termination(start_time)
print(f"Message: {msg}\n")

# Case 3: Successful completion
print("Case 3: Successful completion")
opt.y_ = np.zeros(10)
start_time = time.time()
msg = opt._determine_termination(start_time)
print(f"Message: {msg}")
Case 1: Maximum evaluations reached
Message: Optimization terminated: maximum evaluations (20) reached
Case 2: Time limit exceeded
Message: Optimization terminated: time limit (10.00 min) reached
Case 3: Successful completion
Message: Optimization finished successfully
1.9 8. Utility Methods
1.9.1 8.1 _select_new()
Purpose: Select rows from A that are not in X (avoid duplicate evaluations)
Used by: _suggest_next_point() to ensure new points differ from already evaluated points
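The example cell for this method is not shown; below is a plain NumPy sketch of the row-filtering idea (not SpotOptim's code): keep only candidate rows that do not already appear in the evaluated set.

import numpy as np

X = np.array([[0.0, 0.0], [1.0, 1.0]])                          # already evaluated
A = np.array([[1.0, 1.0], [2.0, 3.0], [0.0, 0.0], [4.0, 5.0]])  # candidates

is_new = np.array([not any(np.allclose(a, x) for x in X) for a in A])
print(A[is_new])   # [[2. 3.] [4. 5.]]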
Original X:
[[1.2 2.5]
[3.7 4.1]
[5.9 6.8]]
Repaired X (first column rounded to int):
[[1. 2.5]
[4. 4.1]
[6. 6.8]]
1.9.3 8.3 _map_to_factor_values()
Purpose: Map internal integer values to the original factor strings
Used by: optimize() when preparing results for the user
Handles: Factor (categorical) variables
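A minimal illustration of the mapping idea; the level names and column layout below are hypothetical, not SpotOptim's internal representation.

# Internally, factor variables are encoded as integers; results are mapped
# back to the original strings before being returned to the user.
factor_levels = {2: ["relu", "tanh", "sigmoid"]}   # column index -> factor levels

x_internal = [0.1, 3.0, 2]                         # third variable is a factor code
x_user = list(x_internal)
for col, levels in factor_levels.items():
    x_user[col] = levels[int(round(x_internal[col]))]
print(x_user)                                      # [0.1, 3.0, 'sigmoid']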
Starting point x0 validated and processed successfully.
Original scale: [0.5 0.5]
Internal scale: [0.5 0.5]
TensorBoard logging disabled
Including starting point x0 in initial design as first evaluation.
Initial best: f(x) = 0.500000
Iteration 1: f(x) = 0.691041
Iteration 2: New best f(x) = 0.149351
Iteration 3: New best f(x) = 0.015439
Iteration 4: New best f(x) = 0.000530
Iteration 5: New best f(x) = 0.000070
Iteration 6: New best f(x) = 0.000004
Iteration 7: New best f(x) = 0.000003
Iteration 8: New best f(x) = 0.000003
Iteration 9: New best f(x) = 0.000003
Iteration 10: f(x) = 0.000003
Best point: [0.00169077 0.0001854 ]
Best value: 0.000003
TensorBoard logging disabled
Initial best: f(x) = 4.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 1: f(x) = 29.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 2: New best f(x) = 1.000000
Iteration 3: New best f(x) = 0.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 4: f(x) = 25.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 5: f(x) = 9.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 6: f(x) = 5.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 7: f(x) = 13.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 8: f(x) = 13.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 9: f(x) = 10.000000
Attempt 2/10: Previous point was duplicate after rounding, trying fallback
Acquisition failure: Using random space-filling design as fallback.
Iteration 10: f(x) = 8.000000
Best point: [ 0. -0.]
Best value: 0.000000
Point has integers: True
# Function that sometimes returns NaN
def sometimes_nan(X):
    X = np.atleast_2d(X)
    y = np.sum(X**2, axis=1)
    # Return NaN for large values (penalty will be applied)
    y[y > 100] = np.nan
    return y

opt = SpotOptim(
    fun=sometimes_nan,
    bounds=[(-10, 10), (-10, 10)],
    max_iter=20,
    n_initial=10,
    seed=42,
    verbose=True
)

result = opt.optimize()
print(f"\nBest point: {result.x}")
print(f"Best value: {result.fun:.6f}")
print(f"Optimization succeeded despite NaN values: {result.success}")
TensorBoard logging disabled
Warning: 1 initial design point(s) returned NaN/inf and will be ignored (reduced from 10 to 9 points)
Note: Initial design size (9) is smaller than requested (10) due to NaN/inf values
Initial best: f(x) = 9.683226
Iteration 1: New best f(x) = 8.496828
Iteration 2: New best f(x) = 4.452328
Iteration 3: New best f(x) = 0.174337
Iteration 4: New best f(x) = 0.038871
Iteration 5: New best f(x) = 0.000492
Iteration 6: f(x) = 0.000649
Iteration 7: New best f(x) = 0.000114
Iteration 8: New best f(x) = 0.000003
Iteration 9: New best f(x) = 0.000002
Iteration 10: f(x) = 0.000002
Iteration 11: f(x) = 0.000002
Best point: [-0.00082295 -0.00113022]
Best value: 0.000002
Optimization succeeded despite NaN values: True
1.11 10. Method Relationship Diagram
Here’s how all the methods relate to each other in the optimization workflow:
This notebook demonstrated all major methods in SpotOptim with executable examples:
Core Flow:
1. Initial Design: Generate/process → Curate → Evaluate → Handle NaN → Validate → Get best
2. Main Loop: Fit surrogate → Apply OCBA → Suggest point → Evaluate → Handle NaN → Update best
3. Termination: Determine reason → Prepare results
Key Features:
- Automatic handling of NaN/inf values with penalties
- Support for noisy functions with OCBA re-evaluation
- Integer and factor variable support
- Flexible initial design (LHS, custom, or with starting point)
- Multiple acquisition functions (EI, PI, mean)
- Termination by max iterations or time limit
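To tie the core flow above together, here is a compact end-to-end run in the same style as the earlier cells. The sphere helper is assumed to be the usual sum-of-squares objective, and SpotOptim is assumed to be imported as in the notebook's setup cells, which are not shown in this section.

import numpy as np

def sphere(X):
    # Sum-of-squares objective, vectorized over rows
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10,
    max_iter=20,
    seed=42,
)
result = opt.optimize()
print(f"Best point: {result.x}")
print(f"Best value: {result.fun:.6f}")
print(f"Success: {result.success}")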
All examples can be run independently and demonstrate the modular design of SpotOptim!