SpotOptim.SpotOptim

SpotOptim.SpotOptim(
    fun,
    bounds=None,
    max_iter=20,
    n_initial=10,
    surrogate=None,
    acquisition='y',
    var_type=None,
    var_name=None,
    var_trans=None,
    tolerance_x=None,
    max_time=np.inf,
    repeats_initial=1,
    repeats_surrogate=1,
    ocba_delta=0,
    tensorboard_log=False,
    tensorboard_path=None,
    tensorboard_clean=False,
    fun_mo2so=None,
    seed=None,
    verbose=False,
    warnings_filter='ignore',
    n_infill_points=1,
    max_surrogate_points=None,
    selection_method='distant',
    acquisition_failure_strategy='random',
    penalty=False,
    penalty_val=None,
    acquisition_fun_return_size=3,
    acquisition_optimizer='differential_evolution',
    restart_after_n=100,
    restart_inject_best=True,
    max_restarts=None,
    x0=None,
    de_x0_prob=0.1,
    tricands_fringe=False,
    prob_de_tricands=0.8,
    window_size=None,
    min_tol_metric='chebyshev',
    prob_surrogate=None,
    n_jobs=1,
    eval_batch_size=1,
    acquisition_optimizer_kwargs=None,
    args=(),
    kwargs=None,
)

SPOT optimizer compatible with the scipy.optimize interface.

Parameters

Name Type Description Default
fun callable Objective function to minimize. Should accept array of shape (n_samples, n_features). required
bounds list of tuple Bounds for each dimension as [(low, high), …]. None
max_iter int Maximum number of total function evaluations (including initial design). For example, max_iter=30 with n_initial=10 will perform 10 initial evaluations plus 20 sequential optimization iterations. Defaults to 20. 20
n_initial int Number of initial design points. Defaults to 10. 10
surrogate object Surrogate model with scikit-learn interface (fit/predict methods). If None, uses a Gaussian Process Regressor with a Matern kernel; the default configuration is kernel = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(length_scale=1.0, length_scale_bounds=(1e-4, 1e2), nu=2.5) and surrogate = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=100) (see the code sketch after this table). Alternative surrogates can be provided, including SpotOptim’s Kriging model, Random Forests, or any scikit-learn compatible regressor. See the Examples section. Defaults to None (uses the default Gaussian Process configuration). None
acquisition str Acquisition function (‘ei’, ‘y’, ‘pi’). Defaults to ‘y’. 'y'
var_type list of str Variable types for each dimension. Supported types: * ‘float’: Python floats, continuous optimization (no rounding) * ‘int’: Python int, float values will be rounded to integers * ‘factor’: Unordered categorical data, internally mapped to int values (e.g., “red”->0, “green”->1, etc.) Defaults to None (which sets all dimensions to ‘float’). None
var_name list of str Variable names for each dimension. If None, uses default names [‘x0’, ‘x1’, ‘x2’, …]. Defaults to None. None
tolerance_x float Minimum distance between points. Defaults to np.sqrt(np.spacing(1)). None
var_trans list of str Variable transformations for each dimension. Supported transformations: ‘id’, ‘log10’, ‘log’, ‘ln’, ‘sqrt’, ‘exp’, ‘square’, ‘cube’, ‘inv’, ‘reciprocal’, or None. Also supports dynamic strings like log(x), sqrt(x), pow(x, p). Defaults to None (no transformations). None
max_time float Maximum runtime in minutes. If np.inf (default), no time limit. The optimization terminates when either max_iter evaluations are reached OR max_time minutes have elapsed, whichever comes first. Defaults to np.inf. np.inf
repeats_initial int Number of times to evaluate each initial design point. Useful for noisy objective functions. If > 1, noise handling is activated and statistics (mean, variance) are tracked. Defaults to 1. 1
repeats_surrogate int Number of times to evaluate each surrogate-suggested point. Useful for noisy objective functions. If > 1, noise handling is activated and statistics (mean, variance) are tracked. Defaults to 1. 1
ocba_delta int Number of additional evaluations to allocate using Optimal Computing Budget Allocation (OCBA) when noise handling is active. OCBA determines which existing design points should be re-evaluated to best distinguish between alternatives. Only used when repeats_surrogate > 1 and ocba_delta > 0. Requires at least 3 design points with variance information. Defaults to 0 (no OCBA). 0
tensorboard_log bool Enable TensorBoard logging. If True, optimization metrics and hyperparameters are logged to TensorBoard. View logs by running: tensorboard --logdir=<tensorboard_path> in a separate terminal. Defaults to False. False
tensorboard_path str Path for TensorBoard log files. If None and tensorboard_log is True, creates a default path: runs/spotoptim_YYYYMMDD_HHMMSS. Defaults to None. None
tensorboard_clean bool If True, removes all old TensorBoard log directories from the ‘runs’ folder before starting optimization. Use with caution as this permanently deletes all subdirectories in ‘runs’. Defaults to False. False
fun_mo2so callable Function to convert multi-objective values to single-objective. Takes an array of shape (n_samples, n_objectives) and returns array of shape (n_samples,). If None and objective function returns multi-objective values, uses first objective. Defaults to None. None
seed int Random seed for reproducibility. Defaults to None. None
verbose bool Print progress information. Defaults to False. False
warnings_filter Literal['default', 'error', 'ignore'] Filter for warnings. One of “error”, “ignore”, “always”, “all”, “default”, “module”, or “once”. Defaults to “ignore”. 'ignore'
n_infill_points int Number of infill points to suggest at each iteration. Defaults to 1. If > 1, multiple distinct points are proposed using the optimizer and fallback strategies. 1
max_surrogate_points int Maximum number of points to use for surrogate model fitting. If None, all points are used. If the number of evaluated points exceeds this limit, a subset is selected using the selection method. Defaults to None. None
selection_method str Method for selecting points when max_surrogate_points is exceeded. Options: ‘distant’ (Select points that are distant from each other via K-means clustering) or ‘best’ (Select all points from the cluster with the best mean objective value). Defaults to ‘distant’. 'distant'
acquisition_failure_strategy str Strategy for handling acquisition function failures. Options: ‘random’ (space-filling design via Latin Hypercube Sampling). Defaults to ‘random’. 'random'
penalty bool Whether to use penalty for handling NaN/inf values in objective function evaluations. Defaults to False. False
penalty_val float Penalty value to replace NaN/inf values in objective function evaluations. When the objective function returns NaN or inf, these values are replaced with the penalty value plus small random noise (sampled from N(0, 0.1)) to avoid identical penalty values. This allows optimization to continue despite occasional function evaluation failures. Defaults to None. None
acquisition_fun_return_size int Number of top candidates to return from acquisition function optimization. Defaults to 3. 3
acquisition_optimizer str or callable Optimizer to use for maximizing acquisition function. Can be “differential_evolution” (default) or any method name supported by scipy.optimize.minimize (e.g., “Nelder-Mead”, “L-BFGS-B”). Can also be a callable with signature compatible with scipy.optimize.minimize (fun, x0, bounds, …). A specific version is “de_tricands”, which combines DE with Tricands. It can be parameterized with “prob_de_tricands” (probability of using DE). Defaults to “differential_evolution”. 'differential_evolution'
acquisition_optimizer_kwargs dict Kwargs passed to the acquisition function optimizer and GPR surrogate optimizer. Defaults to {‘maxiter’: 10000, ‘gtol’: 1e-9}. None
restart_after_n int Number of consecutive iterations with zero success rate before triggering a restart. Defaults to 100. 100
restart_inject_best bool Whether to inject the best solution found so far as a starting point for the next restart. Defaults to True. True
max_restarts Optional[int] Patience-based early-stopping threshold. When set to a non-negative integer N, the optimizer terminates after N consecutive restarts that fail to improve the best objective value. The returned scipy.optimize.OptimizeResult has success=True and a message of the form "Optimization early stopped: no improvement for N consecutive restarts". This rule complements restart_after_n and mirrors the no_progress_loss pattern in Hyperopt and plateau-based stopping in Ray Tune and SMAC. None (default) disables the rule so the optimizer runs until max_iter or max_time is reached. None
x0 array-like Starting point for optimization, shape (n_features,). If provided, this point will be evaluated first and included in the initial design. The point should be within the bounds and will be validated before use. Defaults to None (no starting point, uses only LHS design). None
de_x0_prob float Probability of using the best point as starting point for differential evolution. Defaults to 0.1. 0.1
tricands_fringe bool Whether to use the fringe of the design space for the initial design. Defaults to False. False
prob_de_tricands float Probability of using differential evolution as an optimizer on the surrogate model. 1 - prob_de_tricands is the probability of using tricands. Defaults to 0.8. 0.8
n_jobs int Number of parallel workers. 1 (default) runs sequentially. Values > 1 activate steady-state parallel optimization: objective evaluations and acquisition searches are dispatched across n_jobs processes. Pass -1 to use all available CPU cores (os.cpu_count()). 0 and values < -1 raise ValueError. Defaults to 1. 1
eval_batch_size int Number of candidate points gathered from search tasks before a single fun(X_batch) call is dispatched to the process pool. 1 (default) preserves one-point-per-call behavior. Set to n_jobs or higher to exploit vectorized objective functions and reduce process-spawn overhead. Ignored when n_jobs == 1. Must be >= 1. Defaults to 1. 1
window_size int Window size for the rolling success rate calculation. Defaults to None. None
min_tol_metric str Distance metric used when checking tolerance_x for duplicate detection. Default is “chebyshev”. Supports all metrics from scipy.spatial.distance.cdist, including: * “chebyshev”: L-infinity distance (hypercube). Default. Matches previous behavior. * “euclidean”: L2 distance (hypersphere). * “minkowski”: Lp distance (default p=2). * “cityblock”: Manhattan/L1 distance. * “cosine”: Cosine distance. * “correlation”: Correlation distance. * “canberra”, “braycurtis”, “sqeuclidean”, etc. 'chebyshev'
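
The default surrogate described above can be assembled explicitly and passed via the surrogate argument; the following minimal sketch reproduces that configuration (equivalent, per the description, to leaving surrogate=None):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel
from spotoptim import SpotOptim

# Default kernel and surrogate as documented for the `surrogate` parameter
kernel = ConstantKernel(1.0, (1e-2, 1e12)) * Matern(
    length_scale=1.0, length_scale_bounds=(1e-4, 1e2), nu=2.5
)
surrogate = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=100)

optimizer = SpotOptim(
    fun=lambda X: np.sum(X**2, axis=1),
    bounds=[(-5, 5), (-5, 5)],
    surrogate=surrogate,
    max_iter=10,
    n_initial=5,
)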

Attributes

Name Type Description
X_ ndarray All evaluated points, shape (n_samples, n_features).
y_ ndarray Function values at X_, shape (n_samples,). For multi-objective problems, these are the converted single-objective values.
y_mo ndarray or None Multi-objective function values, shape (n_samples, n_objectives). None for single-objective problems.
best_x_ ndarray Best point found, shape (n_features,).
best_y_ float Best function value found.
n_iter_ int Number of iterations performed. This is not the same as counter. Provided for compatibility with scipy.optimize routines.
counter int Total number of function evaluations.
success_rate float Rolling success rate over the last window_size evaluations. A success is counted when a new evaluation improves upon the best value found so far.
warnings_filter Literal['default', 'error', 'ignore'] Filter for warnings during optimization.
max_surrogate_points int or None Maximum number of points for surrogate fitting.
selection_method str Point selection method.
acquisition_failure_strategy str Strategy for handling acquisition failures (‘random’).
mean_X ndarray or None Aggregated unique design points (if repeats_surrogate > 1).
mean_y ndarray or None Mean y values per design point (if repeats_surrogate > 1).
var_y ndarray or None Variance of y values per design point (if repeats_surrogate > 1).
min_mean_X ndarray or None X value of best mean y (if repeats_surrogate > 1).
min_mean_y float or None Best mean y value (if repeats_surrogate > 1).
min_var_y float or None Variance of best mean y (if repeats_surrogate > 1).
de_x0_prob float Probability of using the best point as starting point for differential evolution.
tricands_fringe bool Whether to use the fringe of the design space for the initial design.
prob_de_tricands float Probability of using differential evolution as an optimizer on the surrogate model.

Examples

import numpy as np
from spotoptim import SpotOptim

def objective(X):
    return np.sum(X**2, axis=1)

# Example 1: Basic usage (deterministic function)
bounds = [(-5, 5), (-5, 5)]
optimizer = SpotOptim(fun=objective, bounds=bounds, max_iter=10, n_initial=5, verbose=True)
result = optimizer.optimize()
print("Best x:", result.x)
print("Best f(x):", result.fun)
TensorBoard logging disabled
Initial best: f(x) = 6.442487
Iter 1 | Best: 6.442487 | Curr: 6.446503 | Rate: 0.00 | Evals: 60.0%
Iter 2 | Best: 6.306723 | Rate: 0.50 | Evals: 70.0%
Iter 3 | Best: 4.695878 | Rate: 0.67 | Evals: 80.0%
Iter 4 | Best: 1.473589 | Rate: 0.75 | Evals: 90.0%
Iter 5 | Best: 0.463397 | Rate: 0.80 | Evals: 100.0%
Best x: [ 0.32388117 -0.59874669]
Best f(x): 0.46339660706114805
import numpy as np
from spotoptim import SpotOptim

def objective(X):
    return np.sum(X**2, axis=1)

# Example 2: With custom variable names
optimizer = SpotOptim(
    fun=objective,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["param1", "param2"],
    max_iter=10,
    n_initial=5
)
result = optimizer.optimize()
# Ensure we can use custom names in plots
optimizer.plot_surrogate(show=False)

import numpy as np
from spotoptim import SpotOptim

# Example 3: Noisy function with repeated evaluations
def noisy_objective(X):
    base = np.sum(X**2, axis=1)
    noise = np.random.normal(0, 0.1, size=base.shape)
    return base + noise

optimizer = SpotOptim(
    fun=noisy_objective,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5,
    repeats_initial=1,      # Evaluate each initial point once
    repeats_surrogate=2,    # Evaluate each new point twice
    seed=42,                # For reproducibility
    verbose=True
)
result = optimizer.optimize()

# Access noise statistics
print("Unique design points:", optimizer.mean_X.shape[0])
print("Best mean value:", optimizer.min_mean_y)
print("Variance at best point:", optimizer.min_var_y)
TensorBoard logging disabled
Initial best: f(x) = 3.403652, mean best: f(x) = 3.403652
Iter 1 | Best: 3.279049 | Rate: 0.50 | Evals: 70.0% | Mean Best: 3.369717
Iter 2 | Best: 3.279049 | Curr: 3.392847 | Rate: 0.25 | Evals: 90.0% | Mean Curr: 3.454693
Iter 3 | Best: 1.563307 | Rate: 0.50 | Evals: 110.0% | Mean Best: 1.613606
Unique design points: 8
Best mean value: 1.6136057205973113
Variance at best point: 0.002529978015323257
import numpy as np
from spotoptim import SpotOptim

def noisy_objective(X):
    base = np.sum(X**2, axis=1)
    noise = np.random.normal(0, 0.1, size=base.shape)
    return base + noise

# Example 4: Noisy function with OCBA (Optimal Computing Budget Allocation)
optimizer_ocba = SpotOptim(
    fun=noisy_objective,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=20,
    n_initial=5,
    repeats_initial=2,      # Initial repeats
    repeats_surrogate=1,    # Surrogate repeats
    ocba_delta=3,           # Allocate 3 additional evaluations per iteration
    seed=42,
    verbose=True
)
result = optimizer_ocba.optimize()

# OCBA intelligently re-evaluates promising points to reduce uncertainty
print("Total evaluations:", result.nfev)
print("Unique design points:", optimizer_ocba.mean_X.shape[0])
print("Best mean value:", optimizer_ocba.min_mean_y)
print("Variance at best point:", optimizer_ocba.min_var_y)
TensorBoard logging disabled
Initial best: f(x) = 3.328092, mean best: f(x) = 3.368681

In get_ocba():
means: [25.90094202 19.61660056 23.96405211  3.36868097 10.79578138]
vars: [6.73858271e-13 2.56053422e-03 1.00799409e-03 1.64745915e-03
 1.91555606e-03]
delta: 3
n_designs: 5
Ratios: [3.82210611e-11 2.79305049e-01 6.84325095e-02 9.58065217e-01
 1.00000000e+00]
Best: 3, Second best: 4
  OCBA: Adding 3 re-evaluation(s)
Iter 1 | Best: 3.103418 | Rate: 0.75 | Evals: 70.0% | Mean Best: 3.103418
Iter 2 | Best: 3.103418 | Curr: 3.354609 | Rate: 0.60 | Evals: 75.0% | Mean Curr: 3.354609
Iter 3 | Best: 1.613698 | Rate: 0.67 | Evals: 80.0% | Mean Best: 1.613698
Iter 4 | Best: 1.230184 | Rate: 0.71 | Evals: 85.0% | Mean Best: 1.230184
Iter 5 | Best: 0.449302 | Rate: 0.75 | Evals: 90.0% | Mean Best: 0.449302
Iter 6 | Best: 0.367152 | Rate: 0.78 | Evals: 95.0% | Mean Best: 0.367152
Iter 7 | Best: 0.367152 | Curr: 0.518600 | Rate: 0.70 | Evals: 100.0% | Mean Curr: 0.518600
Total evaluations: 20
Unique design points: 12
Best mean value: 0.3671523333490571
Variance at best point: 0.0
import numpy as np
import shutil
import os
from spotoptim import SpotOptim

def objective(X):
    return np.sum(X**2, axis=1)

# Example 5: With TensorBoard logging
tb_dir = "runs/my_optimization"
optimizer_tb = SpotOptim(
    fun=objective,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5,
    tensorboard_log=True,   # Enable TensorBoard
    tensorboard_path=tb_dir,  # Optional custom path
    verbose=True
)
result = optimizer_tb.optimize()

# View logs in browser: tensorboard --logdir=runs/my_optimization
print("Logs saved to:", optimizer_tb.tensorboard_path)

# Cleanup log dir
if os.path.exists(tb_dir):
    shutil.rmtree(tb_dir)
TensorBoard logging enabled: runs/my_optimization
Initial best: f(x) = 7.484293
Iter 1 | Best: 7.484293 | Curr: 7.608008 | Rate: 0.00 | Evals: 60.0%
Iter 2 | Best: 7.484293 | Curr: 7.531619 | Rate: 0.00 | Evals: 70.0%
Iter 3 | Best: 4.087368 | Rate: 0.33 | Evals: 80.0%
Iter 4 | Best: 0.876063 | Rate: 0.50 | Evals: 90.0%
Iter 5 | Best: 0.257266 | Rate: 0.60 | Evals: 100.0%
TensorBoard writer closed. View logs with: tensorboard --logdir=runs/my_optimization
Logs saved to: runs/my_optimization
import numpy as np
from spotoptim import SpotOptim
from spotoptim.surrogate import Kriging

def objective(X):
    return np.sum(X**2, axis=1)

# Example 6: Using SpotOptim's Kriging surrogate
kriging_model = Kriging(
    noise=1e-10,           # Regularization parameter
    kernel='gauss',         # Gaussian/RBF kernel
    min_theta=-3.0,         # Min log10(theta) bound
    max_theta=2.0,          # Max log10(theta) bound
    seed=42
)
optimizer_kriging = SpotOptim(
    fun=objective,
    bounds=[(-5, 5), (-5, 5)],
    surrogate=kriging_model,
    max_iter=10,
    n_initial=5,
    seed=42,
    verbose=True
)
result = optimizer_kriging.optimize()
print("Best solution found:", result.x)
print("Best value:", result.fun)
TensorBoard logging disabled
Initial best: f(x) = 3.251349
Iter 1 | Best: 3.251349 | Curr: 4.425773 | Rate: 0.00 | Evals: 60.0%
Iter 2 | Best: 1.617684 | Rate: 0.50 | Evals: 70.0%
Iter 3 | Best: 1.617684 | Curr: 18.683325 | Rate: 0.33 | Evals: 80.0%
Iter 4 | Best: 0.840474 | Rate: 0.50 | Evals: 90.0%
Iter 5 | Best: 0.103058 | Rate: 0.60 | Evals: 100.0%
Best solution found: [0.00141839 0.3210233 ]
Best value: 0.10305797418752223
import numpy as np
from spotoptim import SpotOptim
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

def objective(X):
    return np.sum(X**2, axis=1)

# Example 7: Using sklearn Gaussian Process with custom kernel
# Custom kernel: constant * RBF + white noise
custom_kernel = ConstantKernel(1.0, (1e-2, 1e2)) * RBF(
    length_scale=1.0, length_scale_bounds=(1e-1, 10.0)
) + WhiteKernel(noise_level=1e-5, noise_level_bounds=(1e-10, 1e-1))

gp_custom = GaussianProcessRegressor(
    kernel=custom_kernel,
    n_restarts_optimizer=15,
    normalize_y=True,
    random_state=42
)

optimizer_custom_gp = SpotOptim(
    fun=objective,
    bounds=[(-5, 5), (-5, 5)],
    surrogate=gp_custom,
    max_iter=10,
    n_initial=5,
    seed=42
)
result = optimizer_custom_gp.optimize()
import numpy as np
from spotoptim import SpotOptim
from sklearn.ensemble import RandomForestRegressor

def objective(X):
    return np.sum(X**2, axis=1)

# Example 8: Using Random Forest as surrogate
rf_model = RandomForestRegressor(
    n_estimators=100,
    max_depth=10,
    random_state=42
)

optimizer_rf = SpotOptim(
    fun=objective,
    bounds=[(-5, 5), (-5, 5)],
    surrogate=rf_model,
    max_iter=10,
    n_initial=5,
    seed=42
)
result = optimizer_rf.optimize()

# Note: Random Forests don't provide uncertainty estimates,
# so Expected Improvement (EI) may be less effective.
# Consider using acquisition='y' for pure exploitation.
import numpy as np
from spotoptim import SpotOptim
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RationalQuadratic, ConstantKernel, RBF

def objective(X):
    return np.sum(X**2, axis=1)

# Example 9: Comparing different kernels for Gaussian Process
# Matern kernel with nu=1.5 (once differentiable)
kernel_matern15 = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=1.5)
gp_matern15 = GaussianProcessRegressor(kernel=kernel_matern15, normalize_y=True)

# Matern kernel with nu=2.5 (twice differentiable, DEFAULT)
kernel_matern25 = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5)
gp_matern25 = GaussianProcessRegressor(kernel=kernel_matern25, normalize_y=True)

# RBF kernel (infinitely differentiable, smooth)
kernel_rbf = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp_rbf = GaussianProcessRegressor(kernel=kernel_rbf, normalize_y=True)

# Rational Quadratic kernel (mixture of RBF kernels)
kernel_rq = ConstantKernel(1.0) * RationalQuadratic(length_scale=1.0, alpha=1.0)
gp_rq = GaussianProcessRegressor(kernel=kernel_rq, normalize_y=True)

# Use any of these as surrogate
optimizer_rbf = SpotOptim(fun=objective, bounds=[(-5, 5), (-5, 5)],
                          surrogate=gp_rbf, max_iter=10, n_initial=5)
result = optimizer_rbf.optimize()
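
The following sketch (not part of the numbered examples above) illustrates the steady-state parallel mode controlled by n_jobs and eval_batch_size. It assumes a picklable, module-level objective, since evaluations are dispatched to worker processes; on platforms that spawn processes, wrap the call in an if __name__ == "__main__": guard. Runtimes and intermediate output will vary.

import numpy as np
from spotoptim import SpotOptim

def objective(X):
    # Module-level (picklable) objective for process-based workers
    return np.sum(X**2, axis=1)

# Sketch: steady-state parallel optimization (n_jobs > 1)
optimizer_parallel = SpotOptim(
    fun=objective,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=20,
    n_initial=5,
    n_jobs=2,            # Dispatch evaluations/searches across 2 worker processes
    eval_batch_size=2,   # Batch 2 candidates per fun(X_batch) call
    seed=42,
)
result = optimizer_parallel.optimize()
print("Best f(x):", result.fun)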

Methods

Name Description
aggregate_mean_var Aggregate X and y values to compute mean and variance per group.
apply_ocba Apply Optimal Computing Budget Allocation for noisy functions.
apply_penalty_NA Replace NaN and infinite values with penalty plus random noise.
check_size_initial_design Validate that initial design has sufficient points for surrogate fitting.
curate_initial_design Remove duplicates and ensure sufficient unique points in initial design.
detect_var_type Auto-detect variable types based on factor mappings.
determine_termination Determine termination reason for optimization.
evaluate_function Evaluate objective function at points X.
execute_optimization_run Dispatcher for optimization run (Sequential vs Steady-State Parallel).
fit_scheduler Fit surrogate model using appropriate data based on noise handling.
fit_select_best_cluster Selects all points from the cluster with the smallest mean y value.
fit_select_distant_points Selects k points that are distant from each other using K-means clustering.
fit_selection_dispatcher Dispatcher for selection methods.
fit_surrogate Fit surrogate model to data.
gen_design_table Generate a table of the design or results.
generate_initial_design Generate initial space-filling design using Latin Hypercube Sampling.
get_best_hyperparameters Get the best hyperparameter configuration found during optimization.
get_best_xy_initial_design Determine and store the best point from initial design.
get_design_table Get a table string showing the search space design before optimization.
get_experiment_filename Generate experiment filename with ’_exp.pkl’ suffix.
get_importance Calculate variable importance scores.
get_initial_design Generate or process initial design points; ensures that design points are in internal (transformed and reduced) scale.
get_ocba Optimal Computing Budget Allocation (OCBA).
get_ocba_X Calculate OCBA allocation and repeat input array X.
get_pickle_safe_optimizer Create a pickle-safe copy of the optimizer.
get_ranks Returns ranks of numbers within input array x.
get_result_filename Generate result filename with ’_res.pkl’ suffix.
get_results_table Get a comprehensive table string of optimization results.
get_shape Get the shape of the objective function output.
get_stars Converts a list of values to a list of stars.
get_success_rate Get the current success rate of the optimization process.
handle_default_var_trans Handle default variable transformations; does not perform any transformations.
init_storage Initialize storage for optimization.
init_surrogate Initialize or configure the surrogate model for optimization (handles the three supported surrogate configurations).
inverse_transform_X Transform parameter array from internal to original scale.
inverse_transform_value Apply inverse transformation to a single float value.
load_experiment Load experiment configuration from a pickle file.
load_result Load complete optimization results from a pickle file.
map_to_factor_values Map internal integer factor values back to string labels.
mo2so Convert multi-objective values to single-objective.
modify_bounds_based_on_var_type Modify bounds based on variable types.
optimize Run the optimization process; terminates when either the total function evaluations reach max_iter or max_time minutes have elapsed.
optimize_acquisition_func Optimize the acquisition function to find the next point to evaluate.
optimize_sequential_run Perform a single sequential optimization run.
optimize_steady_state Perform steady-state asynchronous optimization (n_jobs > 1).
plot_importance Plot variable importance.
plot_important_hyperparameter_contour Plot surrogate contours using spotoptim.plot.visualization.plot_important_hyperparameter_contour.
plot_parameter_scatter Plot parameter distributions showing relationship between each parameter and objective.
plot_progress Plot optimization progress using spotoptim.plot.visualization.plot_progress.
plot_surrogate Plot the surrogate model for two dimensions.
print_best Print the best solution found during optimization.
print_results Alias for print(get_results_table()) for compatibility.
process_factor_bounds Process bounds to handle factor variables.
reinitialize_components Reinitialize components that were excluded during pickling.
remove_nan Remove rows where y contains NaN or inf values.
repair_non_numeric Round non-numeric values to integers based on variable type.
rm_initial_design_NA_values Remove NaN/inf values from initial design evaluations.
save_experiment Save experiment configuration to a pickle file.
save_result Save complete optimization results to a pickle file.
select_new Select rows from A that are not in X.
sensitivity_spearman Compute and print Spearman correlation between parameters and objective values.
set_seed Set global random seeds for reproducibility.
setup_dimension_reduction Set up dimension reduction by identifying fixed dimensions.
store_mo Store multi-objective values in self.y_mo.
suggest_next_infill_point Suggest next point to evaluate (dispatcher).
to_all_dim Expand reduced-dimensional points to full-dimensional representation.
to_red_dim Reduce full-dimensional points to optimization space.
transform_X Transform parameter array from original (natural) to internal scale.
transform_bounds Transform bounds from original to internal scale.
transform_value Apply transformation to a single float value.
update_repeats_infill_points Repeat infill point for noisy function evaluation. Used in the sequential_loop.
update_stats Update optimization statistics.
update_storage Update storage (X_, y_) with new evaluation points.
update_success_rate Update the rolling success rate of the optimization process.
validate_x0 Validate and process starting point x0. Called in __init__ and optimize.

aggregate_mean_var

SpotOptim.SpotOptim.aggregate_mean_var(X, y)

Aggregate X and y values to compute mean and variance per group. For repeated evaluations at the same design point, this method computes the mean function value and variance (using population variance, ddof=0).

Parameters

Name Type Description Default
X ndarray Design points, shape (n_samples, n_features). required
y ndarray Function values, shape (n_samples,). required

Returns

Name Type Description
tuple Tuple[np.ndarray, np.ndarray, np.ndarray] A tuple containing: * X_agg (ndarray): Unique design points, shape (n_groups, n_features) * y_mean (ndarray): Mean y values per group, shape (n_groups,) * y_var (ndarray): Variance of y values per group, shape (n_groups,)

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                bounds=[(-5, 5), (-5, 5)],
                repeats_initial=2)
X = np.array([[1, 2], [3, 4], [1, 2]])
y = np.array([1, 2, 3])
X_agg, y_mean, y_var = opt.aggregate_mean_var(X, y)
print(X_agg.shape)
print(y_mean)
print(y_var)
(2, 2)
[2. 2.]
[1. 0.]

apply_ocba

SpotOptim.SpotOptim.apply_ocba()

Apply Optimal Computing Budget Allocation for noisy functions.

apply_penalty_NA

SpotOptim.SpotOptim.apply_penalty_NA(
    y,
    y_history=None,
    penalty_value=None,
    sd=0.1,
)

Replace NaN and infinite values with penalty plus random noise. Used in the optimize() method after function evaluations. This method follows the approach from spotpython.utils.repair.apply_penalty_NA, replacing NaN/inf values with a penalty value plus random noise to avoid identical penalty values.

Parameters

Name Type Description Default
y ndarray Array of objective function values to be repaired. required
y_history ndarray Historical objective function values used for computing penalty statistics. If None, uses y itself. Default is None. None
penalty_value float Value to replace NaN/inf with. If None, computes penalty as: max(finite_y_history) + 3 * std(finite_y_history). If all values are NaN/inf or only one finite value exists, falls back to self.penalty_val. Default is None. None
sd float Standard deviation for normal distributed random noise added to penalty. Default is 0.1. 0.1

Returns

Name Type Description
ndarray np.ndarray Array with NaN/inf replaced by penalty_value + random noise (normal distributed with mean 0 and standard deviation sd).

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1), bounds=[(-5, 5)])
y_hist = np.array([1.0, 2.0, 3.0, 5.0])
y_new = np.array([4.0, np.nan, np.inf])
y_clean = opt.apply_penalty_NA(y_new, y_history=y_hist)
print(f"np.all(np.isfinite(y_clean)): {np.all(np.isfinite(y_clean))}")
print(f"y_clean: {y_clean}")
# NaN/inf replaced with worst value from history + 3*std + noise
print(f"y_clean[1] > 5.0: {y_clean[1] > 5.0}")  # Should be larger than max finite value in history
np.all(np.isfinite(y_clean)): True
y_clean: [ 4.         10.02958509 10.21671416]
y_clean[1] > 5.0: True

check_size_initial_design

SpotOptim.SpotOptim.check_size_initial_design(y0, n_evaluated)

Validate that initial design has sufficient points for surrogate fitting.

Checks if the number of valid initial design points meets the minimum requirement for fitting a surrogate model. The minimum required is the smaller of: * (a) typical minimum for surrogate fitting (3 for multi-dimensional, 2 for 1D), or * (b) what the user requested (n_initial).

Parameters

Name Type Description Default
y0 ndarray Function values at initial design points (after filtering), shape (n_valid,). required
n_evaluated int Original number of points evaluated before filtering. required

Returns

Name Type Description
None None

Raises

Name Type Description
ValueError If the number of valid points is less than the minimum required.

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10
)
# Sufficient points - no error
y0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
opt.check_size_initial_design(y0, n_evaluated=10)

# Insufficient points - raises ValueError
y0_small = np.array([1.0])
try:
    opt.check_size_initial_design(y0_small, n_evaluated=10)
except ValueError as e:
    print(f"Error: {e}")

# With verbose output
opt_verbose = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10,
    verbose=True
)
y0_reduced = np.array([1.0, 2.0, 3.0])  # Less than n_initial but valid
opt_verbose.check_size_initial_design(y0_reduced, n_evaluated=10)
Error: Insufficient valid initial design points: only 1 finite value(s) out of 10 evaluated. Need at least 3 points to fit surrogate model. Please check your objective function or increase n_initial.
TensorBoard logging disabled
Note: Initial design size (3) is smaller than requested (10) due to NaN/inf values

curate_initial_design

SpotOptim.SpotOptim.curate_initial_design(X0)

Remove duplicates and ensure sufficient unique points in initial design.

This method removes duplicates that can occur after rounding integer/factor variables. If duplicates are found, it generates additional points to reach the target of n_initial unique points. It also handles repeating points when repeats_initial > 1.

Parameters

Name Type Description Default
X0 ndarray Initial design points in internal scale, shape (n_samples, n_features). required

Returns

Name Type Description
ndarray np.ndarray Curated initial design with duplicates removed and repeated if necessary, shape (n_unique * repeats_initial, n_features).

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10,
    var_type=['int', 'int']  # Integer variables may cause duplicates
)
X0 = opt.get_initial_design()
X0_curated = opt.curate_initial_design(X0)
X0_curated.shape[0] == 10  # Should have n_initial unique points
True
import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
# With repeats
opt_repeat = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    repeats_initial=3
)
X0 = opt_repeat.get_initial_design()
X0_curated = opt_repeat.curate_initial_design(X0)
X0_curated.shape[0] == 15  # 5 unique points * 3 repeats
True

detect_var_type

SpotOptim.SpotOptim.detect_var_type()

Auto-detect variable types based on factor mappings.

Returns

Name Type Description
list list List of variable types (‘factor’ or ‘float’) for each dimension. Dimensions with factor mappings are assigned ‘factor’, others ‘float’.

Examples

from spotoptim import SpotOptim

# Define a simple objective mapping names to values for demonstration
def objective(X):
    # X has shape (n_samples, n_dimensions)
    return X[:, 0] + X[:, 1]

# The first dimension has factor levels ('red', 'green', 'blue')
# The second dimension is continuous bounds (0, 10)
spot = SpotOptim(fun=objective, bounds=[('red', 'green', 'blue'), (0, 10)])
print(spot.detect_var_type())
['factor', 'float']

determine_termination

SpotOptim.SpotOptim.determine_termination(timeout_start)

Determine termination reason for optimization. Checks the termination conditions and returns an appropriate message indicating why the optimization stopped. Three possible termination conditions are checked in order of priority: 1. Maximum number of evaluations reached 2. Maximum time limit exceeded 3. Successful completion (neither limit reached)

Parameters

Name Type Description Default
timeout_start float Start time of optimization (from time.time()). required

Returns

Name Type Description
str str Message describing the termination reason.

Examples

import numpy as np
import time
from spotoptim import SpotOptim
opt = SpotOptim(
    fun=lambda X: np.sum(X**2, axis=1),
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    max_time=10.0
)
# Case 1: Maximum evaluations reached
opt.y_ = np.zeros(20)  # Simulate 20 evaluations
start_time = time.time()
msg = opt.determine_termination(start_time)
print(msg)
Optimization terminated: maximum evaluations (10) reached
# Case 2: Time limit exceeded
import numpy as np
import time
from spotoptim import SpotOptim
opt.y_ = np.zeros(10)  # Only 10 evaluations
start_time = time.time() - 700  # Simulate 11.67 minutes elapsed
msg = opt.determine_termination(start_time)
print(msg)
Optimization terminated: maximum evaluations (10) reached
# Case 3: Successful completion
import numpy as np
import time
from spotoptim import SpotOptim
opt.y_ = np.zeros(10)  # Under max_iter
start_time = time.time()  # Just started
msg = opt.determine_termination(start_time)
print(msg)
Optimization terminated: maximum evaluations (10) reached

evaluate_function

SpotOptim.SpotOptim.evaluate_function(X)

Evaluate objective function at points X. Used in the optimize() method to evaluate the objective function.

Input Space: X is expected in Transformed and Mapped Space (Internal scale, Reduced dimensions). Process as follows: 1. Expands X to Transformed Space (Full dimensions) if dimension reduction is active. 2. Inverse transforms X to Natural Space (Original scale). 3. Evaluates the user function with points in Natural Space.

If dimension reduction is active, expands X to full dimensions before evaluation. Supports both single-objective and multi-objective functions. For multi-objective functions, converts to single-objective using mo2so method.

Parameters

Name Type Description Default
X ndarray Points to evaluate in Transformed and Mapped Space, shape (n_samples, n_reduced_features). required

Returns

Name Type Description
ndarray np.ndarray Function values, shape (n_samples,).

Examples

import numpy as np
from spotoptim import SpotOptim
# Single-objective function
opt_so = SpotOptim(
    fun=lambda X: np.sum(X**2, axis=1),
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5
)
X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = opt_so.evaluate_function(X)
print(f"Single-objective output: {y}")
Single-objective output: [ 5. 25.]
import numpy as np
from spotoptim import SpotOptim
# Multi-objective function (default: use first objective)
opt_mo = SpotOptim(
    fun=lambda X: np.column_stack([
        np.sum(X**2, axis=1),
        np.sum((X-1)**2, axis=1)
    ]),
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5
)
y_mo = opt_mo.evaluate_function(X)
print(f"Multi-objective output (first obj): {y_mo}")
Multi-objective output (first obj): [ 5. 25.]

execute_optimization_run

SpotOptim.SpotOptim.execute_optimization_run(
    timeout_start,
    X0=None,
    y0_known=None,
    max_iter_override=None,
    shared_best_y=None,
    shared_lock=None,
)

Dispatcher for optimization run (Sequential vs Steady-State Parallel). Depending on n_jobs, calls optimize_steady_state (n_jobs > 1) or optimize_sequential_run (n_jobs == 1).

Parameters

Name Type Description Default
timeout_start float Start time for timeout. required
X0 Optional[np.ndarray] Initial design points in Natural Space, shape (n_initial, n_features). None
y0_known Optional[float] Known best value for initial design. None
max_iter_override Optional[int] Override for maximum number of iterations. None
shared_best_y Optional[float] Shared best value for parallel runs. None
shared_lock Optional[Lock] Shared lock for parallel runs. None

Returns

Name Type Description
Tuple[str, OptimizeResult] Tuple[str, OptimizeResult]: Tuple containing status and optimization result.

Examples

import time
import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    max_iter=10,
    seed=0,
    n_jobs=1,  # Use sequential optimization for deterministic output
    verbose=True
)
status, result = opt.execute_optimization_run(timeout_start=time.time())
print(status)
print(result.message.splitlines()[0])
TensorBoard logging disabled
Initial best: f(x) = 8.463203
Iter 1 | Best: 8.463203 | Curr: 18.224245 | Rate: 0.00 | Evals: 60.0%
Iter 2 | Best: 8.412459 | Rate: 0.50 | Evals: 70.0%
Iter 3 | Best: 5.623369 | Rate: 0.67 | Evals: 80.0%
Iter 4 | Best: 2.903005 | Rate: 0.75 | Evals: 90.0%
Iter 5 | Best: 0.022055 | Rate: 0.80 | Evals: 100.0%
FINISHED
Optimization terminated: maximum evaluations (10) reached

fit_scheduler

SpotOptim.SpotOptim.fit_scheduler()

Fit surrogate model using appropriate data based on noise handling. This method selects the appropriate training data for surrogate fitting: * For noisy functions (repeats_surrogate > 1): Uses mean_X and mean_y (aggregated values) * For deterministic functions: Uses X_ and y_ (all evaluated points) The data is transformed to internal scale before fitting the surrogate.

Returns

Name Type Description
None None

Examples

>>> import numpy as np
>>> from spotoptim import SpotOptim
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> # Deterministic function
>>> def sphere(X):
...     X = np.atleast_2d(X)
...     return np.sum(X**2, axis=1)
>>> opt = SpotOptim(
...     fun=sphere,
...     bounds=[(-5, 5), (-5, 5)],
...     surrogate=GaussianProcessRegressor(),
...     n_initial=5
... )
>>> # Simulate optimization state
>>> opt.X_ = np.array([[1, 2], [0, 0], [2, 1]])
>>> opt.y_ = np.array([5.0, 0.0, 5.0])
>>> opt.fit_scheduler()
>>> # Surrogate fitted with X_ and y_
>>>
>>> # Noisy function
>>> def sphere(X):
...     X = np.atleast_2d(X)
...     return np.sum(X**2, axis=1)
>>> opt_noise = SpotOptim(
...     fun=sphere,
...     bounds=[(-5, 5), (-5, 5)],
...     surrogate=GaussianProcessRegressor(),
...     n_initial=5,
...     repeats_initial=3,
... )
>>> # Simulate noisy optimization state
>>> opt_noise.mean_X = np.array([[1, 2], [0, 0]])
>>> opt_noise.mean_y = np.array([5.0, 0.0])
>>> opt_noise.fit_scheduler()
>>> # Surrogate fitted with mean_X and mean_y

fit_select_best_cluster

SpotOptim.SpotOptim.fit_select_best_cluster(X, y, k)

Selects all points from the cluster with the smallest mean y value. This method performs K-means clustering and selects all points from the cluster whose center corresponds to the best (smallest) mean objective function value.

Parameters

Name Type Description Default
X ndarray Design points, shape (n_samples, n_features). required
y ndarray Function values at X, shape (n_samples,). required
k int Number of clusters. required

Returns

Name Type Description
tuple Tuple[np.ndarray, np.ndarray] A tuple containing: * selected_X (ndarray): Selected design points from best cluster, shape (m, n_features). * selected_y (ndarray): Function values at selected points, shape (m,).

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                bounds=[(-5, 5), (-5, 5)],
                max_surrogate_points=5,
                selection_method='best')
X = np.random.rand(100, 2)
y = np.random.rand(100)
X_sel, y_sel = opt.fit_select_best_cluster(X, y, 5)
print(f"X_sel.shape: {X_sel.shape}")
print(f"y_sel.shape: {y_sel.shape}")
X_sel.shape: (25, 2)
y_sel.shape: (25,)

fit_select_distant_points

SpotOptim.SpotOptim.fit_select_distant_points(X, y, k)

Selects k points that are distant from each other using K-means clustering. This method performs K-means clustering to find k clusters, then selects the point closest to each cluster center. This ensures a space-filling subset of points for surrogate model training.

Parameters

Name Type Description Default
X ndarray Design points, shape (n_samples, n_features). required
y ndarray Function values at X, shape (n_samples,). required
k int Number of points to select. required

Returns

Name Type Description
tuple Tuple[np.ndarray, np.ndarray] A tuple containing: * selected_X (ndarray): Selected design points, shape (k, n_features). * selected_y (ndarray): Function values at selected points, shape (k,).

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                bounds=[(-5, 5), (-5, 5)],
                max_surrogate_points=5)
X = np.random.rand(100, 2)
y = np.random.rand(100)
X_sel, y_sel = opt.fit_select_distant_points(X, y, 5)
print(X_sel.shape)
(5, 2)

fit_selection_dispatcher

SpotOptim.SpotOptim.fit_selection_dispatcher(X, y)

Dispatcher for selection methods. Depending on the value of self.selection_method, this method calls the appropriate selection function to choose a subset of points for surrogate model training when the total number of points exceeds self.max_surrogate_points.

Parameters

Name Type Description Default
X ndarray Design points, shape (n_samples, n_features). required
y ndarray Function values at X, shape (n_samples,). required

Returns

Name Type Description
tuple Tuple[np.ndarray, np.ndarray] A tuple containing: * selected_X (ndarray): Selected design points. * selected_y (ndarray): Function values at selected points.

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                bounds=[(-5, 5), (-5, 5)],
                max_surrogate_points=5)
X = np.random.rand(100, 2)
y = np.random.rand(100)
X_sel, y_sel = opt.fit_selection_dispatcher(X, y)
print(X_sel.shape[0] <= 5)
True

fit_surrogate

SpotOptim.SpotOptim.fit_surrogate(X, y)

Fit surrogate model to data. Used by fit_scheduler() to fit the surrogate model. If the number of points exceeds self.max_surrogate_points, a subset of points is selected using the selection dispatcher.

Parameters

Name Type Description Default
X ndarray Design points, shape (n_samples, n_features). required
y ndarray Function values at X, shape (n_samples,). required

Returns

Name Type Description
None None

Examples

>>> import numpy as np
>>> from spotoptim import SpotOptim
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> def sphere(X):
...     X = np.atleast_2d(X)
...     return np.sum(X**2, axis=1)
>>> opt = SpotOptim(fun=sphere,
...                 bounds=[(-5, 5), (-5, 5)],
...                 max_surrogate_points=10,
...                 surrogate=GaussianProcessRegressor())
>>> X = np.random.rand(50, 2)
>>> y = np.random.rand(50)
>>> opt.fit_surrogate(X, y)
>>> # Surrogate is now fitted

gen_design_table

SpotOptim.SpotOptim.gen_design_table(precision=4, tablefmt='github')

Generate a table of the design or results. If optimization has been run (results available), returns the results table. Otherwise, returns the design table (search space configuration).

Parameters

Name Type Description Default
tablefmt str Table format. Defaults to ‘github’. 'github'
precision int Number of decimal places for float values. Defaults to 4. 4

Returns

Name Type Description
str str Formatted table string.

Examples

import numpy as np
from spotoptim import SpotOptim

def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-10, 10), (0, 1)],
    var_name=["x1", "x2", "x3"],
    var_type=["float", "int", "float"],
    max_iter=10,
    n_initial=5
)
table = opt.gen_design_table()
print(table)
|   name |   type |   lower |   upper |   default |   transform |
|--------|--------|---------|---------|-----------|-------------|
|     x1 |  float |      -5 |       5 |         0 |           - |
|     x2 |    int |     -10 |      10 |         0 |           - |
|     x3 |  float |       0 |       1 |       0.5 |           - |

generate_initial_design

SpotOptim.SpotOptim.generate_initial_design()

Generate initial space-filling design using Latin Hypercube Sampling. Used in the optimize() method to create the initial set of design points.

Returns

Name Type Description
ndarray np.ndarray Initial design points, shape (n_initial, n_features). Points are in the intervals defined by self.bounds.

Examples

from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere,
                bounds=[(-5, 5), (-5, 5)],
                n_initial=3,
                var_type=['float', 'int'],
                var_trans=['log10', None])
X0 = opt.generate_initial_design()
print(X0.shape)
(3, 2)

get_best_hyperparameters

SpotOptim.SpotOptim.get_best_hyperparameters(as_dict=True)

Get the best hyperparameter configuration found during optimization. If noise handling is active (repeats_initial > 1 or OCBA), this returns the parameter configuration associated with the best mean objective value. Otherwise, it returns the configuration associated with the absolute best observed value.

Parameters

Name Type Description Default
as_dict bool If True, returns a dictionary mapping parameter names to their values. If False, returns the raw numpy array. Defaults to True. True

Returns

Name Type Description
Union[Dict[str, Any], np.ndarray, None] Union[Dict[str, Any], np.ndarray, None]: The best hyperparameter configuration. Returns None if optimization hasn’t started (no data).

Examples

from spotoptim import SpotOptim
from spotoptim.function import sphere

opt = SpotOptim(fun=sphere,
                bounds=[(-5, 5), (0, 10)],
                n_initial=5,
                var_name=["x", "y"],
                verbose=True)
opt.optimize()
best_params = opt.get_best_hyperparameters()
print(best_params['x']) # Should be close to 0
TensorBoard logging disabled
Initial best: f(x) = 9.011777
Iter 1 | Best: 8.092894 | Rate: 1.00 | Evals: 30.0%
Iter 2 | Best: 1.768456 | Rate: 1.00 | Evals: 35.0%
Iter 3 | Best: 1.768456 | Curr: 2.015312 | Rate: 0.67 | Evals: 40.0%
Iter 4 | Best: 0.590758 | Rate: 0.75 | Evals: 45.0%
Iter 5 | Best: 0.009846 | Rate: 0.80 | Evals: 50.0%
Iter 6 | Best: 0.002116 | Rate: 0.83 | Evals: 55.0%
Iter 7 | Best: 0.000007 | Rate: 0.86 | Evals: 60.0%
Iter 8 | Best: 0.000001 | Rate: 0.88 | Evals: 65.0%
Iter 9 | Best: 0.000000 | Rate: 0.89 | Evals: 70.0%
Iter 10 | Best: 0.000000 | Rate: 0.90 | Evals: 75.0%
Iter 11 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.82 | Evals: 80.0%
Iter 12 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.75 | Evals: 85.0%
Iter 13 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.69 | Evals: 90.0%
Iter 14 | Best: 0.000000 | Curr: 0.000000 | Rate: 0.64 | Evals: 95.0%
Iter 15 | Best: 0.000000 | Rate: 0.67 | Evals: 100.0%
-0.0005239999091366521

get_best_xy_initial_design

SpotOptim.SpotOptim.get_best_xy_initial_design()

Determine and store the best point from initial design. Finds the best (minimum) function value in the initial design, stores the corresponding point and value in instance attributes, and optionally prints the results if verbose mode is enabled. For noisy functions, also reports the mean best value.

Note

This method assumes self.X_ and self.y_ have been initialized with the initial design evaluations.

Returns

Name Type Description
None None

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    verbose=True
)
# Simulate initial design (normally done in optimize())
opt.X_ = np.array([[1, 2], [0, 0], [2, 1]])
opt.y_ = np.array([5.0, 0.0, 5.0])
opt.get_best_xy_initial_design()
print(f"Best x: {opt.best_x_}")
print(f"Best y: {opt.best_y_}")
TensorBoard logging disabled
Initial best: f(x) = 0.000000
Best x: [0 0]
Best y: 0.0
import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import noisy_sphere
# With noisy function
opt_noise = SpotOptim(
    fun=noisy_sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    repeats_surrogate=2,
    verbose=True
)
opt_noise.X_ = np.array([[1, 2], [0, 0], [2, 1]])
opt_noise.y_ = np.array([5.0, 0.0, 5.0])
opt_noise.min_mean_y = 0.5  # Simulated mean best
opt_noise.get_best_xy_initial_design()
print(f"Best x: {opt_noise.best_x_}")
print(f"Best y: {opt_noise.best_y_}")
TensorBoard logging disabled
Initial best: f(x) = 0.000000, mean best: f(x) = 0.500000
Best x: [0 0]
Best y: 0.0

get_design_table

SpotOptim.SpotOptim.get_design_table(tablefmt='github', precision=4)

Get a table string showing the search space design before optimization. This method generates a table displaying the variable names, types, bounds, and defaults without requiring an optimization run. Useful for inspecting and documenting the search space configuration.

Parameters

Name Type Description Default
tablefmt str Table format for tabulate library. Defaults to ‘github’. 'github'
precision int Number of decimal places for float values. Defaults to 4. 4

Returns

Name Type Description
str str Formatted table string.

Examples

import numpy as np
from spotoptim import SpotOptim

def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-10, 10), (0, 1)],
    var_name=["x1", "x2", "x3"],
    var_type=["float", "int", "float"],
    max_iter=10,
    n_initial=5
)
table = opt.get_design_table()
print(table)
|   name |   type |   lower |   upper |   default |   transform |
|--------|--------|---------|---------|-----------|-------------|
|     x1 |  float |      -5 |       5 |         0 |           - |
|     x2 |    int |     -10 |      10 |         0 |           - |
|     x3 |  float |       0 |       1 |       0.5 |           - |

get_experiment_filename

SpotOptim.SpotOptim.get_experiment_filename(prefix)

Generate experiment filename with ’_exp.pkl’ suffix.
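
A minimal usage sketch; per the description, the given prefix is extended with the ‘_exp.pkl’ suffix (the exact path handling is not asserted here):

import numpy as np
from spotoptim import SpotOptim

opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1), bounds=[(-5, 5)])
fname = opt.get_experiment_filename("my_run")
print(fname)  # Expected to end with "_exp.pkl"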

get_importance

SpotOptim.SpotOptim.get_importance()

Calculate variable importance scores. Importance is computed as the normalized sensitivity of each parameter based on the variation in objective values across the evaluated points. Higher scores indicate parameters that have more influence on the objective. The importance is calculated as: 1. For each dimension, compute the correlation between parameter values and objective values 2. Normalize to percentage scale (0-100) 3. Higher values indicate more important parameters

Returns

Name Type Description
List[float] List[float]: Importance scores for each dimension (0-100 scale).

Examples

import numpy as np
from spotoptim import SpotOptim

def test_func(X):
    # x0 has strong effect, x1 has weak effect
    return 10 * X[:, 0]**2 + 0.1 * X[:, 1]**2

opt = SpotOptim(
    fun=test_func,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x0", "x1"],
    max_iter=10,
    n_initial=5,
    seed=42
)
result = opt.optimize()
importance = opt.get_importance()
print(f"x0 importance: {importance[0]:.2f}")
print(f"x1 importance: {importance[1]:.2f}")

# Use table to display importance
table = opt.get_results_table(show_importance=True)
print(table)
x0 importance: 73.24
x1 importance: 26.76
|   name |   type |   default |   lower |   upper |   tuned |   transform |   importance |   stars |
|--------|--------|-----------|---------|---------|---------|-------------|--------------|---------|
|     x0 |  float |         0 |      -5 |       5 |    0.17 |           - |        73.24 |       * |
|     x1 |  float |         0 |      -5 |       5 |  1.9555 |           - |        26.76 |       . |

Interpretation: ***: >99%, **: >75%, *: >50%, .: >10%

get_initial_design

SpotOptim.SpotOptim.get_initial_design(X0=None)

Generate or process initial design points. Ensures that design points are in internal (transformed and reduced) scale. Calls generate_initial_design() if X0 is None, otherwise processes user-provided X0. Handles three scenarios: * X0 is None: Generate space-filling design using LHS * X0 is None but starting point(s) x0 is provided: Generate LHS and include x0 as first point(s) * X0 is provided: Transform and prepare user-provided initial design

Parameters

Name Type Description Default
X0 ndarray User-provided initial design points in original scale, shape (n_initial, n_features). If None, generates space-filling design. Defaults to None. None

Returns

Name Type Description
ndarray np.ndarray Initial design points in internal (transformed and reduced) scale, shape (n_initial, n_features_reduced).

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
from spotoptim.plot.visualization import plot_design_points
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10
)
# Generate default LHS design
X0 = opt.get_initial_design()
print(X0.shape)
plot_design_points(X0)
(10, 2)

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
from spotoptim.plot.visualization import plot_design_points
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10,
    x0=np.array([0, 0])  # Starting point to include in initial design
)
X0 = opt.get_initial_design()
print(X0.shape)
plot_design_points(X0)
(10, 2)

get_ocba

SpotOptim.SpotOptim.get_ocba(means, vars, delta, verbose=False)

Optimal Computing Budget Allocation (OCBA).
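
A minimal sketch of calling get_ocba directly with per-design means and variances (at least three designs with positive variance, as noted for ocba_delta). The return value is assumed here to be the OCBA allocation of the delta additional evaluations across the designs:

import numpy as np
from spotoptim import SpotOptim

opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1), bounds=[(-5, 5), (-5, 5)])
means = np.array([3.4, 1.6, 0.45])        # Mean objective value per design point
variances = np.array([0.02, 0.01, 0.03])  # Variance per design point (must be positive)
allocation = opt.get_ocba(means, variances, delta=3)
print(allocation)  # Assumed: extra evaluations allocated to each design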

get_ocba_X

SpotOptim.SpotOptim.get_ocba_X(X, means, vars, delta, verbose=False)

Calculate OCBA allocation and repeat input array X.

get_pickle_safe_optimizer

SpotOptim.SpotOptim.get_pickle_safe_optimizer(
    unpickleables='file_io',
    verbosity=0,
)

Create a pickle-safe copy of the optimizer.
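
Examples

A minimal sketch, assuming the returned copy can be pickled with the standard library once unpickleable members (the default 'file_io' group) have been stripped.

import pickle
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5), (-5, 5)], n_initial=5, max_iter=10, seed=0)
opt.optimize()
# Copy with unpickleable members (e.g. file handles) removed
safe_opt = opt.get_pickle_safe_optimizer()
data = pickle.dumps(safe_opt)
print(len(data) > 0)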

get_ranks

SpotOptim.SpotOptim.get_ranks(x)

Returns ranks of numbers within input array x.
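
Examples

A minimal sketch, assuming the smallest value receives the lowest rank; whether ranks start at 0 or 1 is not documented here.

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5)])
# Ranks of the values 30, 10, 20 (10 gets the lowest rank)
print(opt.get_ranks(np.array([30.0, 10.0, 20.0])))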

get_result_filename

SpotOptim.SpotOptim.get_result_filename(prefix)

Generate result filename with ’_res.pkl’ suffix.
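
Examples

A minimal sketch; the returned name is expected to end with '_res.pkl', and any additional path handling is an assumption.

from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5)])
print(opt.get_result_filename("myrun"))  # expected to end with '_res.pkl'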

get_results_table

SpotOptim.SpotOptim.get_results_table(
    tablefmt='github',
    precision=4,
    show_importance=False,
)

Get a comprehensive table string of optimization results. This method generates a formatted table of the search space configuration, best values found, and optionally variable importance scores.

Parameters

Name Type Description Default
tablefmt str Table format for tabulate library. Options include: ‘github’, ‘grid’, ‘simple’, ‘plain’, ‘html’, ‘latex’, etc. Defaults to ‘github’. 'github'
precision int Number of decimal places for float values. Defaults to 4. 4
show_importance bool Whether to include importance scores. Importance is calculated as the normalized standard deviation of each parameter’s effect on the objective. Requires multiple evaluations. Defaults to False. False

Returns

Name Type Description
str str Formatted table string that can be printed or saved.

Examples

import numpy as np
from spotoptim import SpotOptim

# Example 1: Basic usage after optimization
def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5), (-5, 5)],
    var_name=["x1", "x2", "x3"],
    var_type=["float", "float", "float"],
    max_iter=10,
    n_initial=5
)
result = opt.optimize()
table = opt.get_results_table()
print(table)
table = opt.get_results_table(show_importance=True)
print(table)
|   name |   type |   default |   lower |   upper |   tuned |   transform |
|--------|--------|-----------|---------|---------|---------|-------------|
|     x1 |  float |         0 |      -5 |       5 | -0.0368 |           - |
|     x2 |  float |         0 |      -5 |       5 | -0.6109 |           - |
|     x3 |  float |         0 |      -5 |       5 |   0.268 |           - |
|   name |   type |   default |   lower |   upper |   tuned |   transform |   importance |   stars |
|--------|--------|-----------|---------|---------|---------|-------------|--------------|---------|
|     x1 |  float |         0 |      -5 |       5 | -0.0368 |           - |        37.05 |       . |
|     x2 |  float |         0 |      -5 |       5 | -0.6109 |           - |         5.22 |         |
|     x3 |  float |         0 |      -5 |       5 |   0.268 |           - |        57.73 |       * |

Interpretation: ***: >99%, **: >75%, *: >50%, .: >10%

get_shape

SpotOptim.SpotOptim.get_shape(y)

Get the shape of the objective function output.

Parameters

Name Type Description Default
y ndarray Objective function output, shape (n_samples,) or (n_samples, n_objectives). required

Returns

Name Type Description
tuple Tuple[int, Optional[int]] (n_samples, n_objectives) where n_objectives is None for single-objective.

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(
    fun=lambda X: np.sum(X**2, axis=1),
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5
)
y_single = np.array([1.0, 2.0, 3.0])
n, m = opt.get_shape(y_single)
print(f"n={n}, m={m}")
y_multi = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
n, m = opt.get_shape(y_multi)
print(f"n={n}, m={m}")
n=3, m=None
n=3, m=2

get_stars

SpotOptim.SpotOptim.get_stars(input_list)

Converts a list of values to a list of stars. Used to visualize the importance of a variable. Thresholds: >99: ***, >75: **, >50: *, >10: .

Parameters

Name Type Description Default
input_list list A list of importance scores (0-100). required

Returns

Name Type Description
list list A list of star strings.

Examples

from spotoptim import SpotOptim
import numpy as np

def test_func(X):
    return 10 * X[:, 0]**2 + 0.1 * X[:, 1]**2

opt = SpotOptim(
    fun=test_func,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x0", "x1"],
    max_iter=10,
    n_initial=5,
    seed=42
)
opt.optimize()
opt.get_stars([100, 75, 50, 10, 0])
['***', '*', '.', '', '']

get_success_rate

SpotOptim.SpotOptim.get_success_rate()

Get the current success rate of the optimization process.

Returns

Name Type Description
float float The current success rate.

Examples

from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda x: x,
                bounds=[(-5, 5), (-5, 5)])
print(opt.get_success_rate())
0.0

handle_default_var_trans

SpotOptim.SpotOptim.handle_default_var_trans()

Handle default variable transformations. Does not perform any transformations, only sets var_trans to a list of None values if not specified, or normalizes transformation names by converting ‘id’ or ‘None’ entries to None. Also validates that var_trans length matches the number of dimensions.

Returns

Name Type Description
None None

Raises

Name Type Description
ValueError If var_trans length doesn’t match n_dim.

Examples

from spotoptim import SpotOptim
# Default behavior - all None
spot = SpotOptim(fun=lambda x: x, bounds=[(0, 10), (0, 10)])
print(f"spot.var_trans (should be [None, None]): {spot.var_trans}")
spot.var_trans (should be [None, None]): [None, None]
from spotoptim import SpotOptim
# Normalize transformation names
spot = SpotOptim(fun=lambda x: x, bounds=[(1, 10), (1, 100)],
                 var_trans=['log10', 'id'])
print(f"spot.var_trans (should be ['log10', 'None']): {spot.var_trans}")
spot.var_trans (should be ['log10', 'None']): ['log10', None]

init_storage

SpotOptim.SpotOptim.init_storage(X0, y0)

Initialize storage for optimization. Sets up the initial data structures needed for optimization tracking: * X_: Evaluated design points (in original scale) * y_: Function values at evaluated points * n_iter_: Iteration counter Then updates statistics by calling update_stats().

Parameters

Name Type Description Default
X0 ndarray Initial design points in internal scale, shape (n_samples, n_features). required
y0 ndarray Function values at X0, shape (n_samples,). required

Returns

Name Type Description
None None

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                bounds=[(-5, 5), (-5, 5)],
                n_initial=5)
X0 = np.array([[1, 2], [3, 4], [0, 1]])
y0 = np.array([5.0, 25.0, 1.0])
opt.init_storage(X0, y0)
print(f"X_ = {opt.X_}")
print(f"y_ = {opt.y_}")
print(f"n_iter_ = {opt.n_iter_}")
print(f"counter = {opt.counter}")
X_ = [[1 2]
 [3 4]
 [0 1]]
y_ = [ 5. 25.  1.]
n_iter_ = 0
counter = 0

init_surrogate

SpotOptim.SpotOptim.init_surrogate()

Initialize or configure the surrogate model for optimization. Handles three surrogate configurations: * List of surrogates: sets up multi-surrogate selection with probability weights and per-surrogate max_surrogate_points. * None (default): creates a GaussianProcessRegressor with a ConstantKernel * Matern(nu=2.5) kernel, 100 optimizer restarts, and normalize_y=True. * User-provided surrogate: accepted as-is; internal bookkeeping attributes (_max_surrogate_points_list, _active_max_surrogate_points) are still initialised. After this method returns, the following attributes are set: * self.surrogate — the active surrogate model. * self._surrogates_list — the list of candidate surrogates, or None. * self._prob_surrogate — normalised selection probabilities or None. * self._max_surrogate_points_list — per-surrogate point caps or None. * self._active_max_surrogate_points — active cap.

Raises

Name Type Description
ValueError If the surrogate list is empty.
ValueError If ‘prob_surrogate’ length does not match the surrogate list length.
ValueError If ‘max_surrogate_points’ list length does not match the surrogate list length.

Returns

Name Type Description
None None

Examples

import numpy as np
from spotoptim import SpotOptim
# Default surrogate (GaussianProcessRegressor)
opt = SpotOptim(
    fun=lambda X: np.sum(X**2, axis=1),
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
)
print(type(opt.surrogate).__name__)
GaussianProcessRegressor
import numpy as np
from spotoptim import SpotOptim
from sklearn.ensemble import RandomForestRegressor
# User-provided surrogate
rf = RandomForestRegressor(n_estimators=50, random_state=42)
opt = SpotOptim(
    fun=lambda X: np.sum(X**2, axis=1),
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    surrogate=rf,
)
print(type(opt.surrogate).__name__)
RandomForestRegressor
import numpy as np
from spotoptim import SpotOptim
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
# List of surrogates with selection probabilities
surrogates = [GaussianProcessRegressor(), RandomForestRegressor()]
opt = SpotOptim(
    fun=lambda X: np.sum(X**2, axis=1),
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    surrogate=surrogates,
    prob_surrogate=[0.7, 0.3],
)
print(opt._prob_surrogate)
print([type(s).__name__ for s in opt._surrogates_list])
[0.7, 0.3]
['GaussianProcessRegressor', 'RandomForestRegressor']

inverse_transform_X

SpotOptim.SpotOptim.inverse_transform_X(X)

Transform parameter array from internal to original scale. Converts from transformed space (full dimension) to natural space (original). Does NOT handle dimension expansion (un-mapping).

Parameters

Name Type Description Default
X ndarray Array in Transformed Space, shape (n_samples, n_features) required

Returns

Name Type Description
ndarray np.ndarray Array in Natural Space

Examples

from spotoptim import SpotOptim
from spotoptim.function import sphere
import numpy as np
spot = SpotOptim(fun=sphere, bounds=[(1, 10)], var_trans=['log10'])
X_trans = np.array([[0], [1], [2]])
spot.inverse_transform_X(X_trans)
array([[  1],
       [ 10],
       [100]])

inverse_transform_value

SpotOptim.SpotOptim.inverse_transform_value(x, trans)

Apply inverse transformation to a single float value.

Parameters

Name Type Description Default
x float Transformed value required
trans Optional[str] Transformation name. required

Returns

Name Type Description
float Original value

Notes

See also transform_value.

Examples

from spotoptim import SpotOptim
from spotoptim.function import sphere
spot = SpotOptim(fun=sphere, bounds=[(1, 10)])
spot.inverse_transform_value(10, 'log10')
spot.inverse_transform_value(100, 'log(x)')
np.float64(2.6881171418161356e+43)

load_experiment

SpotOptim.SpotOptim.load_experiment(filename)

Load experiment configuration from a pickle file.

load_result

SpotOptim.SpotOptim.load_result(filename)

Load complete optimization results from a pickle file.
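
Examples

A hedged round-trip sketch using save_result with an explicit filename; whether load_result returns the stored data or populates the optimizer's attributes is an assumption to verify.

from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5), (-5, 5)], n_initial=5, max_iter=10, seed=0)
opt.optimize()
opt.save_result(filename="demo_res.pkl")
# Reload the saved results into a fresh optimizer instance
opt2 = SpotOptim(fun=sphere, bounds=[(-5, 5), (-5, 5)])
loaded = opt2.load_result("demo_res.pkl")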

map_to_factor_values

SpotOptim.SpotOptim.map_to_factor_values(X)

Map internal integer factor values back to string labels. For factor variables, converts integer indices back to original string values. Other variable types remain unchanged.

Parameters

Name Type Description Default
X ndarray Design points with integer values for factors, shape (n_samples, n_features). required

Returns

Name Type Description
ndarray np.ndarray Design points with factor integers replaced by string labels. Dtype will be object or string if mixed types are present.

Examples

from spotoptim import SpotOptim
from spotoptim.function import sphere
import numpy as np
spot = SpotOptim(
    fun=sphere,
    bounds=[('red', 'blue'), (0, 10)]
)
spot.process_factor_bounds()
X_int = np.array([[0, 5.0], [1, 8.0]])
X_str = spot.map_to_factor_values(X_int)
print(X_str[0])
['red' 5.0]

mo2so

SpotOptim.SpotOptim.mo2so(y_mo)

Convert multi-objective values to single-objective. Converts multi-objective values to a single-objective value by applying a user-defined function from fun_mo2so. If no user-defined function is given, the values in the first objective column are used.

This method is called after the objective function evaluation. It returns a 1D array with the single-objective values.

Parameters

Name Type Description Default
y_mo ndarray If multi-objective, shape (n_samples, n_objectives). If single-objective, shape (n_samples,). required

Returns

Name Type Description
ndarray np.ndarray Single-objective values, shape (n_samples,).

Examples

import numpy as np
from spotoptim import SpotOptim

# Multi-objective function
def mo_fun(X):
    return np.column_stack([
        np.sum(X**2, axis=1),
        np.sum((X-1)**2, axis=1)
    ])

# Example 1: Default behavior (use first objective)
opt1 = SpotOptim(
    fun=mo_fun,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5
)
y_mo = np.array([[1.0, 2.0], [3.0, 4.0]])
y_so = opt1.mo2so(y_mo)
print(f"Single-objective (default): {y_so}")
Single-objective (default): [1. 3.]
import numpy as np
from spotoptim import SpotOptim
# Example 2: Custom conversion function (sum of objectives)
def custom_mo2so(y_mo):
    return y_mo[:, 0] + y_mo[:, 1]

opt2 = SpotOptim(
    fun=mo_fun,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5,
    fun_mo2so=custom_mo2so
)
y_so_custom = opt2.mo2so(y_mo)
print(f"Single-objective (custom): {y_so_custom}")
Single-objective (custom): [3. 7.]

modify_bounds_based_on_var_type

SpotOptim.SpotOptim.modify_bounds_based_on_var_type()

Modify bounds based on variable types. Adjusts bounds for each dimension according to its var_type: * ‘int’: Ensures bounds are integers (ceiling for lower, floor for upper) * ‘factor’: Bounds already set to (0, n_levels-1) by process_factor_bounds * ‘float’: Explicitly converts bounds to float

Returns

Name Type Description
None None

Raises

Name Type Description
ValueError If an unsupported var_type is encountered.

Examples

from spotoptim import SpotOptim
spot = SpotOptim(fun=lambda x: x, bounds=[(0.5, 10.5)], var_type=['int'])
print(spot.bounds)
[(1, 10)]
from spotoptim import SpotOptim
spot = SpotOptim(fun=lambda x: x, bounds=[(0, 10)], var_type=['float'])
print(spot.bounds)
[(0.0, 10.0)]

optimize

SpotOptim.SpotOptim.optimize(X0=None)

Run the optimization process. The optimization terminates when either the total function evaluations reach max_iter (including initial design), or the runtime exceeds max_time minutes. Input/Output spaces are * Input X0: Expected in Natural Space (original scale, physical units). * Output result.x: Returned in Natural Space. * Output result.X: Returned in Natural Space. * Internal Optimization: Performed in Transformed and Mapped Space.

Parameters

Name Type Description Default
X0 ndarray Initial design points in Natural Space, shape (n_initial, n_features). If None, generates space-filling design. Defaults to None. None

Returns

Name Type Description
OptimizeResult OptimizeResult Optimization result with fields: * x: best point found in Natural Space * fun: best function value * nfev: number of function evaluations (including initial design) * nit: number of sequential optimization iterations (after initial design) * success: whether optimization succeeded * message: termination message indicating reason for stopping, including statistics (function value, iterations, evaluations) * X: all evaluated points in Natural Space * y: all function values

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    max_iter=10,
    seed=0,
    x0=np.array([0.1, -0.1]),
    verbose=True
)
result = opt.optimize()
print(result.message.splitlines()[0])
print("Best point:", result.x)
print("Best value:", result.fun)
Starting point x0 validated and processed successfully.
  Original scale: [ 0.1 -0.1]
  Internal scale: [ 0.1 -0.1]
TensorBoard logging disabled
Including 1 starting points from x0 in initial design.
Initial best: f(x) = 0.020000
Iter 1 | Best: 0.020000 | Curr: 14.707944 | Rate: 0.00 | Evals: 60.0%
Optimizer candidate 1/3 was duplicate/invalid.
Iter 2 | Best: 0.020000 | Curr: 0.020020 | Rate: 0.00 | Evals: 70.0%
Iter 3 | Best: 0.020000 | Curr: 0.322913 | Rate: 0.00 | Evals: 80.0%
Iter 4 | Best: 0.002222 | Rate: 0.25 | Evals: 90.0%
Iter 5 | Best: 0.002222 | Curr: 0.002281 | Rate: 0.20 | Evals: 100.0%
Optimization terminated: maximum evaluations (10) reached
Best point: [0.03557244 0.03092452]
Best value: 0.002221724893276279

optimize_acquisition_func

SpotOptim.SpotOptim.optimize_acquisition_func()

Optimize the acquisition function to find the next point to evaluate.

Returns

Name Type Description
ndarray np.ndarray The optimized point(s). If acquisition_fun_return_size == 1, returns 1D array of shape (n_features,). If acquisition_fun_return_size > 1, returns 2D array of shape (N, n_features), where N is min(acquisition_fun_return_size, population_size).

Examples

import numpy as np
from spotoptim import SpotOptim
def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    max_iter=10,
    seed=0,
)
opt.optimize()
x_next = opt.suggest_next_infill_point()
print("Next point to evaluate:", x_next)
Next point to evaluate: [[0.14350858 0.037982  ]]

optimize_sequential_run

SpotOptim.SpotOptim.optimize_sequential_run(
    timeout_start,
    X0=None,
    y0_known=None,
    max_iter_override=None,
    shared_best_y=None,
    shared_lock=None,
)

Perform a single sequential optimization run. Calls _initialize_run, rm_initial_design_NA_values, check_size_initial_design, init_storage, get_best_xy_initial_design, and _run_sequential_loop.

Parameters

Name Type Description Default
timeout_start float Start time for timeout. required
X0 Optional[np.ndarray] Initial design points in Natural Space, shape (n_initial, n_features). None
y0_known Optional[float] Known best value for initial design. None
max_iter_override Optional[int] Override for maximum number of iterations. None
shared_best_y Optional[float] Shared best value for parallel runs. None
shared_lock Optional[Lock] Shared lock for parallel runs. None

Returns

Name Type Description
Tuple[str, OptimizeResult] Tuple[str, OptimizeResult]: Tuple containing status and optimization result.

Raises

Name Type Description
ValueError If the initial design has no valid points after removing NaN/inf values, or if the initial design is too small to proceed.

Examples

import time
import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere,
                bounds=[(-5, 5), (-5, 5)],
                n_initial=5,
                max_iter=10,
                seed=0,
                n_jobs=1,  # Use sequential optimization for deterministic output
                verbose=True
 )
status, result = opt.optimize_sequential_run(timeout_start=time.time())
print(status)
print(result.message.splitlines()[0])
TensorBoard logging disabled
Initial best: f(x) = 8.463203
Iter 1 | Best: 8.463203 | Curr: 18.224245 | Rate: 0.00 | Evals: 60.0%
Iter 2 | Best: 8.412459 | Rate: 0.50 | Evals: 70.0%
Iter 3 | Best: 5.623369 | Rate: 0.67 | Evals: 80.0%
Iter 4 | Best: 2.903005 | Rate: 0.75 | Evals: 90.0%
Iter 5 | Best: 0.022055 | Rate: 0.80 | Evals: 100.0%
FINISHED
Optimization terminated: maximum evaluations (10) reached

optimize_steady_state

SpotOptim.SpotOptim.optimize_steady_state(
    timeout_start,
    X0,
    y0_known=None,
    max_iter_override=None,
)

Perform steady-state asynchronous optimization (n_jobs > 1). This method implements a hybrid steady-state parallelization strategy. The executor types are selected at runtime based on GIL availability:

Standard GIL build (Python ≤ 3.12 or GIL-enabled 3.13+):
* ProcessPoolExecutor (eval_pool) — objective function evaluations. Process isolation ensures arbitrary callables (lambdas, closures) serialized with dill run safely without touching shared state.
* ThreadPoolExecutor (search_pool) — surrogate search tasks. Threads share the main-process heap; zero dill overhead. A threading.Lock (_surrogate_lock) prevents a surrogate refit from racing with an in-flight search thread.

Free-threaded build (python3.13t / --disable-gil):
* Both eval_pool and search_pool are ThreadPoolExecutor instances. Threads achieve true CPU-level parallelism without the GIL. The dill serialization step for eval tasks is eliminated — fun is called directly from the shared heap. The _surrogate_lock is still used to serialize surrogate reads and refits.

Pipeline:
1. Parallel Initial Design: n_initial points are dispatched to eval_pool. Results are collected via FIRST_COMPLETED until all initial evaluations finish.
2. First Surrogate Fit: Called on the main thread once all initial evaluations are in. No lock is needed here because no search threads are active yet.
3. Parallel Search (Thread Pool): Up to n_jobs search tasks are submitted to search_pool. Each acquires _surrogate_lock before calling suggest_next_infill_point(), serializing concurrent surrogate reads.
4. Steady-State Loop with Batch Dispatch:
   - Search completes → candidate appended to pending_cands.
   - When len(pending_cands) >= eval_batch_size (or no search tasks remain), all pending candidates are stacked into X_batch and dispatched as a single eval call to eval_pool. On GIL builds this calls remote_batch_eval_wrapper (dill); on free-threaded builds it calls fun directly in a thread.
   - Batch eval completes → storage updated for every point, surrogate refit once under _surrogate_lock, new search slots filled.
   - eval_batch_size=1 (default) dispatches immediately on each search completion, preserving the original one-point behavior (a batch-dispatch sketch follows the example below).
   - This cycle continues until max_iter evaluations or max_time minutes is reached.

The optimization terminates when either:
- Total function evaluations reach max_iter (including initial design), OR
- Runtime exceeds max_time minutes

Parameters

Name Type Description Default
timeout_start float Start time for timeout. required
X0 Optional[np.ndarray] Initial design points in Natural Space, shape (n_initial, n_features). required
y0_known Optional[float] Known best objective value from a previous run. When provided together with self.x0, the matching point in the initial design is pre-filled with this value and not re-submitted to the worker pool, saving one evaluation per restart (restart injection). None
max_iter_override Optional[int] Override for maximum number of iterations. None

Raises

Name Type Description
RuntimeError If all initial design evaluations fail, likely due to pickling issues or missing imports in the worker process. The error message provides guidance on how to address this issue.

Returns

Name Type Description
Tuple[str, OptimizeResult] Tuple[str, OptimizeResult]: Tuple containing status and optimization result.

Examples

import time
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(
     fun=sphere,
     bounds=[(-5, 5), (-5, 5)],
     n_initial=5,
     max_iter=10,
     seed=0,
     n_jobs=2,
)
status, result = opt.optimize_steady_state(timeout_start=time.time(), X0=None)
print(status)
print(result.message.splitlines()[0])
FINISHED
Optimization finished (Steady State)
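
A hedged variant of the example above showing batch dispatch: with eval_batch_size=2, pending candidates are stacked and evaluated in pairs instead of one at a time. The budget values are arbitrary and the detailed progress output will differ from run to run.

import time
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(
     fun=sphere,
     bounds=[(-5, 5), (-5, 5)],
     n_initial=6,
     max_iter=14,
     seed=0,
     n_jobs=2,
     eval_batch_size=2,
)
status, result = opt.optimize_steady_state(timeout_start=time.time(), X0=None)
print(status)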

plot_importance

SpotOptim.SpotOptim.plot_importance(threshold=0.0, figsize=(10, 6))

Plot variable importance.

Parameters

Name Type Description Default
threshold float Minimum importance percentage to include in plot. 0.0
figsize tuple Figure size. (10, 6)

Returns

Name Type Description
None None

Examples

from spotoptim import SpotOptim
import numpy as np

def test_func(X):
    return 10 * X[:, 0]**2 + 0.1 * X[:, 1]**2

opt = SpotOptim(
    fun=test_func,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x0", "x1"],
    max_iter=10,
    n_initial=5,
    seed=42
)
opt.optimize()
opt.plot_importance()

plot_important_hyperparameter_contour

SpotOptim.SpotOptim.plot_important_hyperparameter_contour(
    max_imp=3,
    show=True,
    alpha=0.8,
    cmap='jet',
    num=100,
    add_points=True,
    grid_visible=True,
    contour_levels=30,
    figsize=(12, 10),
)

Plot surrogate contours using spotoptim.plot.visualization.plot_important_hyperparameter_contour.

Parameters

Name Type Description Default
max_imp int The maximum number of important hyperparameters to plot. 3
show bool Whether to show the plot. True
alpha float The alpha value for the plot. 0.8
cmap str The colormap to use. 'jet'
num int The number of points to use for the plot. 100
add_points bool Whether to add points to the plot. True
grid_visible bool Whether to show the grid. True
contour_levels int The number of contour levels to use. 30
figsize tuple The size of the plot. (12, 10)

Returns

Name Type Description
None None

Examples

from spotoptim import SpotOptim
import numpy as np

def test_func(X):
    return 10 * X[:, 0]**2 + 0.1 * X[:, 1]**2

# 2-D problem: max_imp must not exceed n_dim (2)
opt = SpotOptim(
    fun=test_func,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x0", "x1"],
    max_iter=10,
    n_initial=5,
    seed=42
)
opt.optimize()
opt.plot_important_hyperparameter_contour(max_imp=2)
Plotting surrogate contours for top 2 most important parameters:
  x0: importance = 73.24% (type: float)
  x1: importance = 26.76% (type: float)

Generating 1 surrogate plots...
  Plotting x0 vs x1

plot_parameter_scatter

SpotOptim.SpotOptim.plot_parameter_scatter(
    result=None,
    show=True,
    figsize=(12, 10),
    ylabel='Objective Value',
    cmap='viridis_r',
    show_correlation=False,
    log_y=False,
)

Plot parameter distributions showing relationship between each parameter and objective. Creates a grid of scatter plots, one for each parameter dimension, showing how the objective function value varies with each parameter. The best configuration is marked with a red star. Parameters with log-scale transformations (var_trans) are automatically displayed on a log x-axis. Optionally displays Spearman correlation coefficients in plot titles for sensitivity analysis. For factor (categorical) variables, correlation is not computed and they are displayed with discrete positions on the x-axis.

Parameters

Name Type Description Default
result OptimizeResult Optimization result containing best parameters. If None, uses the best found values from self.best_x_ and self.best_y_. None
show bool Whether to display the plot. Defaults to True. True
figsize tuple Figure size as (width, height). Defaults to (12, 10). (12, 10)
ylabel str Label for y-axis. Defaults to “Objective Value”. 'Objective Value'
cmap str Colormap for scatter plot. Defaults to “viridis_r”. 'viridis_r'
show_correlation bool Whether to compute and display Spearman correlation coefficients in plot titles. Requires scipy. Defaults to False. False
log_y bool Whether to use logarithmic scale for y-axis. Defaults to False. False

Raises

Name Type Description
ValueError If no optimization data is available.

Examples

import numpy as np
from spotoptim import SpotOptim
def objective(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
opt = SpotOptim(
    fun=objective,
    bounds=[(-5, 5), (-5, 5), (-5, 5), (-5, 5)],
    var_name=["x0", "x1", "x2", "x3"],
    max_iter=10,
    n_initial=5,
    seed=42
)
result = opt.optimize()
# Plot parameter distributions
opt.plot_parameter_scatter(result)
# Plot with custom settings
opt.plot_parameter_scatter(result, cmap="plasma", ylabel="Error")

plot_progress

SpotOptim.SpotOptim.plot_progress(
    show=True,
    log_y=False,
    figsize=(10, 6),
    ylabel='Objective Value',
    mo=False,
)

Plot optimization progress using spotoptim.plot.visualization.plot_progress.

Parameters

Name Type Description Default
show bool Whether to show the plot. True
log_y bool Whether to use a logarithmic y-axis. False
figsize tuple The size of the plot. (10, 6)
ylabel str The label for the y-axis. 'Objective Value'
mo bool Whether the optimization is multi-objective. False

Returns

Name Type Description
None None

Examples

from spotoptim import SpotOptim
import numpy as np

def test_func(X):
    return 10 * X[:, 0]**2 + 0.1 * X[:, 1]**2

opt = SpotOptim(
    fun=test_func,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x0", "x1"],
    max_iter=10,
    n_initial=5,
    seed=42
)
opt.optimize()
opt.plot_progress()

plot_surrogate

SpotOptim.SpotOptim.plot_surrogate(
    i=0,
    j=1,
    show=True,
    alpha=0.8,
    var_name=None,
    cmap='jet',
    num=100,
    vmin=None,
    vmax=None,
    add_points=True,
    grid_visible=True,
    contour_levels=30,
    figsize=(12, 10),
)

Plot the surrogate model for two dimensions. Delegates to spotoptim.plot.visualization.plot_surrogate.

Parameters

Name Type Description Default
i int The index of the first dimension. 0
j int The index of the second dimension. 1
show bool Whether to show the plot. True
alpha float The alpha value for the plot. 0.8
var_name Optional[List[str]] The names of the variables. None
cmap str The colormap to use. 'jet'
num int The number of points to use for the plot. 100
vmin Optional[float] The minimum value for the plot. None
vmax Optional[float] The maximum value for the plot. None
add_points bool Whether to add points to the plot. True
grid_visible bool Whether to show the grid. True
contour_levels int The number of contour levels to use. 30
figsize tuple The size of the plot. (12, 10)

Returns

Name Type Description
None None

Examples

from spotoptim import SpotOptim
import numpy as np

def test_func(X):
    return 10 * X[:, 0]**2 + 0.1 * X[:, 1]**2

opt = SpotOptim(
    fun=test_func,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x0", "x1"],
    max_iter=10,
    n_initial=5,
    seed=42
)
opt.optimize()
opt.plot_surrogate()

print_best

SpotOptim.SpotOptim.print_best(
    result=None,
    transformations=None,
    show_name=True,
    precision=4,
)

Print the best solution found during optimization. This method displays the best hyperparameters and objective value in a formatted table. It supports custom transformations for parameters (e.g., converting log-scale values back to original scale).

Parameters

Name Type Description Default
result OptimizeResult Optimization result object from optimize(). If None, uses the stored best values from the optimizer. Defaults to None. None
transformations list of callable List of transformation functions to apply to each parameter. Each function takes a single value and returns the transformed value. Use None for parameters that don’t need transformation. Length must match number of dimensions. Example: [None, None, lambda x: 10**x] to convert the 3rd parameter from log10 scale. Defaults to None. None
show_name bool Whether to display variable names. If False, uses generic names like ‘x0’, ‘x1’, etc. Defaults to True. True
precision int Number of decimal places for floating point values. Defaults to 4. 4

Examples

import numpy as np
from spotoptim import SpotOptim

def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x1", "x2"],
    max_iter=10,
    n_initial=5
)
result = opt.optimize()
opt.print_best(result)

Best Solution Found:
--------------------------------------------------
  x1: 0.0911
  x2: -0.0770
  Objective Value: 0.0142
  Total Evaluations: 10

print_results

SpotOptim.SpotOptim.print_results(*args, **kwargs)

Alias for print(get_results_table()) for compatibility. Prints the table.

process_factor_bounds

SpotOptim.SpotOptim.process_factor_bounds()

Process bounds to handle factor variables. For dimensions with tuple bounds (factor variables), creates internal integer mappings and replaces bounds with (0, n_levels-1). Stores mappings in self._factor_maps: {dim_idx: {int_val: str_val}}

Returns

Name Type Description
None None

Raises

Name Type Description
ValueError If bounds are invalidly formatted.

Examples

from spotoptim import SpotOptim
spot = SpotOptim(fun=lambda x: x, bounds=[('red', 'green', 'blue'), (0, 10)])
spot.process_factor_bounds()
print(f"spot.bounds (should be [(0, 2), (0, 10)]): {spot.bounds}")
spot.bounds (should be [(0, 2), (0, 10)]): [(0, 2), (0, 10)]

reinitialize_components

SpotOptim.SpotOptim.reinitialize_components()

Reinitialize components that were excluded during pickling.
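
Examples

A hedged sketch pairing this method with get_pickle_safe_optimizer: after unpickling, components that were stripped for pickling are rebuilt. Exactly which components are restored is implementation-defined and assumed here.

import pickle
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5), (-5, 5)], n_initial=5, max_iter=10, seed=0)
opt.optimize()
safe_opt = opt.get_pickle_safe_optimizer()
restored = pickle.loads(pickle.dumps(safe_opt))
# Rebuild members (e.g. loggers, file handles) dropped for pickling
restored.reinitialize_components()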

remove_nan

SpotOptim.SpotOptim.remove_nan(X, y, stop_on_zero_return=True)

Remove rows where y contains NaN or inf values. Used in the optimize() method after function evaluations.

Parameters

Name Type Description Default
X ndarray Design matrix, shape (n_samples, n_features). required
y ndarray Objective values, shape (n_samples,). required
stop_on_zero_return bool If True, raise error when all values are removed. True

Returns

Name Type Description
tuple tuple (X_clean, y_clean) with NaN/inf rows removed.

Raises

Name Type Description
ValueError If all values are NaN/inf and stop_on_zero_return is True.

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5)])
X = np.array([[1, 2], [3, 4], [5, 6]])
y = np.array([1.0, np.nan, np.inf])
X_clean, y_clean = opt.remove_nan(X, y, stop_on_zero_return=False)
print("Clean X:", X_clean)
print("Clean y:", y_clean)
Clean X: [[1 2]]
Clean y: [1.]

repair_non_numeric

SpotOptim.SpotOptim.repair_non_numeric(X, var_type)

Round values of non-continuous variables to integers based on variable type. This method applies rounding to variables that are not continuous: * ‘float’: No rounding (continuous values) * ‘int’: Rounded to integers * ‘factor’: Rounded to integers (representing categorical values)

Parameters

Name Type Description Default
X ndarray X array with values to potentially round. required
var_type list of str List with type information for each dimension. required

Returns

Name Type Description
ndarray np.ndarray X array with non-continuous values rounded to integers.

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                 bounds=[(-5, 5), (-5, 5)],
                 var_type=['int', 'float'])
X = np.array([[1.2, 2.5], [3.7, 4.1], [5.9, 6.8]])
X_repaired = opt.repair_non_numeric(X, opt.var_type)
print(X_repaired)
[[1.  2.5]
 [4.  4.1]
 [6.  6.8]]

rm_initial_design_NA_values

SpotOptim.SpotOptim.rm_initial_design_NA_values(X0, y0)

Remove NaN/inf values from initial design evaluations. This method filters out design points that returned NaN or inf values during initial evaluation. Unlike the sequential optimization phase where penalties are applied, initial design points with invalid values are simply removed.

Parameters

Name Type Description Default
X0 ndarray Initial design points in internal scale, shape (n_samples, n_features). required
y0 ndarray Function values at X0, shape (n_samples,). required

Returns

Name Type Description
Tuple[np.ndarray, np.ndarray, int] Tuple[ndarray, ndarray, int]: Filtered (X0, y0) with only finite values and the original count before filtering. X0 has shape (n_valid, n_features), y0 has shape (n_valid,), and the int is the original size.

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=10
)
X0 = np.array([[1, 2], [3, 4], [5, 6]])
y0 = np.array([5.0, np.nan, np.inf])
X0_clean, y0_clean, n_eval = opt.rm_initial_design_NA_values(X0, y0)
print(X0_clean.shape) # (1, 2)
print(y0_clean) # array([5.])
print(n_eval) # 3
# All valid values - no filtering
X0 = np.array([[1, 2], [3, 4]])
y0 = np.array([5.0, 25.0])
X0_clean, y0_clean, n_eval = opt.rm_initial_design_NA_values(X0, y0)
print(X0_clean.shape) # (2, 2)
print(n_eval) # 2
(1, 2)
[5.]
3
(2, 2)
2

save_experiment

SpotOptim.SpotOptim.save_experiment(
    filename=None,
    prefix='experiment',
    path=None,
    overwrite=True,
    unpickleables='all',
    verbosity=0,
)

Save experiment configuration to a pickle file.
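
Examples

A hedged round-trip sketch: save the experiment configuration, then restore it via load_experiment. Whether load_experiment returns a new optimizer or populates the calling instance is an assumption to verify.

from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5), (-5, 5)], n_initial=5, max_iter=10, seed=0)
# Save the configuration (before or instead of running the optimization)
opt.save_experiment(filename="demo_experiment.pkl")
opt2 = SpotOptim(fun=sphere, bounds=[(-5, 5), (-5, 5)])
opt2.load_experiment("demo_experiment.pkl")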

save_result

SpotOptim.SpotOptim.save_result(
    filename=None,
    prefix='result',
    path=None,
    overwrite=True,
    verbosity=0,
)

Save complete optimization results to a pickle file.
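
Examples

A minimal sketch using prefix-based naming; the resulting filename is presumably derived as in get_result_filename (ending with '_res.pkl').

from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(fun=sphere, bounds=[(-5, 5), (-5, 5)], n_initial=5, max_iter=10, seed=0)
opt.optimize()
opt.save_result(prefix="demo", overwrite=True)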

select_new

SpotOptim.SpotOptim.select_new(A, X, tolerance=0)

Select rows from A that are not in X. Used in suggest_next_infill_point() to avoid duplicate evaluations.

Parameters

Name Type Description Default
A ndarray Array with new values. required
X ndarray Array with known values. required
tolerance float Tolerance value for comparison. Defaults to 0. 0

Returns

Name Type Description
tuple Tuple[np.ndarray, np.ndarray] A tuple containing: * ndarray: Array with unknown (new) values. * ndarray: Array with True if value is new, otherwise False.

Examples

import numpy as np
from spotoptim import SpotOptim
def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
opt = SpotOptim(fun=sphere, bounds=[(-5, 5)])
A = np.array([[1, 2], [3, 4], [5, 6]])
X = np.array([[3, 4], [7, 8]])
new_A, is_new = opt.select_new(A, X)
print("New A:", new_A)
print("Is new:", is_new)
New A: [[1 2]
 [5 6]]
Is new: [ True False  True]

sensitivity_spearman

SpotOptim.SpotOptim.sensitivity_spearman()

Compute and print Spearman correlation between parameters and objective values. This method analyzes the sensitivity of the objective function to each hyperparameter by computing Spearman rank correlations. For categorical (factor) variables, correlation is not computed as they require visual inspection instead. The method automatically handles different parameter types: * Integer/float parameters: Direct correlation with objective values * Log-transformed parameters (log10, log, ln): Correlation in log-space * Factor (categorical) parameters: Skipped with informative message Significance levels: ***: p < 0.001 (highly significant), **: p < 0.01 (significant), *: p < 0.05 (marginally significant)

Examples

from spotoptim import SpotOptim
import numpy as np

def test_func(X):
    # x0 has strong effect, x1 has weak effect
    X = np.atleast_2d(X)
    return 10 * X[:, 0]**2 + 0.1 * X[:, 1]**2

opt = SpotOptim(
    fun=test_func,
    bounds=[(-5, 5), (-5, 5)],
    var_name=["x0", "x1"],
    max_iter=10,
    n_initial=5,
    seed=42
)
opt.optimize()
opt.sensitivity_spearman()

Sensitivity Analysis (Spearman Correlation):
--------------------------------------------------
  x0                  : -0.188 (p=0.603)
  x1                  : -0.297 (p=0.405)

Note

Only meaningful after optimize() has been called with sufficient evaluations.

set_seed

SpotOptim.SpotOptim.set_seed()

Set global random seeds for reproducibility. Sets seeds for: * random * numpy.random * torch (cpu and cuda) Only performs actions if self.seed is not None.

Returns

Name Type Description
None None

Examples

from spotoptim import SpotOptim
import numpy as np
spot = SpotOptim(fun=lambda x: x, bounds=[(0, 1)], seed=42)
spot.set_seed()
np.random.rand()  # Should be deterministic
0.3745401188473625

setup_dimension_reduction

SpotOptim.SpotOptim.setup_dimension_reduction()

Set up dimension reduction by identifying fixed dimensions. Identifies dimensions where lower and upper bounds are equal in Transformed Space. Reduces self.bounds, self.lower, self.upper, etc., to the Mapped Space (active variables only). The resulting self.bounds defines the Transformed and Mapped Space used for optimization. This method identifies variables that are fixed (constant) and excludes them from the optimization process. It stores: * Original bounds and metadata in all_* attributes * Boolean mask of fixed dimensions in ident * Reduced bounds, types, and names for optimization * red_dim flag indicating if reduction occurred

Returns

Name Type Description
None None

Examples

from spotoptim import SpotOptim
spot = SpotOptim(fun=lambda x: x, bounds=[(1, 10), (5, 5), (0, 1)])
print("Original lower bounds:", spot.all_lower)
print("Original upper bounds:", spot.all_upper)
print("Fixed dimensions mask:", spot.ident)
print("Reduced lower bounds:", spot.lower)
print("Reduced upper bounds:", spot.upper)
print("Reduced variable names:", spot.var_name)
print("Is dimension reduction active?", spot.red_dim)
Original lower bounds: [1. 5. 0.]
Original upper bounds: [10.  5.  1.]
Fixed dimensions mask: [False  True False]
Reduced lower bounds: [1. 0.]
Reduced upper bounds: [10.  1.]
Reduced variable names: ['x0', 'x2']
Is dimension reduction active? True

store_mo

SpotOptim.SpotOptim.store_mo(y_mo)

Store multi-objective values in self.y_mo. If multi-objective values are present (ndim==2), they are stored in self.y_mo. New values are appended to existing ones. For single-objective problems, self.y_mo remains None.

Parameters

Name Type Description Default
y_mo ndarray If multi-objective, shape (n_samples, n_objectives). If single-objective, shape (n_samples,). required

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(
    fun=lambda X: np.column_stack([
        np.sum(X**2, axis=1),
        np.sum((X-1)**2, axis=1)
    ]),
    bounds=[(-5, 5), (-5, 5)],
    max_iter=10,
    n_initial=5
)
y_mo_1 = np.array([[1.0, 2.0], [3.0, 4.0]])
opt.store_mo(y_mo_1)
print(f"y_mo after first call: {opt.y_mo}")
y_mo_2 = np.array([[5.0, 6.0], [7.0, 8.0]])
opt.store_mo(y_mo_2)
print(f"y_mo after second call: {opt.y_mo}")
y_mo after first call: [[1. 2.]
 [3. 4.]]
y_mo after second call: [[1. 2.]
 [3. 4.]
 [5. 6.]
 [7. 8.]]

suggest_next_infill_point

SpotOptim.SpotOptim.suggest_next_infill_point()

Suggest next point to evaluate (dispatcher). Used in both sequential and parallel optimization loops. This method orchestrates the process of generating candidate points from the acquisition function optimizer, handling any failures in the acquisition process with a fallback strategy, and ensuring that the returned point(s) are valid and ready for evaluation. The returned point is in the Transformed and Mapped Space (Internal Optimization Space). This means: 1. Transformations (e.g., log, sqrt) have been applied. 2. Dimension reduction has been applied (fixed variables removed). Process: 1. Try candidates from acquisition function optimizer. 2. Handle acquisition failure (fallback). 3. Return last attempt if all fails.

Returns

Name Type Description
ndarray np.ndarray Next point(s) to evaluate in Transformed and Mapped Space. Shape is (n_infill_points, n_features).

Examples

import numpy as np
from spotoptim import SpotOptim
def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_initial=5,
    n_infill_points=2
)
# Need to initialize optimization state (X_, y_, surrogate)
# Normally done inside optimize()
np.random.seed(0)
opt.X_ = np.random.rand(10, 2)
opt.y_ = np.random.rand(10)
opt.fit_surrogate(opt.X_, opt.y_)
x_next = opt.suggest_next_infill_point()
x_next.shape
(2, 2)

to_all_dim

SpotOptim.SpotOptim.to_all_dim(X_red)

Expand reduced-dimensional points to full-dimensional representation. This method restores points from the reduced optimization space to the full-dimensional space by inserting fixed values for constant dimensions.

Parameters

Name Type Description Default
X_red ndarray Points in reduced space, shape (n_samples, n_reduced_dims). required

Returns

Name Type Description
ndarray np.ndarray Points in full space, shape (n_samples, n_original_dims).

Examples

import numpy as np
from spotoptim import SpotOptim
def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
# Create problem with one fixed dimension
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (2, 2), (-5, 5)],  # x1 is fixed at 2
    max_iter=10,
    n_initial=3
)
X_red = np.array([[1.0, 3.0], [2.0, 4.0]])  # Only x0 and x2
X_full = opt.to_all_dim(X_red)
print(X_full.shape)
print(X_full[:, 1])
(2, 3)
[2. 2.]

to_red_dim

SpotOptim.SpotOptim.to_red_dim(X_full)

Reduce full-dimensional points to optimization space. This method removes fixed dimensions from full-dimensional points, extracting only the varying dimensions used in optimization.

Parameters

Name Type Description Default
X_full ndarray Points in full space, shape (n_samples, n_original_dims). required

Returns

Name Type Description
ndarray np.ndarray Points in reduced space, shape (n_samples, n_reduced_dims).

Examples

import numpy as np
from spotoptim import SpotOptim
def sphere(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)
# Create problem with one fixed dimension
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (2, 2), (-5, 5)],  # x1 is fixed at 2
    max_iter=10,
    n_initial=3
)
X_full = np.array([[1.0, 2.0, 3.0], [4.0, 2.0, 5.0]])
X_red = opt.to_red_dim(X_full)
print(X_red.shape)
print(np.array_equal(X_red, np.array([[1.0, 3.0], [4.0, 5.0]])))
(2, 2)
True

transform_X

SpotOptim.SpotOptim.transform_X(X)

Transform parameter array from original (natural) to internal scale. Converts from natural space (Original) to transformed space (full dimension). Does NOT handle dimension reduction (mapping).

Parameters

Name Type Description Default
X ndarray Array in Natural Space, shape (n_samples, n_features) required

Returns

Name Type Description
ndarray np.ndarray Array in Transformed Space (Full Dimension)

Examples

from spotoptim import SpotOptim
import numpy as np
from spotoptim.function import sphere
spot = SpotOptim(fun=sphere, bounds=[(1, 10)], var_trans=['log10'])
X_orig = np.array([[1], [10], [100]])
spot.transform_X(X_orig)
array([[0],
       [1],
       [2]])

transform_bounds

SpotOptim.SpotOptim.transform_bounds()

Transform bounds from original to internal scale. Updates self.bounds (and self.lower, self.upper) from Natural Space to Transformed Space. Calls transform_value for each bound and converts numpy types to Python native types (int or float based on var_type). Also handles reversed bounds, e.g., as an effect of the reciprocal transformation.

Returns

Name Type Description
None None

Notes

Uses settings in self.var_trans. It can be one of id, log10, log, ln, sqrt, exp, square, cube, inv, reciprocal, or None. Also supports dynamic strings like log(x), sqrt(x), pow(x, p).

Examples

from spotoptim import SpotOptim
from spotoptim.function import sphere
import numpy as np
spot = SpotOptim(fun=sphere, bounds=[(1, 10), (0.1, 100)])
spot.var_trans = ['log10', 'sqrt']
spot.transform_bounds()
print(f"spot.bounds: {spot.bounds}")
spot.bounds: [(0.0, 1.0), (0.31622776601683794, 10.0)]

transform_value

SpotOptim.SpotOptim.transform_value(x, trans)

Apply transformation to a single float value.

Parameters

Name Type Description Default
x float Value to transform required
trans Optional[str] Transformation name. Can be one of id, log10, log, ln, sqrt, exp, square, cube, inv, reciprocal, or None. Also supports dynamic strings like log(x), sqrt(x), pow(x, p). required

Returns

Name Type Description
float Transformed value

Raises

Name Type Description
TypeError If x is not a float.
ValueError If an unknown transformation is specified.

Notes

See also inverse_transform_value.

Examples

from spotoptim import SpotOptim
from spotoptim.function import sphere
spot = SpotOptim(fun=sphere, bounds=[(1, 10)])
spot.transform_value(10, 'log10')
spot.transform_value(100, 'log(x)')
np.float64(4.605170185988092)

update_repeats_infill_points

SpotOptim.SpotOptim.update_repeats_infill_points(x_next)

Repeat infill point for noisy function evaluation. Used in the sequential_loop. For noisy objective functions (repeats_surrogate > 1), creates multiple copies of the suggested point for repeated evaluation. Otherwise, returns the point in 2D array format.

Parameters

Name Type Description Default
x_next ndarray Next point to evaluate, shape (n_features,). required

Returns

Name Type Description
ndarray np.ndarray Points to evaluate, shape (repeats_surrogate, n_features) or (1, n_features) if repeats_surrogate == 1.

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere, noisy_sphere
# Without repeats

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    repeats_surrogate=1
)
x_next = np.array([1.0, 2.0])
x_repeated = opt.update_repeats_infill_points(x_next)
print(x_repeated.shape)

# With repeats for noisy function
opt_noisy = SpotOptim(
    fun=noisy_sphere,
    bounds=[(-5, 5), (-5, 5)],
    repeats_surrogate=3
)
x_next = np.array([1.0, 2.0])
x_repeated = opt_noisy.update_repeats_infill_points(x_next)
print(x_repeated.shape)
# All three copies should be identical
np.all(x_repeated[0] == x_repeated[1])
(1, 2)
(3, 2)
np.True_

update_stats

SpotOptim.SpotOptim.update_stats()

Update optimization statistics. Updates various statistics related to the optimization progress: * min_y: Minimum y value found so far * min_X: X value corresponding to minimum y * counter: Total number of function evaluations

Notes

success_rate is updated separately via update_success_rate() method, which is called after each batch of function evaluations.

If “noise” is True (repeats_initial > 1 or repeats_surrogate > 1), additionally computes: * mean_X: Unique design points (aggregated from repeated evaluations) * mean_y: Mean y values per design point * var_y: Variance of y values per design point * min_mean_X: X value of the best mean y value * min_mean_y: Best mean y value * min_var_y: Variance of the best mean y value

Returns

Name Type Description
None None

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
# Without noise
opt = SpotOptim(fun=sphere,
                bounds=[(-5, 5), (-5, 5)],
                max_iter=10, n_initial=5)
opt.optimize()
print("SpotOptim stats without noise:")
print(f"opt.X_: {opt.X_}")
print(f"opt.y_: {opt.y_}")
print(f"opt.min_y: {opt.min_y}")
print(f"opt.min_X: {opt.min_X}")
print(f"opt.counter: {opt.counter}")
SpotOptim stats without noise:
opt.X_: [[-3.96286708  1.24568752]
 [-0.83487501 -1.30004009]
 [ 4.10036371 -3.9267669 ]
 [ 2.85778476  4.70586494]
 [-1.00282746 -0.6813956 ]
 [-1.02842677 -0.73550033]
 [-0.50484458 -0.38570333]
 [-0.30673631 -0.23863804]
 [-0.17618783  0.08900232]
 [-0.10280377  0.0111577 ]]
opt.y_: [1.72560529e+01 2.38712052e+00 3.22324809e+01 3.03120986e+01
 1.46996286e+00 1.59862236e+00 4.03635105e-01 1.51035282e-01
 3.89635633e-02 1.06931102e-02]
opt.min_y: 0.010693110242866127
opt.min_X: [-0.10280377  0.0111577 ]
opt.counter: 10
import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import noisy_sphere
# With noise
opt_noise = SpotOptim(fun=noisy_sphere,
                      bounds=[(-5, 5), (-5, 5)],
                      n_initial=5,
                      repeats_surrogate=2,
                      repeats_initial=2)
opt_noise.optimize()
print("SpotOptim stats with noise:")
print(f"opt_noise.X_: {opt_noise.X_}")
print(f"opt_noise.y_: {opt_noise.y_}")
print(f"opt_noise.min_y: {opt_noise.min_y}")
print(f"opt_noise.min_X: {opt_noise.min_X}")
print(f"opt_noise.counter: {opt_noise.counter}")
print(f"opt_noise.mean_X: {opt_noise.mean_X}")
print(f"opt_noise.mean_y: {opt_noise.mean_y}")
print(f"opt_noise.var_y: {opt_noise.var_y}")
print(f"opt_noise.min_mean_X: {opt_noise.min_mean_X}")
print(f"opt_noise.min_mean_y: {opt_noise.min_mean_y}")
print(f"opt_noise.min_var_y: {opt_noise.min_var_y}")
SpotOptim stats with noise:
opt_noise.X_: [[-0.0099179   0.79265838]
 [-0.0099179   0.79265838]
 [ 2.97601781 -3.58643274]
 [ 2.97601781 -3.58643274]
 [-3.57261458 -2.72613996]
 [-3.57261458 -2.72613996]
 [-2.96576096  3.77343788]
 [-2.96576096  3.77343788]
 [ 4.01629323  1.54336164]
 [ 4.01629323  1.54336164]
 [-0.43634999 -2.78547085]
 [-0.43634999 -2.78547085]
 [-0.66564312  1.13580013]
 [-0.66564312  1.13580013]
 [-0.20007461  0.82039724]
 [-0.20007461  0.82039724]
 [-0.07923686  0.72245657]
 [-0.07923686  0.72245657]
 [-0.1149221   0.55909479]
 [-0.1149221   0.55909479]]
opt_noise.y_: [ 0.59720805  0.54437663 21.61849859 21.88733944 20.11618536 20.14225343
 23.07115636 23.16435401 18.560688   18.788512    8.07599032  7.99919898
  1.67419979  1.68827618  0.66820478  0.53866271  0.61327505  0.44430956
  0.228677    0.46842576]
opt_noise.min_y: 0.22867700323337886
opt_noise.min_X: [-0.1149221   0.55909479]
opt_noise.counter: 20
opt_noise.mean_X: [[-3.57261458 -2.72613996]
 [-2.96576096  3.77343788]
 [-0.66564312  1.13580013]
 [-0.43634999 -2.78547085]
 [-0.20007461  0.82039724]
 [-0.1149221   0.55909479]
 [-0.07923686  0.72245657]
 [-0.0099179   0.79265838]
 [ 2.97601781 -3.58643274]
 [ 4.01629323  1.54336164]]
opt_noise.mean_y: [20.12921939 23.11775518  1.68123799  8.03759465  0.60343375  0.34855138
  0.52879231  0.57079234 21.75291901 18.6746    ]
opt_noise.var_y: [1.69886138e-04 2.17145039e-03 4.95362454e-05 1.47422756e-03
 4.19528726e-03 1.43698661e-02 7.13733398e-03 6.97789966e-04
 1.80688502e-02 1.29759436e-02]
opt_noise.min_mean_X: [-0.1149221   0.55909479]
opt_noise.min_mean_y: 0.34855137951394977
opt_noise.min_var_y: 0.0143698660886559

update_storage

SpotOptim.SpotOptim.update_storage(X_new, y_new)

Update storage (X_, y_) with new evaluation points. Appends new design points and their function values to the storage arrays. Points are converted from internal scale to original scale before storage.

Parameters

Name Type Description Default
X_new ndarray New design points in internal scale, shape (n_new, n_features). required
y_new ndarray Function values at X_new, shape (n_new,). required

Returns

Name Type Description
None None

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                bounds=[(-5, 5), (-5, 5)],
                n_initial=5)
# Initialize with some data
opt.X_ = np.array([[1, 2], [3, 4]])
opt.y_ = np.array([5.0, 25.0])
print("Initial storage:")
print(opt.X_)
print(opt.y_)
# Add new points
X_new = np.array([[0, 1], [2, 3]])
y_new = np.array([1.0, 13.0])
opt.update_storage(X_new, y_new)
print("Updated storage:")
print(opt.X_)
print(opt.y_)
Initial storage:
[[1 2]
 [3 4]]
[ 5. 25.]
Updated storage:
[[1 2]
 [3 4]
 [0 1]
 [2 3]]
[ 5. 25.  1. 13.]

update_success_rate

SpotOptim.SpotOptim.update_success_rate(y_new)

Update the rolling success rate of the optimization process. A success is counted only if the new value is better (smaller) than the best found y value so far. The success rate is calculated based on the last window_size successes. Important: This method should be called BEFORE updating self.y_ to correctly track improvements against the previous best value.

Parameters

Name Type Description Default
y_new ndarray The new function values to consider for the success rate update. required

Examples

import numpy as np
from spotoptim import SpotOptim
opt = SpotOptim(fun=lambda X: np.sum(X**2, axis=1),
                bounds=[(-5, 5), (-5, 5)],
                max_iter=10, n_initial=5)
print(opt.success_rate)
opt.X_ = np.array([[1, 2], [3, 4], [0, 1]])
opt.y_ = np.array([5.0, 3.0, 2.0])
opt.update_success_rate(np.array([1.5, 2.5]))
print(opt.success_rate)
0.0
0.5

validate_x0

SpotOptim.SpotOptim.validate_x0(x0)

Validate and process starting point x0. Called in __init__ and optimize. This method checks that x0: * Is a numpy array * Has the correct number of dimensions * Has values within bounds (in original scale) * Is properly transformed to internal scale

Parameters

Name Type Description Default
x0 array-like Starting point in original scale required

Returns

Name Type Description
ndarray np.ndarray Validated and transformed x0 in internal scale, shape (n_features,)

Raises

Name Type Description
ValueError If x0 is invalid

Examples

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import sphere
opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (5,5), (-10, 10)],
    x0=np.array([1.0, 5.0, 9.0]),
    var_trans=["log10", "id", "sqrt"]
)
# x0 is validated during initialization and transformed to internal scale
print(f"x0 in internal scale: {opt.x0}")
x0 in internal scale: [0. 3.]