The SpotOptim Class

Central orchestrator for surrogate-model-based optimization: constructor, optimize(), and result inspection.

SpotOptim is the main entry point for all optimization in spotoptim. You create an instance with your objective function and bounds, call optimize(), and get back a scipy-compatible OptimizeResult.


Minimal Example

from spotoptim import SpotOptim
from spotoptim.function import sphere

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=20,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best x    : {result.x}")
print(f"Best f(x) : {result.fun:.6f}")
print(f"Evaluations: {result.nfev}")
Best x    : [-0.00016718  0.00071419]
Best f(x) : 0.000001
Evaluations: 20

Three ingredients: a callable fun, a list of bounds, and a budget via max_iter (total evaluations including the initial design). The optimizer builds a Kriging surrogate by default and uses predicted-value acquisition (acquisition="y").
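
The vectorized signature that fun must follow can be illustrated with a hand-written version of the sphere function (my_sphere is a hypothetical stand-in for spotoptim.function.sphere):

```python
import numpy as np

# A custom objective must be vectorized: it receives an (n, d) array of
# candidate points and returns an (n,) array of objective values.
def my_sphere(X):
    X = np.atleast_2d(X)           # also accept a single point
    return np.sum(X**2, axis=1)    # one value per row
```

Called on a (2, 2) batch of points, this returns a length-2 array of objective values, one per row.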


Key Constructor Parameters

The constructor accepts many parameters. The most commonly used are:

Parameter     Default      Description
fun           (required)   Objective function: accepts (n, d) array, returns (n,) array
bounds        None         List of (lower, upper) tuples, one per dimension
max_iter      20           Total evaluation budget (initial + sequential)
n_initial     10           Number of initial design points
surrogate     None         Surrogate model (default: Kriging(method="regression"))
acquisition   "y"          Acquisition function: "y", "ei", or "pi"
var_type      None         Variable types: list of "float", "int", "factor"
seed          None         Random seed for reproducibility

See the SpotOptim API for the full parameter list.


The OptimizeResult

optimize() returns a scipy.optimize.OptimizeResult with these fields:

Field     Type      Description
x         ndarray   Best point found (original scale)
fun       float     Best objective value
nfev      int       Total function evaluations
nit       int       Sequential iterations (after initial design)
success   bool      Whether optimization succeeded
message   str       Termination reason with statistics
X         ndarray   All evaluated points
y         ndarray   All objective values

from spotoptim import SpotOptim
from spotoptim.function import ackley

opt = SpotOptim(
    fun=ackley,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=25,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Success   : {result.success}")
print(f"Best x    : {result.x}")
print(f"Best f(x) : {result.fun:.6f}")
print(f"Iterations: {result.nit}")
print(f"All points: {result.X.shape}")
Success   : True
Best x    : [ 0.00174499 -0.00130203]
Best f(x) : 0.006284
Iterations: 15
All points: (25, 2)

Variable Types

spotoptim supports three variable types via the var_type parameter:

  • "float" — continuous real-valued variables (default)
  • "int" — integer-constrained variables (rounded after surrogate prediction)
  • "factor" — categorical/unordered variables (encoded internally)

import numpy as np
from spotoptim import SpotOptim

def mixed_objective(X):
    X = np.atleast_2d(X)
    continuous = X[:, 0]
    integer_val = X[:, 1]
    factor_val = X[:, 2]
    return continuous**2 + (integer_val - 3)**2 + factor_val

opt = SpotOptim(
    fun=mixed_objective,
    bounds=[(-5.0, 5.0), (0, 10), (0, 4)],
    var_type=["float", "int", "factor"],
    var_name=["x_cont", "x_int", "x_cat"],
    max_iter=25,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best x    : {result.x}")
print(f"Best f(x) : {result.fun:.6f}")
Best x    : [-9.98879117e-04  3.00000000e+00  0.00000000e+00]
Best f(x) : 0.000001

When var_type is not provided, all variables default to "float".
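
The rounding applied to "int" variables can be sketched as follows. This is an assumption about the mechanism (round, then clip into bounds), not spotoptim's exact internal code:

```python
import numpy as np

# Sketch: round the integer columns of a proposed candidate and clip
# the result back into the variable's bounds before evaluation.
def round_int_vars(x, var_type, bounds):
    x = np.array(x, dtype=float)
    for j, vt in enumerate(var_type):
        if vt == "int":
            lo, hi = bounds[j]
            x[j] = np.clip(np.round(x[j]), lo, hi)
    return x
```

For example, a candidate [0.2, 6.7] with var_type=["float", "int"] and bounds [(-5, 5), (0, 10)] becomes [0.2, 7.0]: the float column is untouched and the int column is rounded to the nearest feasible integer.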


Variable Transformations

Use var_trans to apply log-transformations to parameters that span several orders of magnitude (e.g., learning rates):

import numpy as np
from spotoptim import SpotOptim

def obj_with_lr(X):
    X = np.atleast_2d(X)
    lr = X[:, 0]
    weight = X[:, 1]
    return (lr - 0.001)**2 + (weight - 0.5)**2

opt = SpotOptim(
    fun=obj_with_lr,
    bounds=[(1e-5, 1e-1), (0.0, 1.0)],
    var_type=["float", "float"],
    var_name=["lr", "weight"],
    var_trans=["log10", None],
    max_iter=20,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best lr      : {result.x[0]:.6f}")
print(f"Best weight  : {result.x[1]:.4f}")
print(f"Best f(x)    : {result.fun:.8f}")
Best lr      : 0.000010
Best weight  : 0.4981
Best f(x)    : 0.00000477

With var_trans=["log10", None], the first variable is optimized in log10 space internally. The bounds are specified in natural (original) scale — here 1e-5 to 1e-1 for the learning rate.
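
The mapping between natural and log10 scale can be sketched in a few lines (an illustration of the transformation itself, not spotoptim's internals):

```python
import numpy as np

# Sketch of the log10 mapping: internally the variable lives in log space,
# while bounds and reported results stay in natural scale.
lo, hi = 1e-5, 1e-1
log_lo, log_hi = np.log10(lo), np.log10(hi)   # -5.0 and -1.0
z = 0.5 * (log_lo + log_hi)                   # midpoint in log space: -3.0
x = 10.0 ** z                                 # back to natural scale: 1e-3
```

Working in log space makes the search treat the decades [1e-5, 1e-4] and [1e-2, 1e-1] as equally wide, which is usually what you want for learning rates.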


Choosing an Acquisition Function

The acquisition parameter controls how the optimizer selects the next evaluation point:

  • "y" (default) — minimize the surrogate’s predicted value (pure exploitation)
  • "ei" — Expected Improvement: balances exploitation and exploration
  • "pi" — Probability of Improvement: probability of beating the current best

from spotoptim import SpotOptim
from spotoptim.function import rosenbrock

opt = SpotOptim(
    fun=rosenbrock,
    bounds=[(-2, 2), (-2, 2)],
    acquisition="ei",
    max_iter=25,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best x    : {result.x}")
print(f"Best f(x) : {result.fun:.6f}")
Best x    : [1.24915117 1.47082529]
Best f(x) : 0.864057
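
For intuition, Expected Improvement for minimization under a Gaussian posterior can be sketched with the textbook formula (spotoptim's internal implementation may differ in details):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    # EI(x) = (y_best - mu) * Phi(z) + sigma * phi(z), with z = (y_best - mu) / sigma
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predicted std
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

A candidate with a lower predicted mean or a larger predictive uncertainty receives a higher EI, which is what produces the exploitation/exploration trade-off.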

See Acquisition and Infill for details on acquisition optimizers and infill strategies.


Choosing a Surrogate Model

The default surrogate is Kriging(method="regression"). You can pass any model that implements fit(X, y) and predict(X, return_std=False):

from spotoptim import SpotOptim
from spotoptim.surrogate import Kriging
from spotoptim.function import sphere

kriging = Kriging(method="interpolation", noise=1e-3, seed=0)

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    surrogate=kriging,
    max_iter=20,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best f(x) : {result.fun:.6f}")
Best f(x) : 0.006474

See Surrogate Models for Kriging options, MLPSurrogate, and how to plug in sklearn estimators.
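
Any object with that interface works. A deliberately simple, hypothetical nearest-neighbor surrogate illustrates the protocol (a toy sketch, not a model you would use in practice):

```python
import numpy as np

class NearestNeighborSurrogate:
    """Toy sketch of the surrogate protocol spotoptim expects:
    fit(X, y) and predict(X, return_std=False)."""

    def fit(self, X, y):
        self.X_ = np.atleast_2d(X)
        self.y_ = np.asarray(y).ravel()
        return self

    def predict(self, X, return_std=False):
        X = np.atleast_2d(X)
        # distance from each query point to each training point
        d = np.linalg.norm(X[:, None, :] - self.X_[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        mu = self.y_[idx]                 # predict the nearest training value
        if return_std:
            # crude uncertainty proxy: distance to the nearest training point
            return mu, d.min(axis=1)
        return mu
```

Note that "ei" and "pi" need a meaningful return_std; with the default acquisition="y" only the mean prediction is used.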


Noisy Optimization

For noisy objective functions, use repeats_initial and repeats_surrogate to re-evaluate each design point multiple times. Add ocba_delta to enable Optimal Computing Budget Allocation, which intelligently allocates extra evaluations to the most promising points.

import numpy as np
from spotoptim import SpotOptim
from spotoptim.function import noisy_sphere

np.random.seed(0)

opt = SpotOptim(
    fun=noisy_sphere,
    bounds=[(-5, 5), (-5, 5)],
    repeats_initial=3,
    repeats_surrogate=2,
    ocba_delta=3,
    max_iter=30,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best x    : {result.x}")
print(f"Best f(x) : {result.fun:.6f}")
print(f"Evaluations: {result.nfev}")
Best x    : [-0.44355715 -0.46646793]
Best f(x) : 0.395617
Evaluations: 30

The surrogate is fitted on the mean values across repeats, reducing the effect of noise. See Utilities for more on OCBA.
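
The mean aggregation can be sketched as follows (an illustration of the idea; the exact grouping code inside spotoptim may differ):

```python
import numpy as np

# Repeated evaluations of the same design point are averaged, so the
# surrogate sees one (less noisy) value per distinct point.
X = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]])
y = np.array([0.1, -0.1, 2.2, 1.8])

X_unique, inverse = np.unique(X, axis=0, return_inverse=True)
inverse = inverse.ravel()  # normalize shape across NumPy versions
y_mean = np.array([y[inverse == i].mean() for i in range(len(X_unique))])
```

Here the two noisy readings at each point collapse to their means, 0.0 and 2.0, before the surrogate is fitted.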


Time Budget

Use max_time (in minutes) to set a wall-clock time limit instead of or in addition to max_iter:

from spotoptim import SpotOptim
from spotoptim.function import sphere

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_time=0.05,
    max_iter=1000,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Evaluations: {result.nfev}")
print(f"Best f(x) : {result.fun:.6f}")
Evaluations: 16
Best f(x) : 0.000001

The optimizer stops when either max_iter or max_time is reached, whichever comes first.
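
The combined stopping rule can be sketched like this (assumed semantics based on the description above: max_time is wall-clock minutes, max_iter is the total evaluation budget):

```python
import time

# Sketch: stop when either the evaluation budget or the time budget is spent.
def should_stop(nfev, start_time, max_iter, max_time=None):
    if nfev >= max_iter:
        return True
    if max_time is not None and time.time() - start_time >= max_time * 60:
        return True
    return False
```

With max_time=0.05 (three seconds of wall-clock time), the time budget in the example above is exhausted long before the 1000-evaluation budget.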


Restarts

When the optimizer gets stuck in a local minimum, automatic restarts can help. Set restart_after_n to trigger a restart after that many consecutive iterations without improvement:

from spotoptim import SpotOptim
from spotoptim.function import ackley

opt = SpotOptim(
    fun=ackley,
    bounds=[(-5, 5), (-5, 5)],
    restart_after_n=10,
    restart_inject_best=True,
    max_iter=30,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best x    : {result.x}")
print(f"Best f(x) : {result.fun:.6f}")
Best x    : [1.37196759e-04 5.82683136e-05]
Best f(x) : 0.000422

With restart_inject_best=True (default), the best point found so far is injected into the new initial design after each restart.


Parallel Evaluation

Set n_jobs to evaluate multiple points in parallel. Use n_jobs=-1 to use all available CPU cores:

from spotoptim import SpotOptim
from spotoptim.function import sphere

opt = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    n_jobs=2,
    eval_batch_size=2,
    max_iter=20,
    n_initial=10,
    seed=0,
)
result = opt.optimize()

print(f"Best f(x) : {result.fun:.6f}")
print(f"Evaluations: {result.nfev}")
Best f(x) : 0.000001
Evaluations: 20
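
Conceptually, batched parallel evaluation splits a batch of candidates into rows and scores them concurrently. The following is a sketch of that idea with a thread pool, not spotoptim's internal scheduler:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Score one candidate point (a single row of the batch).
def sphere_row(x):
    return float(np.sum(np.asarray(x) ** 2))

# Evaluate a (2, 2) batch of candidates with two workers.
batch = np.array([[1.0, 2.0], [0.0, 3.0]])
with ThreadPoolExecutor(max_workers=2) as ex:
    values = list(ex.map(sphere_row, batch))
```

Parallelism pays off when a single evaluation is expensive (simulations, model training); for cheap analytic functions the pool overhead can dominate.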

Configuration Access

All constructor parameters are stored in opt.config (a SpotOptimConfig dataclass). For convenience, you can access them directly on the optimizer:

from spotoptim import SpotOptim
from spotoptim.function import sphere

opt = SpotOptim(fun=sphere, bounds=[(-5, 5)], max_iter=15, seed=0)

print(f"max_iter  : {opt.max_iter}")
print(f"n_initial : {opt.n_initial}")
print(f"acquisition: {opt.acquisition}")
print(f"bounds    : {opt.bounds}")
max_iter  : 15
n_initial : 10
acquisition: y
bounds    : [(-5.0, 5.0)]

See Also