SpotOptim provides full support for reproducible optimization runs through the seed parameter. This is essential for:
Scientific research: Ensuring experiments can be replicated
Debugging: Reproducing specific optimization behaviors
Benchmarking: Fair comparison between different configurations
Production: Consistent results in deployed applications
When you specify a seed, SpotOptim guarantees that running the same optimization multiple times will produce identical results. Without a seed, each run explores the search space differently, which can be useful for robustness testing.
11.2 Basic Usage
11.2.1 Making Optimization Reproducible
To ensure reproducible results, simply specify the seed parameter when creating the optimizer:
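A call along the following lines produces the kind of output shown below. This is a minimal sketch: the sphere test function and the specific seed value (42) are illustrative assumptions, while the bounds and budget mirror the configuration used later in this section.
import numpy as np
from spotoptim import SpotOptim

def sphere(X):
    """Sphere function: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return np.sum(X**2, axis=1)

# Reproducible: the seed fixes the initial design and all later random choices
optimizer = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=30,
    n_initial=15,
    seed=42,       # any fixed integer works; 42 is an assumption here
    verbose=True
)
result = optimizer.optimize()

print(f"Best solution: {result.x}")
print(f"Best value: {result.fun}")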
TensorBoard logging disabled
Initial best: f(x) = 5.542803
Iteration 1: New best f(x) = 0.001070
Iteration 2: New best f(x) = 0.000089
Iteration 3: New best f(x) = 0.000066
Iteration 4: New best f(x) = 0.000036
Iteration 5: New best f(x) = 0.000001
Iteration 6: New best f(x) = 0.000000
Iteration 7: f(x) = 0.000000
Iteration 8: f(x) = 0.000000
Iteration 9: f(x) = 0.000000
Iteration 10: f(x) = 0.000000
Iteration 11: f(x) = 0.000000
Iteration 12: f(x) = 0.000000
Iteration 13: f(x) = 0.000000
Iteration 14: f(x) = 0.000000
Iteration 15: New best f(x) = 0.000000
Best solution: [3.31436760e-04 4.18312302e-05]
Best value: 1.1160017787260647e-07
Key Point: Running this code multiple times, even on different days or on different machines with the same software environment, will produce the same result.
11.2.2 Running Independent Experiments
If you don’t specify a seed, each optimization run will explore the search space differently:
# Non-reproducible: different results each time
optimizer = SpotOptim(
    fun=sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=30,
    n_initial=15   # No seed specified
)
result = optimizer.optimize()
# Results will vary between runs
This is useful when you want to:
- Explore different regions of the search space
- Test the robustness of your results
- Run multiple independent optimization attempts
11.3 Practical Examples
11.3.1 Example 1: Comparing Different Configurations
When comparing different optimizer settings, use the same seed for fair comparison:
import numpy as np
from spotoptim import SpotOptim

def rosenbrock(X):
    """Rosenbrock function"""
    x = X[:, 0]
    y = X[:, 1]
    return (1 - x)**2 + 100 * (y - x**2)**2

# Configuration 1: More initial points
opt1 = SpotOptim(
    fun=rosenbrock,
    bounds=[(-2, 2), (-2, 2)],
    max_iter=50,
    n_initial=20,
    seed=42   # Same seed for fair comparison
)
result1 = opt1.optimize()

# Configuration 2: Fewer initial points, more iterations
opt2 = SpotOptim(
    fun=rosenbrock,
    bounds=[(-2, 2), (-2, 2)],
    max_iter=50,
    n_initial=10,
    seed=42   # Same seed
)
result2 = opt2.optimize()

print(f"Config 1 (more initial): {result1.fun:.6f}")
print(f"Config 2 (fewer initial): {result2.fun:.6f}")
11.3.2 Example 2: Reproducible Research Experiment
For scientific papers or reports, always use a fixed seed and document it:
import numpy as np
from spotoptim import SpotOptim

def rastrigin(X):
    """Rastrigin function (multimodal)"""
    A = 10
    n = X.shape[1]
    return A * n + np.sum(X**2 - A * np.cos(2 * np.pi * X), axis=1)

# Documented seed for reproducibility
RANDOM_SEED = 12345

optimizer = SpotOptim(
    fun=rastrigin,
    bounds=[(-5.12, 5.12), (-5.12, 5.12), (-5.12, 5.12)],
    max_iter=100,
    n_initial=30,
    seed=RANDOM_SEED,
    verbose=True
)
result = optimizer.optimize()

print(f"\nExperiment Results (seed={RANDOM_SEED}):")
print(f"Best solution: {result.x}")
print(f"Best value: {result.fun}")
print(f"Iterations: {result.nit}")
print(f"Function evaluations: {result.nfev}")
# These results can now be cited in a paper
11.3.3 Example 3: Testing Robustness Across Multiple Seeds
To test robustness, run the same optimization with different seeds:
import numpy as np
from spotoptim import SpotOptim

def ackley(X):
    """Ackley function"""
    a = 20
    b = 0.2
    c = 2 * np.pi
    n = X.shape[1]
    sum_sq = np.sum(X**2, axis=1)
    sum_cos = np.sum(np.cos(c * X), axis=1)
    return -a * np.exp(-b * np.sqrt(sum_sq / n)) - np.exp(sum_cos / n) + a + np.e

# Run 5 independent optimizations
results = []
seeds = [42, 123, 456, 789, 1011]

for seed in seeds:
    optimizer = SpotOptim(
        fun=ackley,
        bounds=[(-5, 5), (-5, 5)],
        max_iter=40,
        n_initial=20,
        seed=seed,
        verbose=False
    )
    result = optimizer.optimize()
    results.append(result.fun)
    print(f"Run with seed {seed:4d}: f(x) = {result.fun:.6f}")

# Analyze robustness
print(f"\nBest result: {min(results):.6f}")
print(f"Worst result: {max(results):.6f}")
print(f"Mean: {np.mean(results):.6f}")
print(f"Std dev: {np.std(results):.6f}")
Run with seed 42: f(x) = 0.000907
Run with seed 123: f(x) = 0.001394
Run with seed 456: f(x) = 0.001941
Run with seed 789: f(x) = 0.000616
Run with seed 1011: f(x) = 0.003029
Best result: 0.000616
Worst result: 0.003029
Mean: 0.001578
Std dev: 0.000854
11.3.4 Example 4: Reproducible Initial Design
The seed ensures that even the initial design points are reproducible:
import numpy as np
from spotoptim import SpotOptim

def simple_quadratic(X):
    return np.sum((X - 1)**2, axis=1)

# Create two optimizers with same seed
opt1 = SpotOptim(
    fun=simple_quadratic,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=25,
    n_initial=10,
    seed=999
)
opt2 = SpotOptim(
    fun=simple_quadratic,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=25,
    n_initial=10,
    seed=999   # Same seed
)

# Run both optimizations
result1 = opt1.optimize()
result2 = opt2.optimize()

# Verify identical results
print("Initial design points are identical:", np.allclose(opt1.X_[:10], opt2.X_[:10]))
print("All evaluated points are identical:", np.allclose(opt1.X_, opt2.X_))
print("All function values are identical:", np.allclose(opt1.y_, opt2.y_))
print("Best solutions are identical:", np.allclose(result1.x, result2.x))
Initial design points are identical: True
All evaluated points are identical: True
All function values are identical: True
Best solutions are identical: True
11.3.5 Example 5: Custom Initial Design with Seed
Even when providing a custom initial design, the seed ensures reproducible subsequent iterations:
import numpy as np
from spotoptim import SpotOptim

def beale(X):
    """Beale function"""
    x = X[:, 0]
    y = X[:, 1]
    term1 = (1.5 - x + x * y)**2
    term2 = (2.25 - x + x * y**2)**2
    term3 = (2.625 - x + x * y**3)**2
    return term1 + term2 + term3

# Custom initial design (e.g., from previous knowledge)
X_start = np.array([
    [0.0, 0.0],
    [1.0, 1.0],
    [2.0, 2.0],
    [-1.0, -1.0]
])

# Run twice with same seed and initial design
opt1 = SpotOptim(
    fun=beale,
    bounds=[(-4.5, 4.5), (-4.5, 4.5)],
    max_iter=30,
    n_initial=10,
    seed=777
)
result1 = opt1.optimize(X0=X_start)

opt2 = SpotOptim(
    fun=beale,
    bounds=[(-4.5, 4.5), (-4.5, 4.5)],
    max_iter=30,
    n_initial=10,
    seed=777   # Same seed
)
result2 = opt2.optimize(X0=X_start)

print("Results are identical:", np.allclose(result1.x, result2.x))
print(f"Best value: {result1.fun:.6f}")
Results are identical: True
Best value: 3.201102
11.4 Advanced Topics
11.4.1 Seed and Noisy Functions
When optimizing noisy functions with repeated evaluations, the seed ensures reproducible noise:
import numpy as np
from spotoptim import SpotOptim

def noisy_sphere(X):
    """Sphere function with Gaussian noise"""
    base = np.sum(X**2, axis=1)
    noise = np.random.normal(0, 0.1, size=base.shape)
    return base + noise

optimizer = SpotOptim(
    fun=noisy_sphere,
    bounds=[(-5, 5), (-5, 5)],
    max_iter=40,
    n_initial=20,
    repeats_initial=3,    # 3 evaluations per point
    repeats_surrogate=2,
    seed=42               # Ensures same noise pattern
)
result = optimizer.optimize()

print(f"Best mean value: {optimizer.min_mean_y:.6f}")
print(f"Variance at best: {optimizer.min_var_y:.6f}")
Best mean value: 0.056456
Variance at best: 0.003927
Important: With the same seed, even the noise is identical across runs, provided the noise comes from a random source that the seed controls. If your objective function manages its own random generator, seed it separately (see the questions in Section 11.7).
11.4.2 Different Seeds for Different Exploration
Use different seeds to explore different regions systematically:
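For instance, a minimal sketch (my_objective and my_bounds are placeholders for your own problem, and the budget of 30 evaluations per run is an assumption):
# Sketch: start several runs from different seeds and compare where they converge
exploration_seeds = [1, 2, 3]
best_points = []
for s in exploration_seeds:
    opt = SpotOptim(fun=my_objective, bounds=my_bounds, max_iter=30, seed=s)
    res = opt.optimize()
    best_points.append(res.x)   # each seed yields a different, but reproducible, search path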
11.5 Best Practices
11.5.2 2. Document Your Seed
# Configuration for experiment reported in Section 4.2
EXPERIMENT_SEED = 2024
MAX_ITERATIONS = 100

optimizer = SpotOptim(
    fun=my_objective,
    bounds=my_bounds,
    max_iter=MAX_ITERATIONS,
    seed=EXPERIMENT_SEED
)
11.5.3 3. Use Different Seeds for Different Experiments
# Different experiments should use different seeds
BASELINE_SEED = 100
EXPERIMENT_A_SEED = 200
EXPERIMENT_B_SEED = 300
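Each experiment's optimizer is then created with its own constant; for instance (a sketch, with objective and bounds as placeholders for your own problem):
baseline_opt = SpotOptim(fun=objective, bounds=bounds, seed=BASELINE_SEED)
experiment_a_opt = SpotOptim(fun=objective, bounds=bounds, seed=EXPERIMENT_A_SEED)
experiment_b_opt = SpotOptim(fun=objective, bounds=bounds, seed=EXPERIMENT_B_SEED)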
11.5.4 4. Test Robustness Across Multiple Seeds
# Run same optimization with multiple seeds
for seed in [42, 123, 456, 789, 1011]:
    optimizer = SpotOptim(fun=objective, bounds=bounds, seed=seed)
    result = optimizer.optimize()
    # Analyze results
11.6 What the Seed Controls
The seed parameter ensures reproducibility by controlling:
Initial Design Generation: Latin Hypercube Sampling produces the same initial points
Surrogate Model: Gaussian Process random initialization is identical
Acquisition Optimization: Differential evolution explores the same candidates
Random Sampling: Any random exploration uses the same random numbers
This guarantees that the entire optimization pipeline is deterministic and reproducible.
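As an illustration of how such a pipeline can be made deterministic, the sketch below seeds analogous SciPy and scikit-learn building blocks from a single seed. It shows the general idea only and is not SpotOptim's actual implementation.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor

seed = 42
rng = np.random.default_rng(seed)

# 1. Initial design: Latin Hypercube Sampling with a fixed seed
sampler = qmc.LatinHypercube(d=2, seed=seed)
X0 = qmc.scale(sampler.random(n=10), [-5, -5], [5, 5])

# 2. Surrogate model: Gaussian Process with a fixed random state
gp = GaussianProcessRegressor(random_state=seed)

# 3. Acquisition optimization: differential evolution with a fixed seed
def acquisition(x):
    # placeholder acquisition function for the sketch
    return float(np.sum(np.asarray(x)**2))

res = differential_evolution(acquisition, bounds=[(-5, 5), (-5, 5)], seed=seed)

# 4. Any additional random sampling draws from the same generator
extra_point = rng.uniform(-5, 5, size=2)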
11.7 Common Questions
Q: Can I use seed=0?
A: Yes, any integer (including 0) is a valid seed.
Q: Will different Python versions give the same results?
A: Generally yes, but minor numerical differences may occur due to underlying library changes. Use the same environment for exact reproducibility.
Q: Does the seed affect the objective function?
A: No, the seed only affects SpotOptim’s internal random processes. If your objective function has its own randomness, you’ll need to control that separately.
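If the objective has its own noise source, one way to make it reproducible, independently of SpotOptim's seed, is to give it an explicitly seeded generator. This is a sketch; the factory function and its noise_seed value are your own choices, not part of the SpotOptim API.
import numpy as np

def make_noisy_sphere(noise_seed=2024):
    """Return a noisy sphere function whose noise is controlled by its own seed."""
    rng = np.random.default_rng(noise_seed)
    def noisy_sphere(X):
        base = np.sum(X**2, axis=1)
        noise = rng.normal(0, 0.1, size=base.shape)   # drawn from the objective's own generator
        return base + noise
    return noisy_sphere

# Re-create the function before each run so the noise sequence starts from the same state
objective = make_noisy_sphere(noise_seed=2024)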
Q: How do I choose a good seed value?
A: Any integer works. Common choices are 42, 123, or dates (e.g., 20241112). What matters is consistency, not the specific value.