When tuning hyperparameters of machine learning models, you need a structured way to define the search space. The ParameterSet class provides a fluent API for declaring float, integer, and categorical variables with bounds, defaults, and optional transformations.
Building a ParameterSet
Chain add_float(), add_int(), and add_factor() calls to build a search space:
from spotoptim.hyperparameters.parameters import ParameterSet

ps = ParameterSet()
ps.add_float("learning_rate", low=-5, high=-1, default=-3, transform="log10")
ps.add_int("num_layers", low=1, high=5, default=2)
ps.add_float("dropout", low=0.0, high=0.5, default=0.1)

print(f"Names : {ps.names()}")
print(f"Bounds : {ps.bounds}")
print(f"Var types: {ps.var_type}")
print(f"Defaults : {ps.sample_default()}")
Names : ['learning_rate', 'num_layers', 'dropout']
Bounds : [(-5, -1), (1, 5), (0.0, 0.5)]
Var types: ['float', 'int', 'float']
Defaults : {'learning_rate': -3, 'num_layers': 2, 'dropout': 0.1}
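The example above uses only numeric parameters. add_factor() declares categorical variables; here is a minimal sketch, assuming it accepts the list of levels and a default (the keyword names are assumptions, so check the ParameterSet API for the exact signature):

# Sketch: a categorical hyperparameter. The keyword names levels= and
# default= are assumptions, not confirmed API.
ps.add_factor("activation", levels=["ReLU", "Tanh", "Sigmoid"], default="ReLU")

Categorical bounds then appear as the list of levels, as the MLP example below shows.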
Connecting to SpotOptim
The ParameterSet properties (bounds, var_type, var_name, var_trans) map directly to SpotOptim constructor arguments:
from spotoptim.hyperparameters.parameters import ParameterSet
from spotoptim import SpotOptim
import numpy as np

ps = ParameterSet()
ps.add_float("x1", low=-5, high=5, default=0)
ps.add_float("x2", low=-5, high=5, default=0)

def objective(X):
    X = np.atleast_2d(X)
    return np.sum(X**2, axis=1)

opt = SpotOptim(
    fun=objective,
    bounds=ps.bounds,
    var_type=ps.var_type,
    var_name=ps.names(),
    var_trans=ps.var_trans,
    max_iter=20,
    n_initial=10,
    seed=0,
)
result = opt.optimize()
print(f"Best x : {result.x}")
print(f"Best f(x) : {result.fun:.6f}")
Best x : [-0.00016718 0.00071419]
Best f(x) : 0.000001
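Because result.x follows the order of ps.names(), the best point can be labeled with plain Python, no extra API required:

# Pair each optimized value with its parameter name.
best = dict(zip(ps.names(), result.x))
print(best)  # e.g. {'x1': -0.00016718, 'x2': 0.00071419}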
Neural Network Default Parameters
The MLP and LinearRegressor classes provide get_default_parameters(), which returns a pre-configured ParameterSet covering their tunable hyperparameters:
from spotoptim.nn import MLP

ps = MLP.get_default_parameters()
print(f"Names : {ps.names()}")
print(f"Bounds: {ps.bounds}")
Names : ['l1', 'num_hidden_layers', 'activation', 'lr', 'optimizer']
Bounds: [(16, 128), (1, 5), ['ReLU', 'Tanh', 'Sigmoid', 'LeakyReLU', 'ELU'], (0.0001, 100.0), ['Adam', 'SGD', 'RMSprop', 'AdamW']]
This makes it easy to set up a hyperparameter tuning loop for neural network architectures.
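A minimal sketch of such a loop, assuming SpotOptim passes candidate configurations to the objective as rows of a 2D array (as in the quadratic example above); how categorical values are encoded in those rows, and the hypothetical train_and_score() helper that builds, trains, and scores a network from a config, are assumptions to adapt to your setup:

import numpy as np
from spotoptim import SpotOptim
from spotoptim.nn import MLP

ps = MLP.get_default_parameters()

def objective(X):
    """Evaluate each candidate configuration; row order follows ps.names()."""
    X = np.atleast_2d(X)
    losses = []
    for row in X:
        config = dict(zip(ps.names(), row))
        # train_and_score() is a hypothetical helper: build an MLP from
        # config, train it, and return a validation loss to minimize.
        losses.append(train_and_score(config))
    return np.array(losses)

opt = SpotOptim(
    fun=objective,
    bounds=ps.bounds,
    var_type=ps.var_type,
    var_name=ps.names(),
    var_trans=ps.var_trans,
    max_iter=30,
    n_initial=10,
    seed=0,
)
result = opt.optimize()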