47  Hyperparameter Tuning with PyTorch Lightning and User Data Sets

In this section, we show how user-specified data can be used in the PyTorch Lightning hyperparameter tuning workflow with spotpython.

47.1 Loading a User-Specified Data Set

Using a user-specified data set is straightforward.

The user simply needs to provide a data set and load it as a spotpython CSVDataset() class by specifying the path, filename, and target column.

Consider the following example, where the user has a data set stored in the userData directory. The data set is stored in a file named data.csv. The target column is named target. To show the data, it is loaded as a pandas data frame and the first 5 rows are displayed. This step is not necessary for the hyperparameter tuning process, but it is useful for understanding the data.

# load the csv data set as a pandas dataframe and display the first 5 rows
import pandas as pd
data = pd.read_csv("./userData/data.csv")
print(data.head())
        age       sex       bmi        bp        s1        s2        s3  \
0  0.038076  0.050680  0.061696  0.021872 -0.044223 -0.034821 -0.043401   
1 -0.001882 -0.044642 -0.051474 -0.026328 -0.008449 -0.019163  0.074412   
2  0.085299  0.050680  0.044451 -0.005670 -0.045599 -0.034194 -0.032356   
3 -0.089063 -0.044642 -0.011595 -0.036656  0.012191  0.024991 -0.036038   
4  0.005383 -0.044642 -0.036385  0.021872  0.003935  0.015596  0.008142   

         s4        s5        s6  target  
0 -0.002592  0.019907 -0.017646   151.0  
1 -0.039493 -0.068332 -0.092204    75.0  
2 -0.002592  0.002861 -0.025930   141.0  
3  0.034309  0.022688 -0.009362   206.0  
4 -0.002592 -0.031988 -0.046641   135.0  
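
The values shown above match scikit-learn's scaled diabetes data, so the file can be recreated if it is not available locally. The following sketch is a convenience, not part of the tuning workflow; it assumes that scikit-learn is installed and writes the userData/data.csv file used above.

# hedged sketch: recreate ./userData/data.csv from scikit-learn's diabetes data
import os
from sklearn.datasets import load_diabetes
os.makedirs("./userData", exist_ok=True)
df = load_diabetes(as_frame=True).frame  # features age, ..., s6 plus a "target" column
df.to_csv("./userData/data.csv", index=False)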

Next, the data set is loaded as a spotpython CSVDataset() class. This step is necessary for the hyperparameter tuning process.

from spotpython.data.csvdataset import CSVDataset
import torch
data_set = CSVDataset(directory="./userData/",
                      filename="data.csv",
                      target_column="target",
                      feature_type=torch.float32,
                      target_type=torch.float32,
                      rmNA=True)
print(len(data_set))
442
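
Since CSVDataset implements the PyTorch Dataset interface (it supports len() above and is passed to a DataLoader below), individual samples can also be inspected by indexing. This quick sanity check is not required for the tuning process; the expected values in the comments are taken from the first row shown above.

# optional sanity check: inspect the first sample (features, target)
X, y = data_set[0]
print(X.shape, X.dtype)  # expected: torch.Size([10]) torch.float32
print(y)                 # expected: tensor(151.)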

The following step is also not necessary for the hyperparameter tuning process, but it is useful for understanding the data: the data set is wrapped in a DataLoader from torch.utils.data to inspect a single batch.

# Set batch size for DataLoader
batch_size = 5
# Create DataLoader
from torch.utils.data import DataLoader
dataloader = DataLoader(data_set, batch_size=batch_size, shuffle=False)

# Iterate over the data in the DataLoader
for batch in dataloader:
    inputs, targets = batch
    print(f"Batch Size: {inputs.size(0)}")
    print(f"Inputs Shape: {inputs.shape}")
    print(f"Targets Shape: {targets.shape}")
    print("---------------")
    print(f"Inputs: {inputs}")
    print(f"Targets: {targets}")
    break
Batch Size: 5
Inputs Shape: torch.Size([5, 10])
Targets Shape: torch.Size([5])
---------------
Inputs: tensor([[ 0.0381,  0.0507,  0.0617,  0.0219, -0.0442, -0.0348, -0.0434, -0.0026,
          0.0199, -0.0176],
        [-0.0019, -0.0446, -0.0515, -0.0263, -0.0084, -0.0192,  0.0744, -0.0395,
         -0.0683, -0.0922],
        [ 0.0853,  0.0507,  0.0445, -0.0057, -0.0456, -0.0342, -0.0324, -0.0026,
          0.0029, -0.0259],
        [-0.0891, -0.0446, -0.0116, -0.0367,  0.0122,  0.0250, -0.0360,  0.0343,
          0.0227, -0.0094],
        [ 0.0054, -0.0446, -0.0364,  0.0219,  0.0039,  0.0156,  0.0081, -0.0026,
         -0.0320, -0.0466]])
Targets: tensor([151.,  75., 141., 206., 135.])

Similar to the setting from Section 45.1, the hyperparameter tuning setup is defined. Instead of using the Diabetes data set, the user data set is used. The data_set parameter is set to the user data set. The fun_control dictionary is set up via the fun_control_init function.

Note that we have set the fun_evals parameter to 12 and the init_size to 7 to reduce the computational time for this example. The divergence_threshold is set to 5,000, based on some pre-experiments with the user data set.

from spotpython.hyperdict.light_hyper_dict import LightHyperDict
from spotpython.fun.hyperlight import HyperLight
from spotpython.utils.init import (fun_control_init, surrogate_control_init, design_control_init)
from spotpython.utils.eda import print_res_table
from spotpython.hyperparameters.values import set_hyperparameter
from spotpython.spot import Spot

fun_control = fun_control_init(
    PREFIX="601",
    fun_evals=12,
    max_time=1,
    data_set=data_set,
    core_model_name="light.regression.NNLinearRegressor",
    hyperdict=LightHyperDict,
    divergence_threshold=5_000,
    _L_in=10,
    _L_out=1)

design_control = design_control_init(init_size=7)

set_hyperparameter(fun_control, "initialization", ["Default"])

fun = HyperLight().fun

spot_tuner = Spot(fun=fun, fun_control=fun_control, design_control=design_control)
module_name: light
submodule_name: regression
model_name: NNLinearRegressor
res = spot_tuner.run()
print_res_table(spot_tuner)
spot_tuner.plot_important_hyperparameter_contour(max_imp=3)
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 51.9 K │ train │ 409 K │  [4, 10] │    [4, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 51.9 K                                                                                           
Non-trainable params: 0                                                                                            
Total params: 51.9 K                                                                                               
Total estimated model params size (MB): 0                                                                          
Modules in train mode: 17                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 409 K                                                                                                 
train_model(): trainer.fit failed with exception: SparseAdam does not support dense gradients, please consider Adam instead
train_model result: {'val_loss': 23320.16015625, 'hp_metric': 23320.16015625}
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 12.8 M │ train │ 203 M │  [8, 10] │    [8, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 12.8 M                                                                                           
Non-trainable params: 0                                                                                            
Total params: 12.8 M                                                                                               
Total estimated model params size (MB): 51                                                                         
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 203 M                                                                                                 
train_model result: {'val_loss': nan, 'hp_metric': nan}
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  205 K │ train │ 1.6 M │  [4, 10] │    [4, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 205 K                                                                                            
Non-trainable params: 0                                                                                            
Total params: 205 K                                                                                                
Total estimated model params size (MB): 0                                                                          
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 1.6 M                                                                                                 
train_model result: {'val_loss': 22162.58984375, 'hp_metric': 22162.58984375}
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃  FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  3.2 M │ train │ 51.0 M │  [8, 10] │    [8, 1] │
└───┴────────┴────────────┴────────┴───────┴────────┴──────────┴───────────┘
Trainable params: 3.2 M                                                                                            
Non-trainable params: 0                                                                                            
Total params: 3.2 M                                                                                                
Total estimated model params size (MB): 12                                                                         
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 51.0 M                                                                                                
train_model result: {'val_loss': 16100.7783203125, 'hp_metric': 16100.7783203125}
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃  FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  802 K │ train │ 12.8 M │  [8, 10] │    [8, 1] │
└───┴────────┴────────────┴────────┴───────┴────────┴──────────┴───────────┘
Trainable params: 802 K                                                                                            
Non-trainable params: 0                                                                                            
Total params: 802 K                                                                                                
Total estimated model params size (MB): 3                                                                          
Modules in train mode: 17                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 12.8 M                                                                                                
train_model result: {'val_loss': nan, 'hp_metric': nan}
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 50.9 M │ train │ 203 M │  [2, 10] │    [2, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 50.9 M                                                                                           
Non-trainable params: 0                                                                                            
Total params: 50.9 M                                                                                               
Total estimated model params size (MB): 203                                                                        
Modules in train mode: 17                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 203 M                                                                                                 
train_model result: {'val_loss': nan, 'hp_metric': nan}
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃  FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  807 K │ train │ 25.6 M │ [16, 10] │   [16, 1] │
└───┴────────┴────────────┴────────┴───────┴────────┴──────────┴───────────┘
Trainable params: 807 K                                                                                            
Non-trainable params: 0                                                                                            
Total params: 807 K                                                                                                
Total estimated model params size (MB): 3                                                                          
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 25.6 M                                                                                                
train_model result: {'val_loss': 24040.451171875, 'hp_metric': 24040.451171875}
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃  FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  3.2 M │ train │ 51.0 M │  [8, 10] │    [8, 1] │
└───┴────────┴────────────┴────────┴───────┴────────┴──────────┴───────────┘
Trainable params: 3.2 M                                                                                            
Non-trainable params: 0                                                                                            
Total params: 3.2 M                                                                                                
Total estimated model params size (MB): 12                                                                         
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 51.0 M                                                                                                
train_model result: {'val_loss': 16582.51171875, 'hp_metric': 16582.51171875}
spotpython tuning: 16100.7783203125 [####------] 41.67%. Success rate: 0.00% 
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃  FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  3.2 M │ train │ 51.0 M │  [8, 10] │    [8, 1] │
└───┴────────┴────────────┴────────┴───────┴────────┴──────────┴───────────┘
Trainable params: 3.2 M                                                                                            
Non-trainable params: 0                                                                                            
Total params: 3.2 M                                                                                                
Total estimated model params size (MB): 12                                                                         
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 51.0 M                                                                                                
train_model result: {'val_loss': 22252.875, 'hp_metric': 22252.875}
spotpython tuning: 16100.7783203125 [#####-----] 50.00%. Success rate: 0.00% 
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃  FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  3.2 M │ train │ 51.0 M │  [8, 10] │    [8, 1] │
└───┴────────┴────────────┴────────┴───────┴────────┴──────────┴───────────┘
Trainable params: 3.2 M                                                                                            
Non-trainable params: 0                                                                                            
Total params: 3.2 M                                                                                                
Total estimated model params size (MB): 12                                                                         
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 51.0 M                                                                                                
train_model result: {'val_loss': 14519.490234375, 'hp_metric': 14519.490234375}
spotpython tuning: 14519.490234375 [######----] 58.33%. Success rate: 33.33% 
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃  FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 12.8 M │ train │ 50.9 M │  [2, 10] │    [2, 1] │
└───┴────────┴────────────┴────────┴───────┴────────┴──────────┴───────────┘
Trainable params: 12.8 M                                                                                           
Non-trainable params: 0                                                                                            
Total params: 12.8 M                                                                                               
Total estimated model params size (MB): 51                                                                         
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 50.9 M                                                                                                
train_model result: {'val_loss': 5972.04296875, 'hp_metric': 5972.04296875}
spotpython tuning: 5972.04296875 [#######---] 66.67%. Success rate: 50.00% 
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │  205 K │ train │ 3.2 M │  [8, 10] │    [8, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 205 K                                                                                            
Non-trainable params: 0                                                                                            
Total params: 205 K                                                                                                
Total estimated model params size (MB): 0                                                                          
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 3.2 M                                                                                                 
train_model(): trainer.fit failed with exception: SparseAdam does not support dense gradients, please consider Adam instead
train_model result: {'val_loss': 22818.576171875, 'hp_metric': 22818.576171875}
spotpython tuning: 5972.04296875 [########--] 75.00%. Success rate: 40.00% 
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 50.9 M │ train │ 203 M │  [2, 10] │    [2, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 50.9 M                                                                                           
Non-trainable params: 0                                                                                            
Total params: 50.9 M                                                                                               
Total estimated model params size (MB): 203                                                                        
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 203 M                                                                                                 
train_model result: {'val_loss': 6787.0361328125, 'hp_metric': 6787.0361328125}
spotpython tuning: 5972.04296875 [########--] 83.33%. Success rate: 33.33% 
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 50.9 M │ train │ 203 M │  [2, 10] │    [2, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 50.9 M                                                                                           
Non-trainable params: 0                                                                                            
Total params: 50.9 M                                                                                               
Total estimated model params size (MB): 203                                                                        
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 203 M                                                                                                 
train_model result: {'val_loss': 6273.56005859375, 'hp_metric': 6273.56005859375}
spotpython tuning: 5972.04296875 [#########-] 91.67%. Success rate: 28.57% 
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 50.9 M │ train │ 203 M │  [2, 10] │    [2, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 50.9 M                                                                                           
Non-trainable params: 0                                                                                            
Total params: 50.9 M                                                                                               
Total estimated model params size (MB): 203                                                                        
Modules in train mode: 17                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 203 M                                                                                                 
train_model result: {'val_loss': nan, 'hp_metric': nan}
Using spacefilling design as fallback.
┏━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┓
┃   ┃ Name   ┃ Type       ┃ Params ┃ Mode  ┃ FLOPs ┃ In sizes ┃ Out sizes ┃
┡━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━┩
│ 0 │ layers │ Sequential │ 53.2 K │ train │ 409 K │  [4, 10] │    [4, 1] │
└───┴────────┴────────────┴────────┴───────┴───────┴──────────┴───────────┘
Trainable params: 53.2 K                                                                                           
Non-trainable params: 0                                                                                            
Total params: 53.2 K                                                                                               
Total estimated model params size (MB): 0                                                                          
Modules in train mode: 24                                                                                          
Modules in eval mode: 0                                                                                            
Total FLOPs: 409 K                                                                                                 
train_model result: {'val_loss': 21099.947265625, 'hp_metric': 21099.947265625}
spotpython tuning: 5972.04296875 [##########] 100.00%. Success rate: 28.57% Done...

Experiment saved to 601_res.pkl
| name           | type   | default   |   lower |   upper | tuned               | transform             |   importance | stars   |
|----------------|--------|-----------|---------|---------|---------------------|-----------------------|--------------|---------|
| l1             | int    | 3         |     3.0 |     8.0 | 7.0                 | transform_power_2_int |         2.91 | *       |
| epochs         | int    | 4         |     4.0 |     9.0 | 7.0                 | transform_power_2_int |         0.08 |         |
| batch_size     | int    | 4         |     1.0 |     4.0 | 1.0                 | transform_power_2_int |         4.77 | *       |
| act_fn         | factor | ReLU      |     0.0 |     5.0 | Sigmoid             | None                  |        11.35 | *       |
| optimizer      | factor | SGD       |     0.0 |    11.0 | Adagrad             | None                  |         0.08 |         |
| dropout_prob   | float  | 0.01      |     0.0 |    0.25 | 0.06316793576530161 | None                  |       100.00 | ***     |
| lr_mult        | float  | 1.0       |     0.1 |    10.0 | 9.004716312092247   | None                  |         0.08 |         |
| patience       | int    | 2         |     2.0 |     6.0 | 3.0                 | transform_power_2_int |         0.08 |         |
| batch_norm     | factor | 0         |     0.0 |     1.0 | 1                   | None                  |         0.08 |         |
| initialization | factor | Default   |     0.0 |     0.0 | Default             | None                  |         0.00 |         |
l1:  2.9079631170943667
epochs:  0.07976513592982082
batch_size:  4.767854437417129
act_fn:  11.347665219718287
optimizer:  0.07976513592982082
dropout_prob:  100.0
lr_mult:  0.07976513592982082
patience:  0.07976513592982082
batch_norm:  0.07976513592982082
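
Since the run reports Experiment saved to 601_res.pkl, the tuned experiment can be reloaded for later analysis. The following sketch assumes the load_experiment helper from spotpython.utils.file and the return values shown in the spotpython documentation; the exact signature may differ between spotpython versions.

# hedged sketch: reload the saved experiment (load_experiment is assumed
# from the spotpython documentation and may differ between versions)
from spotpython.utils.file import load_experiment
from spotpython.utils.eda import print_res_table
(spot_tuner, fun_control, design_control,
 surrogate_control, optimizer_control) = load_experiment("601_res.pkl")
print_res_table(spot_tuner)  # should reproduce the result table shown above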

47.2 Summary

This section showed how to use a user-specified data set for the hyperparameter tuning process with spotpython. The user only needs to provide the data set and load it as a spotpython CSVDataset() class by specifying the directory, filename, and target column.