31  Hyperparameter Tuning with spotpython and PyTorch Lightning for the Diabetes Data Set

In this section, we show how spotpython can be integrated into the PyTorch Lightning training workflow for a regression task and demonstrate how easily spotpython can be used to tune the hyperparameters of a PyTorch Lightning model.

31.1 The Basic Setting

import os
from math import inf
import warnings
warnings.filterwarnings("ignore")

After importing the necessary libraries, the fun_control dictionary is set up via the fun_control_init function. It contains the following entries:

  • PREFIX: a unique identifier for the experiment
  • fun_evals: the number of function evaluations
  • max_time: the maximum run time in minutes
  • data_set: the data set. Here we use the Diabetes data set that is provided by spotpython.
  • core_model_name: the class name of the neural network model. This neural network model is provided by spotpython.
  • hyperdict: the hyperparameter dictionary. This dictionary is used to define the hyperparameters of the neural network model. It is also provided by spotpython.
  • _L_in: the number of input features. Since the Diabetes data set has 10 features, _L_in is set to 10.
  • _L_out: the number of output features. Since we want to predict a single value, _L_out is set to 1.

The HyperLight class is used to define the objective function fun. It connects the PyTorch and the spotpython methods and is provided by spotpython.

from spotpython.data.diabetes import Diabetes
from spotpython.hyperdict.light_hyper_dict import LightHyperDict
from spotpython.fun.hyperlight import HyperLight
from spotpython.utils.init import (fun_control_init, surrogate_control_init, design_control_init)
from spotpython.utils.eda import gen_design_table
from spotpython.spot import spot
from spotpython.utils.file import get_experiment_filename

PREFIX="601"

data_set = Diabetes()

fun_control = fun_control_init(
    PREFIX=PREFIX,
    save_experiment=True,
    fun_evals=inf,
    max_time=1,
    data_set = data_set,
    core_model_name="light.regression.NNLinearRegressor",
    hyperdict=LightHyperDict,
    _L_in=10,
    _L_out=1)

fun = HyperLight().fun
module_name: light
submodule_name: regression
model_name: NNLinearRegressor

The method set_hyperparameter allows the user to modify default hyperparameter settings. Here we modify some hyperparameters to keep the model small and to decrease the tuning time.

from spotpython.hyperparameters.values import set_hyperparameter
set_hyperparameter(fun_control, "optimizer", [ "Adadelta", "Adam", "Adamax"])
set_hyperparameter(fun_control, "l1", [3,4])
set_hyperparameter(fun_control, "epochs", [3,7])
set_hyperparameter(fun_control, "batch_size", [4,11])
set_hyperparameter(fun_control, "dropout_prob", [0.0, 0.025])
set_hyperparameter(fun_control, "patience", [2,3])

design_control = design_control_init(init_size=10)

print(gen_design_table(fun_control))
| name           | type   | default   |   lower |   upper | transform             |
|----------------|--------|-----------|---------|---------|-----------------------|
| l1             | int    | 3         |     3   |   4     | transform_power_2_int |
| epochs         | int    | 4         |     3   |   7     | transform_power_2_int |
| batch_size     | int    | 4         |     4   |  11     | transform_power_2_int |
| act_fn         | factor | ReLU      |     0   |   5     | None                  |
| optimizer      | factor | SGD       |     0   |   2     | None                  |
| dropout_prob   | float  | 0.01      |     0   |   0.025 | None                  |
| lr_mult        | float  | 1.0       |     0.1 |  10     | None                  |
| patience       | int    | 2         |     2   |   3     | transform_power_2_int |
| batch_norm     | factor | 0         |     0   |   1     | None                  |
| initialization | factor | Default   |     0   |   4     | None                  |

Finally, a Spot object is created. Calling the method run() starts the hyperparameter tuning process.

spot_tuner = spot.Spot(fun=fun, fun_control=fun_control, design_control=design_control)
res = spot_tuner.run()

In fun(): config:
{'act_fn': Sigmoid(),
 'batch_norm': False,
 'batch_size': 2048,
 'dropout_prob': np.float64(0.010469763733360567),
 'epochs': 16,
 'initialization': 'xavier_uniform',
 'l1': 16,
 'lr_mult': np.float64(4.135888451953213),
 'optimizer': 'Adam',
 'patience': 4}
train_model result: {'val_loss': 23995.974609375, 'hp_metric': 23995.974609375}

In fun(): config:
{'act_fn': ReLU(),
 'batch_norm': False,
 'batch_size': 64,
 'dropout_prob': np.float64(0.0184251494885258),
 'epochs': 32,
 'initialization': 'kaiming_normal',
 'l1': 8,
 'lr_mult': np.float64(3.1418668140600845),
 'optimizer': 'Adadelta',
 'patience': 8}
train_model result: {'val_loss': 23983.41015625, 'hp_metric': 23983.41015625}

In fun(): config:
{'act_fn': ELU(),
 'batch_norm': True,
 'batch_size': 256,
 'dropout_prob': np.float64(0.00996276270809942),
 'epochs': 64,
 'initialization': 'Default',
 'l1': 16,
 'lr_mult': np.float64(8.543578103398445),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 23741.548828125, 'hp_metric': 23741.548828125}

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 512,
 'dropout_prob': np.float64(0.004305336774252681),
 'epochs': 8,
 'initialization': 'kaiming_normal',
 'l1': 8,
 'lr_mult': np.float64(0.3009268823483702),
 'optimizer': 'Adamax',
 'patience': 4}
train_model result: {'val_loss': 24071.533203125, 'hp_metric': 24071.533203125}

In fun(): config:
{'act_fn': Tanh(),
 'batch_norm': True,
 'batch_size': 128,
 'dropout_prob': np.float64(0.021718144359373085),
 'epochs': 32,
 'initialization': 'kaiming_uniform',
 'l1': 16,
 'lr_mult': np.float64(8.005670267977834),
 'optimizer': 'Adam',
 'patience': 8}
train_model result: {'val_loss': 23948.751953125, 'hp_metric': 23948.751953125}

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': False,
 'batch_size': 32,
 'dropout_prob': np.float64(0.023931753071792624),
 'epochs': 16,
 'initialization': 'xavier_normal',
 'l1': 16,
 'lr_mult': np.float64(1.2532486761645163),
 'optimizer': 'Adamax',
 'patience': 8}
train_model result: {'val_loss': 24022.400390625, 'hp_metric': 24022.400390625}

In fun(): config:
{'act_fn': ELU(),
 'batch_norm': False,
 'batch_size': 512,
 'dropout_prob': np.float64(0.0074444117802003025),
 'epochs': 8,
 'initialization': 'kaiming_uniform',
 'l1': 8,
 'lr_mult': np.float64(9.535342719713716),
 'optimizer': 'Adam',
 'patience': 8}
train_model result: {'val_loss': 24036.333984375, 'hp_metric': 24036.333984375}

In fun(): config:
{'act_fn': Swish(),
 'batch_norm': True,
 'batch_size': 32,
 'dropout_prob': np.float64(0.0012790404219919403),
 'epochs': 128,
 'initialization': 'kaiming_uniform',
 'l1': 8,
 'lr_mult': np.float64(2.4659566199812857),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 23058.591796875, 'hp_metric': 23058.591796875}

In fun(): config:
{'act_fn': Tanh(),
 'batch_norm': False,
 'batch_size': 128,
 'dropout_prob': np.float64(0.0153979445945591),
 'epochs': 32,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(6.089028896372417),
 'optimizer': 'Adamax',
 'patience': 4}
train_model result: {'val_loss': 23857.533203125, 'hp_metric': 23857.533203125}

In fun(): config:
{'act_fn': ReLU(),
 'batch_norm': True,
 'batch_size': 1024,
 'dropout_prob': np.float64(0.013939072152682473),
 'epochs': 64,
 'initialization': 'xavier_uniform',
 'l1': 16,
 'lr_mult': np.float64(5.8899766345108855),
 'optimizer': 'Adam',
 'patience': 8}
train_model result: {'val_loss': 23736.1171875, 'hp_metric': 23736.1171875}

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 32,
 'dropout_prob': np.float64(0.0012758353654068045),
 'epochs': 128,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(2.4660088468647756),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 15469.21484375, 'hp_metric': 15469.21484375}
spotpython tuning: 15469.21484375 [#---------] 5.61% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 32,
 'dropout_prob': np.float64(0.0013261923342069866),
 'epochs': 128,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(2.511582050301212),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 6582.068359375, 'hp_metric': 6582.068359375}
spotpython tuning: 6582.068359375 [#---------] 10.51% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': False,
 'batch_size': 64,
 'dropout_prob': np.float64(0.0019916006183154002),
 'epochs': 64,
 'initialization': 'kaiming_uniform',
 'l1': 8,
 'lr_mult': np.float64(2.5380653530176125),
 'optimizer': 'Adamax',
 'patience': 8}
train_model result: {'val_loss': 22864.712890625, 'hp_metric': 22864.712890625}
spotpython tuning: 6582.068359375 [#---------] 14.39% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 32,
 'dropout_prob': np.float64(0.0013553079567303191),
 'epochs': 128,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(2.5380893038418693),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 6037.4853515625, 'hp_metric': 6037.4853515625}
spotpython tuning: 6037.4853515625 [##--------] 18.75% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': False,
 'batch_size': 128,
 'dropout_prob': np.float64(0.015332686834043598),
 'epochs': 64,
 'initialization': 'Default',
 'l1': 16,
 'lr_mult': np.float64(2.5266036088878816),
 'optimizer': 'Adamax',
 'patience': 4}
train_model result: {'val_loss': 6015.8017578125, 'hp_metric': 6015.8017578125}
spotpython tuning: 6015.8017578125 [##--------] 23.37% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 16,
 'dropout_prob': np.float64(0.0034129096422570665),
 'epochs': 128,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(2.5282923048422816),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 6814.80859375, 'hp_metric': 6814.80859375}
spotpython tuning: 6015.8017578125 [###-------] 29.45% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 64,
 'dropout_prob': np.float64(0.0),
 'epochs': 128,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(2.528346925240782),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 13690.265625, 'hp_metric': 13690.265625}
spotpython tuning: 6015.8017578125 [####------] 42.17% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': False,
 'batch_size': 128,
 'dropout_prob': np.float64(0.015332681929620026),
 'epochs': 64,
 'initialization': 'Default',
 'l1': 16,
 'lr_mult': np.float64(2.526603603846932),
 'optimizer': 'Adamax',
 'patience': 4}
train_model result: {'val_loss': 12980.4453125, 'hp_metric': 12980.4453125}
spotpython tuning: 6015.8017578125 [####------] 44.23% 

In fun(): config:
{'act_fn': Swish(),
 'batch_norm': True,
 'batch_size': 16,
 'dropout_prob': np.float64(1.546708893594175e-05),
 'epochs': 8,
 'initialization': 'kaiming_normal',
 'l1': 8,
 'lr_mult': np.float64(6.891564061731185),
 'optimizer': 'Adamax',
 'patience': 4}
train_model result: {'val_loss': 23757.171875, 'hp_metric': 23757.171875}
spotpython tuning: 6015.8017578125 [#####-----] 47.78% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 16,
 'dropout_prob': np.float64(0.024993088686319435),
 'epochs': 128,
 'initialization': 'kaiming_uniform',
 'l1': 8,
 'lr_mult': np.float64(8.371227005925684),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 4064.50146484375, 'hp_metric': 4064.50146484375}
spotpython tuning: 4064.50146484375 [#####-----] 52.34% 

In fun(): config:
{'act_fn': Tanh(),
 'batch_norm': True,
 'batch_size': 1024,
 'dropout_prob': np.float64(5.151999563513428e-05),
 'epochs': 8,
 'initialization': 'Default',
 'l1': 8,
 'lr_mult': np.float64(4.530302051251348),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 23947.400390625, 'hp_metric': 23947.400390625}
spotpython tuning: 4064.50146484375 [#####-----] 54.71% 

In fun(): config:
{'act_fn': ELU(),
 'batch_norm': True,
 'batch_size': 2048,
 'dropout_prob': np.float64(0.0),
 'epochs': 8,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(5.21873051435292),
 'optimizer': 'Adam',
 'patience': 8}
train_model result: {'val_loss': 24075.640625, 'hp_metric': 24075.640625}
spotpython tuning: 4064.50146484375 [######----] 57.37% 

In fun(): config:
{'act_fn': ELU(),
 'batch_norm': True,
 'batch_size': 16,
 'dropout_prob': np.float64(0.0015974332892757896),
 'epochs': 128,
 'initialization': 'kaiming_normal',
 'l1': 16,
 'lr_mult': np.float64(5.558700357989107),
 'optimizer': 'Adamax',
 'patience': 4}
train_model result: {'val_loss': 16271.8310546875, 'hp_metric': 16271.8310546875}
spotpython tuning: 4064.50146484375 [########--] 79.47% 

In fun(): config:
{'act_fn': Swish(),
 'batch_norm': True,
 'batch_size': 16,
 'dropout_prob': np.float64(0.024998572397881703),
 'epochs': 8,
 'initialization': 'xavier_uniform',
 'l1': 16,
 'lr_mult': np.float64(3.5071702450968365),
 'optimizer': 'Adadelta',
 'patience': 8}
train_model result: {'val_loss': 23985.4765625, 'hp_metric': 23985.4765625}
spotpython tuning: 4064.50146484375 [########--] 84.37% 

In fun(): config:
{'act_fn': LeakyReLU(),
 'batch_norm': False,
 'batch_size': 2048,
 'dropout_prob': np.float64(2.7183124839878087e-05),
 'epochs': 8,
 'initialization': 'kaiming_uniform',
 'l1': 8,
 'lr_mult': np.float64(5.664809991767277),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 24025.39453125, 'hp_metric': 24025.39453125}
spotpython tuning: 4064.50146484375 [#########-] 86.65% 

In fun(): config:
{'act_fn': Swish(),
 'batch_norm': True,
 'batch_size': 256,
 'dropout_prob': np.float64(0.0),
 'epochs': 128,
 'initialization': 'Default',
 'l1': 8,
 'lr_mult': np.float64(7.399207731691685),
 'optimizer': 'Adam',
 'patience': 8}
train_model result: {'val_loss': 23910.63671875, 'hp_metric': 23910.63671875}
spotpython tuning: 4064.50146484375 [#########-] 94.57% 

In fun(): config:
{'act_fn': Swish(),
 'batch_norm': True,
 'batch_size': 32,
 'dropout_prob': np.float64(0.0),
 'epochs': 32,
 'initialization': 'xavier_uniform',
 'l1': 8,
 'lr_mult': np.float64(5.16945653957238),
 'optimizer': 'Adadelta',
 'patience': 4}
train_model result: {'val_loss': 14667.853515625, 'hp_metric': 14667.853515625}
spotpython tuning: 4064.50146484375 [##########] 100.00% Done...

Experiment saved to spot_601_experiment.pickle

31.2 Looking at the Results

31.2.1 Tuning Progress

After the hyperparameter tuning run is finished, the progress of the hyperparameter tuning can be visualized with spotpython’s method plot_progress. The black points represent the performance values (score or metric) of hyperparameter configurations from the initial design, whereas the red points represent the hyperparameter configurations found by the surrogate model based optimization.

spot_tuner.plot_progress()

31.2.2 Tuned Hyperparameters and Their Importance

Results can be printed in tabular form.

from spotpython.utils.eda import gen_design_table
print(gen_design_table(fun_control=fun_control, spot=spot_tuner))
| name           | type   | default   |   lower |   upper | tuned                | transform             |   importance | stars   |
|----------------|--------|-----------|---------|---------|----------------------|-----------------------|--------------|---------|
| l1             | int    | 3         |     3.0 |     4.0 | 3.0                  | transform_power_2_int |         0.04 |         |
| epochs         | int    | 4         |     3.0 |     7.0 | 7.0                  | transform_power_2_int |         0.04 |         |
| batch_size     | int    | 4         |     4.0 |    11.0 | 4.0                  | transform_power_2_int |         0.03 |         |
| act_fn         | factor | ReLU      |     0.0 |     5.0 | LeakyReLU            | None                  |         0.03 |         |
| optimizer      | factor | SGD       |     0.0 |     2.0 | Adadelta             | None                  |       100.00 | ***     |
| dropout_prob   | float  | 0.01      |     0.0 |   0.025 | 0.024993088686319435 | None                  |         3.33 | *       |
| lr_mult        | float  | 1.0       |     0.1 |    10.0 | 8.371227005925684    | None                  |         0.03 |         |
| patience       | int    | 2         |     2.0 |     3.0 | 2.0                  | transform_power_2_int |         0.03 |         |
| batch_norm     | factor | 0         |     0.0 |     1.0 | 1                    | None                  |         0.05 |         |
| initialization | factor | Default   |     0.0 |     4.0 | kaiming_uniform      | None                  |         0.03 |         |

A histogram can be used to visualize the most important hyperparameters.

spot_tuner.plot_importance(threshold=1.0)

spot_tuner.plot_important_hyperparameter_contour(max_imp=3)
l1:  0.03845630549202069
epochs:  0.04165086536655265
batch_size:  0.03410753435468594
act_fn:  0.027112116829413817
optimizer:  99.99999999999999
dropout_prob:  3.3253236039589176
lr_mult:  0.0331592318086391
patience:  0.027357481342284592
batch_norm:  0.04633038292868045
initialization:  0.0275381826780748

31.2.3 Get the Tuned Architecture

import pprint
from spotpython.hyperparameters.values import get_tuned_architecture
config = get_tuned_architecture(spot_tuner, fun_control)
pprint.pprint(config)
{'act_fn': LeakyReLU(),
 'batch_norm': True,
 'batch_size': 16,
 'dropout_prob': np.float64(0.024993088686319435),
 'epochs': 128,
 'initialization': 'kaiming_uniform',
 'l1': 8,
 'lr_mult': np.float64(8.371227005925684),
 'optimizer': 'Adadelta',
 'patience': 4}

31.2.4 Test on the Full Data Set

# Set the keys "TENSORBOARD_CLEAN" and "tensorboard_log" to True in the fun_control dictionary via its update() method
fun_control.update({"TENSORBOARD_CLEAN": True})
fun_control.update({"tensorboard_log": True})
from spotpython.light.testmodel import test_model
from spotpython.utils.init import get_feature_names

test_model(config, fun_control)
get_feature_names(fun_control)
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃        Test metric        ┃       DataLoader 0        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│         hp_metric         │     2834.123779296875     │
│         val_loss          │     2834.123779296875     │
└───────────────────────────┴───────────────────────────┘
test_model result: {'val_loss': 2834.123779296875, 'hp_metric': 2834.123779296875}
['age',
 'sex',
 'bmi',
 'bp',
 's1_tc',
 's2_ldl',
 's3_hdl',
 's4_tch',
 's5_ltg',
 's6_glu']

31.3 Cross Validation With Lightning

  • The KFold class from sklearn.model_selection is used to generate the folds for cross-validation.
  • This mechanism is used to generate the folds for the final evaluation of the model.
  • The CrossValidationDataModule class [SOURCE] is used to generate the folds for the hyperparameter tuning process.
  • It is called from the cv_model function [SOURCE].
config
{'l1': 8,
 'epochs': 128,
 'batch_size': 16,
 'act_fn': LeakyReLU(),
 'optimizer': 'Adadelta',
 'dropout_prob': np.float64(0.024993088686319435),
 'lr_mult': np.float64(8.371227005925684),
 'patience': 4,
 'batch_norm': True,
 'initialization': 'kaiming_uniform'}
from spotpython.light.cvmodel import cv_model
fun_control.update({"k_folds": 2})
fun_control.update({"test_size": 0.6})
cv_model(config, fun_control)
k: 0
train_model result: {'val_loss': 3373.724853515625, 'hp_metric': 3373.724853515625}
k: 1
train_model result: {'val_loss': 3305.375, 'hp_metric': 3305.375}
3339.5499267578125

The returned value is the mean of the validation losses over the k folds.

31.4 Extending the Basic Setup

This basic setup can be adapted to user-specific needs in many ways. For example, the user can specify a custom data set, a custom model, or a custom loss function. The following sections provide more details on how to customize the hyperparameter tuning process. Before we proceed, we will provide an overview of the basic settings of the hyperparameter tuning process and explain the parameters used so far.

31.4.1 General Experiment Setup

To keep track of the different experiments, we use a PREFIX for the experiment name. The PREFIX is used to create a unique experiment name and a unique TensorBoard folder, in which the TensorBoard log files are stored.

spotpython allows the specification of two different types of stopping criteria: first, the number of function evaluations (fun_evals), and second, the maximum run time in minutes (max_time). Here, we will set the number of function evaluations to infinity and the maximum run time to one minute.

max_time is set to one minute for demonstration purposes. For real experiments, this value should be increased. Note that the total run time may exceed the specified max_time, because the initial design is always evaluated, even if this takes longer than max_time.
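
The following sketch restates only the stopping-criteria part of the fun_control_init() call from Section 31.1. It uses the same arguments (PREFIX, fun_evals, max_time) and illustrates that setting fun_evals to infinity leaves max_time as the only stopping criterion; the remaining arguments are omitted here for brevity.

from math import inf
from spotpython.utils.init import fun_control_init

# Only the stopping criteria are shown; data_set, core_model_name, hyperdict,
# _L_in, and _L_out are set as in Section 31.1.
fun_control = fun_control_init(
    PREFIX="601",    # unique experiment name, also used for the TensorBoard folder
    fun_evals=inf,   # no budget on the number of function evaluations
    max_time=1)      # stop the tuning loop after approximately one minute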

31.4.2 Data Setup

Here, we have provided the Diabetes data set class, which is a subclass of torch.utils.data.Dataset. Data preprocessing is handled by Lightning and PyTorch. It is described in the LightningDataModule documentation.

The data splitting, i.e., the generation of training, validation, and testing data, is handled by Lightning.
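
To illustrate what a user-specific data set has to provide, the following minimal sketch defines a hypothetical random regression data set. The class name MyRegressionDataSet and the tensor shapes are assumptions for illustration only; any torch.utils.data.Dataset that returns (features, target) pairs can be passed to fun_control_init() via the data_set argument, just like the Diabetes class above.

import torch
from torch.utils.data import Dataset

class MyRegressionDataSet(Dataset):
    """Hypothetical regression data set with 10 input features and one target."""
    def __init__(self, n_samples: int = 100, n_features: int = 10):
        self.X = torch.rand(n_samples, n_features)
        self.y = torch.rand(n_samples)

    def __len__(self) -> int:
        return self.X.shape[0]

    def __getitem__(self, idx):
        # Return a (features, target) pair for sample idx.
        return self.X[idx], self.y[idx]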

31.4.3 Objective Function fun

The objective function fun from the class HyperLight [SOURCE] is selected next. It implements an interface from PyTorch’s training, validation, and testing methods to spotpython.
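
As a hedged sketch of how this interface can be exercised outside of the tuner, the snippet below evaluates fun on the default hyperparameter configuration. It assumes that the helper get_default_hyperparameters_as_array is available in spotpython.hyperparameters.values and that fun accepts a 2D array of encoded hyperparameter values together with fun_control; treat both as assumptions to be checked against the spotpython API.

from spotpython.hyperparameters.values import get_default_hyperparameters_as_array

# Assumption: X_start is a 2D array with one row per hyperparameter configuration.
X_start = get_default_hyperparameters_as_array(fun_control)
# Assumption: fun returns one validation-loss value per row of X_start.
y_start = fun(X_start, fun_control=fun_control)
print(y_start)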

31.4.4 Core-Model Setup

By using core_model_name = "light.regression.NNLinearRegressor", the spotpython model class NNLinearRegressor [SOURCE] from the light.regression module is selected.

31.4.5 Hyperdict Setup

For a given core_model_name, the corresponding hyperparameters are automatically loaded from the associated dictionary, which is stored as a JSON file. The JSON file contains hyperparameter type information, names, and bounds. For spotpython models, the hyperparameters are stored in the LightHyperDict, see [SOURCE]. Alternatively, you can load a local hyper_dict. The hyperdict provides the default hyperparameter settings, which can be modified as described in Section D.15.1.
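
As a small illustration of such a modification, the bounds of a single hyperparameter can be narrowed with set_hyperparameter and the effect inspected with gen_design_table, exactly as in Section 31.1. The hyperparameter chosen here (lr_mult) and its new bounds are arbitrary example values.

from spotpython.hyperparameters.values import set_hyperparameter
from spotpython.utils.eda import gen_design_table

# Narrow the bounds of the learning-rate multiplier (example values only)
# and print the design table to verify the change.
set_hyperparameter(fun_control, "lr_mult", [0.5, 2.0])
print(gen_design_table(fun_control))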

31.4.6 Other Settings

Several additional parameters can be specified. For example, since we did not specify a loss function, the default loss function, mean_squared_error, is used. These settings will be explained in more detail in the following sections.

31.5 TensorBoard

The textual output shown in the console (or code cell) can be visualized with TensorBoard if the argument tensorboard_log of fun_control_init() is set to True. The TensorBoard log files are stored in the runs folder. To start TensorBoard, run the following command in the terminal:

tensorboard --logdir="runs/"

Further information can be found in the PyTorch Lightning documentation for TensorBoard.
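
As noted above, TensorBoard logging is controlled by the tensorboard_log argument of fun_control_init(). The following minimal sketch shows only this logging-related argument; all other arguments are set as in Section 31.1.

from spotpython.utils.init import fun_control_init

# Enable TensorBoard logging; the log files are written to the runs folder.
fun_control = fun_control_init(
    PREFIX="601",
    tensorboard_log=True)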

31.6 Loading the Saved Experiment and Getting the Hyperparameters of the Tuned Model

To get the tuned hyperparameters as a dictionary, the get_experiment_from_PREFIX function can be used.

from spotpython.utils.file import get_experiment_from_PREFIX
config = get_experiment_from_PREFIX("601")["config"]
config
Loaded experiment from spot_601_experiment.pickle
{'l1': 8,
 'epochs': 128,
 'batch_size': 16,
 'act_fn': LeakyReLU(),
 'optimizer': 'Adadelta',
 'dropout_prob': np.float64(0.024993088686319435),
 'lr_mult': np.float64(8.371227005925684),
 'patience': 4,
 'batch_norm': True,
 'initialization': 'kaiming_uniform'}

31.7 Using the spotgui

The spotgui [github] provides a convenient way to interact with the hyperparameter tuning process. To obtain the settings from Section 31.1, the spotgui can be started as shown in Figure 31.1.

Figure 31.1: spotgui

31.8 Summary

This section presented an introduction to the basic setup of hyperparameter tuning with spotpython and PyTorch Lightning.