# nn_many_to_many_lstm_regressor

## ManyToManyLSTM

Bases: `Module`
A Many-to-Many LSTM model for sequence-to-sequence regression tasks.
This model uses an LSTM layer followed by a fully connected layer and an output layer.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_size` | `int` | The number of input features. | *required* |
| `output_size` | `int` | The number of output features. Defaults to 1. | `1` |
| `lstm_units` | `int` | The number of units in the LSTM layer. Defaults to 256. | `256` |
| `fc_units` | `int` | The number of units in the fully connected layer. Defaults to 256. | `256` |
| `activation_fct` | `Module` | The activation function to use after the fully connected layer. Defaults to nn.ReLU(). | `ReLU()` |
| `dropout` | `float` | The dropout probability. Defaults to 0.0. | `0.0` |
| `bidirectional` | `bool` | Whether the LSTM is bidirectional. Defaults to True. | `True` |
| `num_layers` | `int` | The number of LSTM layers. Defaults to 1. | `1` |
Examples:

```python
>>> from spotpython.light.regression.nn_many_to_many_lstm_regressor import ManyToManyLSTM
>>> import torch
>>> model = ManyToManyLSTM(input_size=10, output_size=1)
>>> x = torch.randn(16, 10, 10)  # Batch of 16 sequences, each of length 10 with 10 features
>>> lengths = torch.tensor([10] * 16)  # All sequences have length 10
>>> output = model(x, lengths)
>>> print(output.shape)  # Output shape: (16, 10, 1)
```
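The `lengths` argument exists to support batches of variable-length sequences. The sketch below is an illustration (not taken from the spotpython documentation or tests): it pads two sequences of different lengths with `torch.nn.utils.rnn.pad_sequence` and passes the true lengths alongside the padded batch; outputs at padded time steps should be treated as filler.

```python
# Hypothetical usage sketch: padding variable-length sequences before calling the model.
import torch
from torch.nn.utils.rnn import pad_sequence
from spotpython.light.regression.nn_many_to_many_lstm_regressor import ManyToManyLSTM

model = ManyToManyLSTM(input_size=10, output_size=1)

# Two sequences with different lengths (10 and 7 time steps), 10 features each.
seqs = [torch.randn(10, 10), torch.randn(7, 10)]
lengths = torch.tensor([10, 7])

# Pad to a common length so the batch is a single (batch, seq_len, features) tensor.
x = pad_sequence(seqs, batch_first=True)  # shape: (2, 10, 10)

output = model(x, lengths)
print(output.shape)  # expected: (2, 10, 1); entries past each true length are padding
```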
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py
### forward(x, lengths)
Forward pass of the ManyToManyLSTM model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input tensor of shape (batch_size, seq_len, input_size). | *required* |
| `lengths` | `Tensor` | Tensor containing the lengths of each sequence in the batch. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | torch.Tensor: Output tensor of shape (batch_size, seq_len, output_size). |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If the input tensor is empty or if the lengths tensor is empty. |
| `RuntimeError` | If the lengths tensor does not match the batch size of the input tensor. |
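The source is not reproduced here, but lengths-aware LSTM forward passes of this kind usually pack the padded batch before the recurrent layer and unpack it afterwards. The following is a generic sketch of that pattern, shown only for orientation; it is not the verbatim implementation of `ManyToManyLSTM.forward`.

```python
# Illustrative only: the common pattern for a lengths-aware, per-time-step LSTM head.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = nn.LSTM(input_size=10, hidden_size=256, batch_first=True, bidirectional=True)
head = nn.Linear(2 * 256, 1)  # 2 * hidden size because the LSTM is bidirectional

x = torch.randn(4, 12, 10)             # (batch_size, seq_len, input_size)
lengths = torch.tensor([12, 9, 7, 5])  # true length of each sequence

# Pack so the LSTM skips padded time steps, then unpack back to (batch, seq_len, features).
packed = pack_padded_sequence(x, lengths.cpu(), batch_first=True, enforce_sorted=False)
packed_out, _ = lstm(packed)
out, _ = pad_packed_sequence(packed_out, batch_first=True, total_length=x.size(1))

y = head(out)      # per-time-step prediction
print(y.shape)     # torch.Size([4, 12, 1])
```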
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py
## ManyToManyLSTMRegressor

Bases: `LightningModule`
A LightningModule for training and evaluating a Many-to-Many LSTM regressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `_L_in` | `int` | The number of input features. | *required* |
| `_L_out` | `int` | The number of output features. | *required* |
| `l1` | `int` | Unused parameter. Defaults to 8. | `8` |
| `lstm_units` | `int` | The number of units in the LSTM layer. Defaults to 128. | `128` |
| `fc_units` | `int` | The number of units in the fully connected layer. Defaults to 128. | `128` |
| `act_fn` | `Module` | The activation function to use after the fully connected layer. Defaults to nn.ReLU(). | `ReLU()` |
| `dropout_prob` | `float` | The dropout probability. Defaults to 0.2. | `0.2` |
| `bidirectional` | `bool` | Whether the LSTM is bidirectional. Defaults to True. | `True` |
| `num_layers` | `int` | The number of LSTM layers. Defaults to 2. | `2` |
| `optimizer` | `str` | The optimizer to use. Defaults to "Adam". | `'Adam'` |
| `lr_mult` | `float` | Learning rate multiplier. Defaults to 1.0. | `1.0` |
| `patience` | `int` | Patience for the learning rate scheduler. Defaults to 5. | `5` |
| `epochs` | `int` | Number of training epochs. Defaults to 100. | `100` |
| `batch_size` | `int` | Batch size for training. Defaults to 32. | `32` |
| `_torchmetric` | `str` | The metric to use for evaluation. Defaults to "mean_squared_error". | `'mean_squared_error'` |
Examples:

```python
>>> from spotpython.light.regression.nn_many_to_many_lstm_regressor import ManyToManyLSTMRegressor
>>> import torch
>>> model = ManyToManyLSTMRegressor(_L_in=10, _L_out=1)
>>> x = torch.randn(16, 10, 10)  # Batch of 16 sequences, each of length 10 with 10 features
>>> lengths = torch.tensor([10] * 16)  # All sequences have length 10
>>> output = model(x, lengths)
>>> print(output.shape)  # Output shape: (16, 10, 1)
```
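The step methods documented below expect batches of the form `(x, lengths, y)`. The following end-to-end sketch trains the regressor on synthetic, fixed-length data with a Lightning `Trainer`; the dataset, the trainer settings, and the assumption of the `lightning` (>= 2.0) package are illustrative and not part of the module.

```python
# Illustrative training sketch with synthetic data (not from the spotpython test suite).
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning as L
from spotpython.light.regression.nn_many_to_many_lstm_regressor import ManyToManyLSTMRegressor

# Synthetic many-to-many data: 128 sequences, 10 time steps, 10 features, 1 target per step.
x = torch.randn(128, 10, 10)
lengths = torch.full((128,), 10, dtype=torch.long)
y = torch.randn(128, 10, 1)

# training_step/validation_step expect batches of the form (x, lengths, y).
loader = DataLoader(TensorDataset(x, lengths, y), batch_size=32, shuffle=True)

model = ManyToManyLSTMRegressor(_L_in=10, _L_out=1)
trainer = L.Trainer(max_epochs=2, accelerator="cpu", logger=False, enable_checkpointing=False)
trainer.fit(model, train_dataloaders=loader)
```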
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py
### configure_optimizers()
Configures the optimizer and learning rate scheduler.
Returns:

| Name | Type | Description |
|---|---|---|
| `dict` | `dict` | A dictionary containing the optimizer and learning rate scheduler configuration. |
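The exact contents of that dictionary are defined in the source file. As a rough, non-authoritative orientation, a Lightning optimizer/scheduler configuration built from the `optimizer`, `lr_mult`, and `patience` hyperparameters typically has this shape:

```python
# Rough sketch of a Lightning optimizer/scheduler configuration of this kind.
# It is NOT the verbatim body of ManyToManyLSTMRegressor.configure_optimizers.
import torch

def configure_optimizers_sketch(model, lr_mult=1.0, patience=5):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr_mult * 1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", patience=patience
    )
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
    }
```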
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py
### forward(x, lengths)
Forward pass of the ManyToManyLSTMRegressor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | Input tensor of shape (batch_size, seq_len, input_size). | *required* |
| `lengths` | `Tensor` | Tensor containing the lengths of each sequence in the batch. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | torch.Tensor: Output tensor of shape (batch_size, seq_len, output_size). |
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py
### test_step(batch, batch_idx, prog_bar=False)
Performs a single test step.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `tuple` | A tuple containing (x, lengths, y). | *required* |
| `batch_idx` | `int` | The index of the batch. | *required* |
| `prog_bar` | `bool` | Whether to log the loss to the progress bar. Defaults to False. | `False` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | torch.Tensor: The test loss. |
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py
### training_step(batch, batch_idx)
Performs a single training step.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `tuple` | A tuple containing (x, lengths, y). | *required* |
| `batch_idx` | `int` | The index of the batch. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | torch.Tensor: The training loss. |
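Since every batch is expected as `(x, lengths, y)`, a dataloader over variable-length sequences needs a collate function that pads inputs and targets and records the true lengths. A minimal sketch follows; the function name and padding strategy are assumptions, not part of spotpython.

```python
# Hypothetical collate function producing (x, lengths, y) batches for variable-length data.
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate_many_to_many(samples):
    # Each sample is a (sequence, target) pair of shapes (seq_len, n_features) and (seq_len, n_targets).
    seqs, targets = zip(*samples)
    lengths = torch.tensor([s.size(0) for s in seqs], dtype=torch.long)
    x = pad_sequence(seqs, batch_first=True)     # (batch, max_seq_len, n_features)
    y = pad_sequence(targets, batch_first=True)  # (batch, max_seq_len, n_targets)
    return x, lengths, y

# Example: three sequences of different lengths.
data = [(torch.randn(n, 10), torch.randn(n, 1)) for n in (5, 8, 12)]
loader = DataLoader(data, batch_size=3, collate_fn=collate_many_to_many)
x, lengths, y = next(iter(loader))
print(x.shape, lengths.tolist(), y.shape)  # torch.Size([3, 12, 10]) [5, 8, 12] torch.Size([3, 12, 1])
```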
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py
### validation_step(batch, batch_idx, prog_bar=False)
Performs a single validation step.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch` | `tuple` | A tuple containing (x, lengths, y). | *required* |
| `batch_idx` | `int` | The index of the batch. | *required* |
| `prog_bar` | `bool` | Whether to log the loss to the progress bar. Defaults to False. | `False` |

Returns:

| Type | Description |
|---|---|
| `Tensor` | torch.Tensor: The validation loss. |
Source code in spotpython/light/regression/nn_many_to_many_lstm_regressor.py