zamba.pytorch.utils¶
`build_multilayer_perceptron(input_size, hidden_layer_sizes, output_size, activation=torch.nn.ReLU, dropout=None, output_dropout=None, output_activation=None)`

Builds a multilayer perceptron.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`input_size` | int | Size of the first input layer. | required |
`hidden_layer_sizes` | tuple of int | Sizes of the hidden layers, in order. | required |
`output_size` | int | Size of the final output layer. | required |
`activation` | Module | Activation layer inserted between each pair of linear layers. | `ReLU` |
`dropout` | float | If provided, insert dropout layers with the given dropout rate between each pair of linear layers. | `None` |
`output_dropout` | float | If provided, insert a dropout layer with the given dropout rate before the output layer. | `None` |
`output_activation` | Module | Activation layer applied after the final layer. | `None` |
Returns: `torch.nn.Sequential`
Source code in zamba/pytorch/utils.py
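To make the layer ordering concrete, the sketch below assembles the sequence of layers that the documented parameters suggest: a linear layer followed by the activation (and optional dropout) for each hidden size, an optional dropout before the output, then the final linear layer and optional output activation. This is an assumption based on the parameter descriptions above, not the actual source; the helper name `mlp_layer_plan` and the string representation of layers are purely illustrative. Each string corresponds to a `torch.nn` module (`Linear`, `ReLU`, `Dropout`) that the real function would place into a `torch.nn.Sequential`.

```python
def mlp_layer_plan(input_size, hidden_layer_sizes, output_size,
                   activation="ReLU", dropout=None,
                   output_dropout=None, output_activation=None):
    """Illustrative sketch (not the actual zamba source) of the layer
    order build_multilayer_perceptron plausibly produces."""
    layers = []
    in_features = input_size
    for out_features in hidden_layer_sizes:
        # Linear layer followed by the activation between each pair of layers.
        layers.append(f"Linear({in_features}, {out_features})")
        layers.append(f"{activation}()")
        if dropout is not None:
            # Dropout with the given rate between each pair of layers.
            layers.append(f"Dropout({dropout})")
        in_features = out_features
    if output_dropout is not None:
        # Dropout inserted just before the output layer.
        layers.append(f"Dropout({output_dropout})")
    layers.append(f"Linear({in_features}, {output_size})")
    if output_activation is not None:
        # Activation applied after the final layer.
        layers.append(f"{output_activation}()")
    return layers


print(mlp_layer_plan(64, (32, 16), 8, dropout=0.2))
```

For example, `mlp_layer_plan(64, (32, 16), 8, dropout=0.2)` yields `Linear(64, 32)`, `ReLU()`, `Dropout(0.2)`, `Linear(32, 16)`, `ReLU()`, `Dropout(0.2)`, `Linear(16, 8)`; the real function would return the corresponding modules wrapped in a `torch.nn.Sequential`.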