Which base class is used to derive all custom networks, layers or modules?
import torch
import torch.nn as nn
torch.nn.Module
How do you create a module in PyTorch, and which methods should be implemented for a neural network?
class ProductModule(nn.Module):
    def __init__(self, a: float):
        super().__init__()
        # initializes the architecture of our module;
        # do not forget to call the init method of the parent class
        self.a = a

    def forward(self, x: torch.Tensor):
        # represents the flow through the architecture;
        # this method is called when our module is applied
        # to input tensors
        return x * self.a
How can you use a Pytorch module?
product_module = ProductModule(a=2)
# create an instance of our module
input_tensor = torch.arange(5, dtype=torch.float32)
# create an input tensor
output_tensor = product_module(input_tensor)
# apply input tensor to the module, i.e. invoke forward method
How can you train the weights in a neural network?
self.weights = nn.Parameter(data=weight_tensor, requires_grad=True)
# reassigning self.weights with a plain tensor
# (instead of an nn.Parameter) would raise an exception
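A minimal sketch of training such a parameter (the module name ScaleModule, the data, and the learning rate are made up for illustration):
class ScaleModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(data=torch.ones(5), requires_grad=True)

    def forward(self, x: torch.Tensor):
        return self.weights * x

module = ScaleModule()
optimizer = torch.optim.SGD(module.parameters(), lr=0.01)
# parameters() automatically yields all registered nn.Parameter objects
loss = module(torch.arange(5, dtype=torch.float32)).sum()
loss.backward()
# computes gradients for all registered parameters
optimizer.step()
# updates self.weights using the gradients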
What are useful predefined PyTorch modules?
nn.Linear(in_features=5, out_features=2, bias=False)
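Other commonly used predefined modules include nn.Conv2d, nn.SELU, and nn.Sequential (the latter two appear further below). A quick illustration of nn.Linear (the input values are chosen arbitrarily):
linear = nn.Linear(in_features=5, out_features=2, bias=False)
linear.weight.shape
# torch.Size([2, 5]), registered as a trainable parameter
output = linear(torch.ones(5))
# computes x @ weight.T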
How can we nest different PyTorch modules?
class SNN(nn.Module):
    def __init__(self, n_input_features: int, n_hidden_units: int, n_output_features: int):
        super().__init__()
        self.layer_0 = nn.Linear(in_features=n_input_features, out_features=n_hidden_units)
        # first layer, an instance of nn.Linear, registered as a submodule
        self.layer_1 = nn.Linear(in_features=n_hidden_units, out_features=n_output_features)
        # reassigning a registered layer to a non-module object raises an exception

    def forward(self, x: torch.Tensor):
        x = self.layer_0(x)
        x = nn.functional.selu(x, inplace=True)
        # apply non-linearity
        output = self.layer_1(x)
        return output
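An instance of this module, as used in the following questions (the layer sizes are arbitrary):
snn = SNN(n_input_features=5, n_hidden_units=16, n_output_features=2)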
How can we send PyTorch modules to a device?
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# for large neural nets, a GPU is much faster than a CPU,
# since it is specialized in matrix operations
snn.to(device=device)
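The input tensors have to live on the same device as the module, e.g.:
input_tensor = input_tensor.to(device=device)
output_tensor = snn(input_tensor)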
How can we convert a PyTorch module to a different datatype?
snn.to(device=torch.device("cpu"), dtype=torch.float64)
# not all devices support float16
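The conversion can be verified on any registered parameter (assuming the snn instance from above):
next(snn.parameters()).dtype
# torch.float64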
How can we broadcast over the sample dimension in a PyTorch module?
input_tensor = torch.arange(4 * 5, device=device, dtype=torch.float32).reshape((4, 5))
# the first dimension holds the samples of the minibatch;
# the input is then fed as minibatches instead of one sample at a time
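Applying the module to such a minibatch produces one output row per sample, e.g. with the snn instance from above (assuming module and input share device and dtype):
output_tensor = snn(input_tensor)
output_tensor.shape
# torch.Size([4, 2]): 4 samples, 2 output features each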
How can you stack arbitrarily many layers in a neural network?
import numpy as np

hidden_layers = []
for _ in range(n_hidden_layers):
    layer = nn.Linear(in_features=n_input_features, out_features=n_hidden_units)
    torch.nn.init.normal_(layer.weight, 0, 1 / np.sqrt(layer.in_features))
    hidden_layers.append(layer)
    hidden_layers.append(nn.SELU())
    n_input_features = n_hidden_units
    # the next layer takes the hidden units as input
self.hidden_layers = nn.Sequential(*hidden_layers)
# the output layer is usually separated from the loop (see the sketch below)
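A sketch of how the separated output layer and the forward pass could look (assuming the variable names from the loop above; after the loop, n_input_features equals n_hidden_units):
# in __init__, after the loop:
self.output_layer = nn.Linear(in_features=n_input_features, out_features=n_output_features)

def forward(self, x: torch.Tensor):
    x = self.hidden_layers(x)
    # nn.Sequential applies all stacked layers in order
    return self.output_layer(x)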
What are typical hyperparameters for a CNN?
The kernel size
The number of kernels
The stride
The number of CNN layers
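These hyperparameters map directly onto the arguments of nn.Conv2d (the values here are arbitrary examples):
conv = nn.Conv2d(
    in_channels=3,    # determined by the input, e.g. RGB images
    out_channels=16,  # the number of kernels
    kernel_size=5,    # the kernel size
    stride=2,         # the stride
)
# the number of CNN layers is how many such modules are stacked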
How can you make sure that the size of the vectors does not change during a CNN procedure?
With padding: usually zero padding is applied, with kernel_size // 2 values symmetrically added on each side (exact for odd kernel sizes)
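A small sketch of size-preserving zero padding (an odd kernel size is assumed, since kernel_size // 2 only works out exactly then):
kernel_size = 5
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=kernel_size, padding=kernel_size // 2)
images = torch.rand(size=(1, 3, 32, 32))
# minibatch with one 32x32 RGB image
conv(images).shape
# torch.Size([1, 16, 32, 32]), the spatial size is unchanged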
What does the __init__() method of a PyTorch nn.Module define?
The architecture of a neural net
What are advantages of the torch.nn module?
It can be used to create neural nets in Python
Trainable parameters are automatically registered
Other PyTorch modules are automatically registered
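The automatic registration can be inspected on any module instance, e.g. the snn instance from above:
for name, parameter in snn.named_parameters():
    print(name, parameter.shape)
    # parameters of nested submodules are found automatically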
What does the forward() method of a PyTorch nn.Module do?
Specifies how a PyTorch module is applied (the "flow" through the module architecture)