Model Training Settings
The Model Training Settings popup is where you define the settings used when training your model in PerceptiLabs. This popup is displayed when you click Run to train your model:
The main elements of this screen are as follows:
Batch size: the number of samples the algorithm trains on at a time before updating the weights in the model. Higher values can speed up training and may help your model generalize better. However, values that are too high may prevent your model from learning the data.
Shuffle: randomizes the order in which the training data is presented to the model, making the model more robust.
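As a rough sketch of how batch size and shuffling interact (illustrative only, not PerceptiLabs' internal implementation), each epoch the sample order is randomized and the weights are updated once per batch:

```python
import numpy as np

# Illustrative only: shuffle the sample order each epoch, then update
# the model's weights once per batch of `batch_size` samples.
rng = np.random.default_rng(seed=0)
num_samples, batch_size = 1000, 32

indices = np.arange(num_samples)
rng.shuffle(indices)  # "Shuffle" setting: randomize sample order

num_batches = int(np.ceil(num_samples / batch_size))
for step in range(num_batches):
    batch = indices[step * batch_size:(step + 1) * batch_size]
    # ...forward pass, compute loss, one weight update per batch...

print(num_batches)  # → 32
```

With 1,000 samples and a batch size of 32, the model's weights are updated 32 times per epoch (the last batch is smaller).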
The following loss functions are available in the Training Settings popup:
Quadratic: also known as mean squared error (MSE), this loss is often used for regression tasks, where the loss is the mean squared difference between the predicted value(s) and the label(s).
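The quadratic (MSE) loss can be sketched in a few lines of NumPy (the `mse` helper name is our own):

```python
import numpy as np

# Quadratic (mean squared error) loss: the mean of the squared
# differences between predictions and labels.
def mse(predictions, labels):
    predictions = np.asarray(predictions, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return float(np.mean((predictions - labels) ** 2))

print(round(mse([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]), 4))  # → 0.1667
```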
Dice: Dice loss is often used for binary segmentation tasks, where it measures how much the items/objects in the image overlap. Compared to pixel accuracy, it is robust to class imbalance in cases where the objects you are trying to segment are over- or under-represented compared to the background. For this exact reason, PerceptiLabs automatically ignores the background channel when using the Dice loss.
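A soft Dice loss for binary masks can be sketched as follows (an illustrative formulation, not necessarily PerceptiLabs' exact one):

```python
import numpy as np

# Soft Dice loss for a binary mask: dice = 2|A∩B| / (|A| + |B|),
# and the loss is 1 - dice. `eps` guards against division by zero.
def dice_loss(pred, target, eps=1e-7):
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# Perfect overlap gives a loss of 0; no overlap gives a loss near 1.
print(round(dice_loss([1, 1, 0, 0], [1, 1, 0, 0]), 4))  # → 0.0
```

Because the loss depends only on the overlap between the predicted and target objects, the score is unaffected by how many background pixels surround them.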
Epochs: sets the number of epochs to perform. One epoch corresponds to one full pass through the entire dataset. The higher the number, the better the model will learn your training data. Note that training for too long may overfit your model to your training data.
Loss: specifies which loss function to apply.
Learning rate: sets the learning rate for the optimizer algorithm. The value must be between 0 and 1 (default is 0.001). The higher the value, the quicker your model will learn. However, if the value is too high, training can skip over good local minima; if it is too low, training can get stuck in a poor local minimum.
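The effect of the learning rate can be illustrated with a toy one-dimensional gradient descent (a hypothetical example, not PerceptiLabs code):

```python
# Toy 1-D gradient descent on f(x) = (x - 3)^2, whose gradient is 2(x - 3).
# The learning rate scales each update; too high a rate overshoots the minimum.
def descend(lr, steps=100, x=0.0):
    for _ in range(steps):
        x -= lr * 2.0 * (x - 3.0)
    return x

print(round(descend(0.1), 3))  # → 3.0 (converges to the minimum at x = 3)
# descend(1.1) oscillates with growing magnitude instead of converging.
```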
Save checkpoint every epoch: when enabled, saves a model checkpoint at the end of every epoch.
Optimizer: specifies which optimizer algorithm to use for the model. The optimizer continually adjusts the weights and biases during training until it reaches its goal of finding the optimal values for the model to make accurate predictions. Optimizers available in PerceptiLabs' Training components include Stochastic Gradient Descent (SGD) and Adam, among others.
Beta1: optimizer-specific parameter. See your chosen optimizer's documentation for its definition.
Beta2: optimizer-specific parameter. See your chosen optimizer's documentation for its definition.
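For reference, a single Adam update step shows where Beta1 and Beta2 fit (a textbook sketch; framework implementations may differ in details):

```python
import numpy as np

# Textbook Adam update: Beta1 decays the running mean of gradients,
# Beta2 the running mean of squared gradients (commonly 0.9 and 0.999).
def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Usage: minimize f(x) = x^2 (gradient 2x), starting from x = 5.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    x, m, v = adam_step(x, 2.0 * x, m, v, t, lr=0.1)
# x is now close to the minimum at 0
```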
Run model: starts training the model and displays a statistics view where you can see how training is progressing.
Cross entropy: cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label. For example, predicting a probability of 0.012 when the actual label is 1 would be bad and result in a high loss value. A perfect model would have a log loss of 0.
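The example above can be checked numerically with a minimal binary cross-entropy sketch (the `binary_cross_entropy` helper name is our own):

```python
import numpy as np

# Binary cross-entropy (log loss) for a single prediction `p` in (0, 1).
def binary_cross_entropy(p, label, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)  # clip to avoid log(0)
    return float(-(label * np.log(p) + (1 - label) * np.log(1 - p)))

print(round(binary_cross_entropy(0.012, 1), 3))  # → 4.423 (confidently wrong: high loss)
print(round(binary_cross_entropy(0.99, 1), 3))   # → 0.01 (confidently right: low loss)
```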