Evaluate View
After you've trained your model, you're given the option to test it using the Evaluate screen. Alternatively, you can navigate directly to the Evaluate screen at any time to run tests on trained models.
Running a test performs inference on the model using the data you allocated to the test partition in the Data Settings (i.e., data the model never saw during training and validation).
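PerceptiLabs is built on TensorFlow, so conceptually a test run resembles evaluating a Keras model on held-out data. Below is a minimal, self-contained sketch using a toy model and randomly generated "test partition" data; the names and values are illustrative only and are not PerceptiLabs' internal API:

```python
import numpy as np
import tensorflow as tf

# A toy classifier standing in for a trained PerceptiLabs model.
num_classes = 3
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# Made-up held-out data standing in for the test partition.
x_test = np.random.rand(100, 8).astype("float32")
y_test = tf.keras.utils.to_categorical(
    np.random.randint(num_classes, size=100), num_classes)

# Inference on data the model never saw during training/validation.
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print(f"test loss={loss:.3f}, categorical accuracy={acc:.3f}")
```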
PerceptiLabs' Evaluate View allows you to run tests on one or more trained models. Its main components are the following:
New Test: click to configure and run a new test.
Labels Classification Metrics Table (shown for classification models): displays the following metrics (see the sketch after this list):
Categorical accuracy: the accuracy for each category, averaged across all categories.
Precision: accuracy of positive predictions.
Recall: the percentage of actual positives that were found (i.e., not misclassified as negatives).
Top K Categorical Accuracy: frequency of the correct category among the top K predicted categories.
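These metrics correspond to standard tf.keras.metrics classes. The sketch below computes them on made-up one-hot labels and predicted probabilities; all values are hypothetical and serve only to show how each metric is evaluated:

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: y_true holds one-hot labels,
# y_pred holds predicted class probabilities.
y_true = tf.keras.utils.to_categorical([0, 1, 2, 1], num_classes=3)
y_pred = np.array([[0.9, 0.05, 0.05],
                   [0.2, 0.7,  0.1],
                   [0.3, 0.3,  0.4],
                   [0.6, 0.3,  0.1]], dtype="float32")

metrics = {
    "categorical accuracy": tf.keras.metrics.CategoricalAccuracy(),
    "precision": tf.keras.metrics.Precision(),  # accuracy of positive predictions
    "recall": tf.keras.metrics.Recall(),        # fraction of actual positives found
    "top-2 categorical accuracy": tf.keras.metrics.TopKCategoricalAccuracy(k=2),
}
for name, metric in metrics.items():
    metric.update_state(y_true, y_pred)
    print(f"{name}: {metric.result().numpy():.3f}")
```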
Confusion Matrix (shown for classification models): displays an interactive confusion matrix for the label predictions (a minimal sketch of the underlying computation follows).
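The matrix itself is the standard confusion matrix over true versus predicted labels. A minimal sketch with made-up integer class labels, using tf.math.confusion_matrix:

```python
import tensorflow as tf

# Hypothetical true labels and model predictions for a 3-class problem.
labels      = [0, 1, 2, 1, 0, 2, 2]
predictions = [0, 2, 2, 1, 0, 1, 2]

# Rows are true labels, columns are predicted labels;
# off-diagonal entries count misclassifications.
cm = tf.math.confusion_matrix(labels, predictions, num_classes=3)
print(cm.numpy())
```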
Note: the classification metrics table and confusion matrix are shown for classification models. Other statistics may be shown for other models. See Types of Tests for more information.