Evaluate View

After you've trained your model, you will be given the option to test it using the Evaluate View. Alternatively, you can navigate directly to the Evaluate View at any time to run tests on trained models.

Running a test performs inference using the data you allocated to the test partition via the Data Settings (i.e., data that the model did not see during training and validation).
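
To make this concrete, the sketch below shows what such a test amounts to in plain TensorFlow/Keras: inference plus metric computation on the held-out test split. The model, data, and shapes here are hypothetical placeholders for illustration, not PerceptiLabs internals.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for a trained model: a tiny classifier
# taking 8-feature inputs and predicting 3 classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", metrics=["accuracy"])

# Placeholder test partition: samples the model never saw during
# training or validation (random data here, purely for illustration).
x_test = np.random.rand(16, 8).astype("float32")
y_test = tf.keras.utils.to_categorical(np.random.randint(0, 3, 16), 3)

# "Running a test" amounts to inference plus metric computation
# on that held-out split.
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"test loss: {loss:.4f}, test accuracy: {accuracy:.4f}")
```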

PerceptiLabs' Evaluate View allows you to run tests on one or more trained models. Its main components are the following:

  1. New Test: click to configure and run a new test.

  2. Labels Classification Metrics Table (shown for classification models): displays the following metrics for the label predictions (a computation sketch follows this list):

    1. Categorical accuracy: the prediction accuracy for each category, averaged across all categories.

    2. Precision: the accuracy of positive predictions (i.e., the fraction of predicted positives that are actually positive).

    3. Recall: the percentage of actual positives the model finds (i.e., positives not misclassified as negatives).

    4. Top K Categorical Accuracy: the frequency with which the correct category appears among the top K predicted categories.

  3. Confusion Matrix (shown for classification models): displays an interactive confusion matrix for the label predictions (a minimal construction sketch follows the note below).
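
For reference, the four metrics in the table above can be reproduced with standard TensorFlow/Keras metric classes, as sketched below. The labels and predictions are hypothetical placeholders; PerceptiLabs computes these values for you from the test partition.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins: one-hot ground-truth labels and predicted
# probabilities for 4 test samples across 3 classes.
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype="float32")
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.3, 0.3, 0.4],
                   [0.4, 0.5, 0.1]], dtype="float32")

# The same metric families shown in the metrics table (K=2 here).
metrics = {
    "categorical accuracy": tf.keras.metrics.CategoricalAccuracy(),
    "precision": tf.keras.metrics.Precision(),
    "recall": tf.keras.metrics.Recall(),
    "top-2 categorical accuracy": tf.keras.metrics.TopKCategoricalAccuracy(k=2),
}
for name, metric in metrics.items():
    metric.update_state(y_true, y_pred)
    print(f"{name}: {metric.result().numpy():.4f}")
```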

Note: the classification metrics table and confusion matrix are shown only for classification models. Other statistics may be shown for other model types. See Types of Tests for more information.
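
A confusion matrix like the one the Evaluate View displays can be constructed from class predictions as in the following sketch. The class indices are hypothetical placeholders (e.g., obtained via argmax over one-hot labels and predicted probabilities).

```python
import numpy as np
import tensorflow as tf

# Hypothetical true and predicted class indices for four test samples
# across three classes.
true_classes = np.array([0, 1, 2, 0])
predicted_classes = np.array([0, 1, 1, 0])

# Rows correspond to true labels, columns to predicted labels.
cm = tf.math.confusion_matrix(true_classes, predicted_classes, num_classes=3)
print(cm.numpy())
```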
