Test View

After a model has been trained, you are given the option to test it. This takes you to PerceptiLabs' Test View screen, where you can run tests on one or more trained models:

The following are the main components of the Test View:

  1. Test dataset: indicates the dataset partition used for testing (see Test dataset under Running a Test below).

  2. Selected model(s): allows you to select the model(s) to test.

  3. Select test: provides the following outputs to display when the test completes. Note that their availability will depend on the type of Deep Learning Component(s) in the selected model(s):

    1. Confusion Matrix: displays an interactive confusion matrix for the label predictions.

    2. Classification Metrics: displays the following metrics:

      1. Categorical accuracy: the per-category accuracy, averaged over all categories.

      2. Top K Categorical Accuracy: frequency of the correct category among the top K predicted categories.

      3. Precision: accuracy of positive predictions.

      4. Recall: the percentage of actual positives that were found (i.e., not misclassified as negatives).

    3. Segmentation Metrics: displays segmentation metrics (described under Types of Tests below).

    4. Output Visualization: displays a visualization of the output.

  4. Run test: starts the test(s) on the selected model(s).

  5. Test screen tab: the tab from which you can access the Test View screen. You can return to this screen at any time to run tests on trained models.

  6. New Test: click to configure and run a new test.

  7. Test results area: displays the various test results.

  8. Screenshot (available for some test results): downloads a screenshot of the test result.

Running a Test

When you create a new test, the Test Configuration popup appears with the following options:

  • Test dataset: set to Use Partitioned Dataset.

  • Selected model(s): dropdown allowing you to select the trained model(s) to test.

  • Select tests: allows you to select one or more tests to perform. The tests available depend on the type(s) of model(s) selected for testing (e.g., classification models will have different tests available versus segmentation models).

After you have completed the configuration, click Run Test to start the configured test(s).

Types of Tests

Depending on the type(s) of models tested and the options selected in the test configuration, the following test results may be available in the Test View.

Confusion Matrix

The Confusion Matrix lets you see, at a glance, how predicted classifications compare to actual classifications, on a per-class level:

Each cell's color corresponds to the number of samples with that combination of actual and predicted class. For example, if one class has many samples and another has few, your test dataset is unbalanced. Samples that fall off the diagonal were misclassified as another class, making it easy to see whether any one class is classified better than the others.
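PerceptiLabs builds this matrix for you, but the following minimal sketch may help make the idea concrete. It tallies a confusion matrix from actual and predicted labels using plain NumPy; the label data is purely illustrative:

```python
import numpy as np

def confusion_matrix(actual, predicted, num_classes):
    """Tally how often each actual class was predicted as each class."""
    m = np.zeros((num_classes, num_classes), dtype=int)
    for a, p in zip(actual, predicted):
        m[a, p] += 1  # rows: actual class, columns: predicted class
    return m

# Illustrative labels for a 3-class problem
actual    = [0, 0, 1, 1, 2, 2, 2]
predicted = [0, 1, 1, 1, 2, 0, 2]
print(confusion_matrix(actual, predicted, num_classes=3))
# Diagonal entries are correct classifications; off-diagonal
# entries show which classes are confused with which.
```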

Classification Metrics

The Classification Metrics pane (also known as the Label Metrics Table) displays the following information for classification models:

  • Model Name: the name of the model for which the metrics apply. Multiple models will be listed if tests were performed on more than one model.

  • Categorical Accuracy: the per-category accuracy, averaged over all categories.

  • Top K Categorical Accuracy: frequency of the correct category among the top K predicted categories.

  • Precision: accuracy of positive predictions.

  • Recall: the percentage of actual positives that were found (i.e., not misclassified as negatives).

Tip: Precision and Recall are often more telling than accuracy, especially for unbalanced datasets, and generally you want both to be as high as possible. Together they show whether the model finds as many samples of each class as possible (recall) while classifying them correctly as often as possible (precision).
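PerceptiLabs computes these metrics for you; the sketch below is only meant to make the definitions concrete. It uses plain NumPy, macro-averages precision and recall over classes, and assumes hard label predictions (plus a per-class score matrix for Top K). All names and sample data are illustrative:

```python
import numpy as np

def classification_metrics(actual, predicted, num_classes):
    """Categorical accuracy plus macro-averaged precision and recall."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    accuracy = np.mean(actual == predicted)            # categorical accuracy
    precision, recall = [], []
    for c in range(num_classes):
        tp = np.sum((predicted == c) & (actual == c))  # true positives
        fp = np.sum((predicted == c) & (actual != c))  # false positives
        fn = np.sum((predicted != c) & (actual == c))  # false negatives
        precision.append(tp / (tp + fp) if tp + fp else 0.0)
        recall.append(tp / (tp + fn) if tp + fn else 0.0)
    return accuracy, np.mean(precision), np.mean(recall)

def top_k_accuracy(scores, actual, k=2):
    """Fraction of samples whose true class is among the k highest-scoring classes."""
    top_k = np.argsort(scores, axis=1)[:, -k:]
    return np.mean([a in row for a, row in zip(actual, top_k)])

acc, prec, rec = classification_metrics(
    actual=[0, 0, 1, 1, 2, 2, 2],
    predicted=[0, 1, 1, 1, 2, 0, 2],
    num_classes=3,
)
print(f"accuracy={acc:.2f}  precision={prec:.2f}  recall={rec:.2f}")

scores = np.array([[0.7, 0.2, 0.1],   # per-class scores for 3 samples
                   [0.1, 0.5, 0.4],
                   [0.2, 0.3, 0.5]])
print(f"top-2 accuracy={top_k_accuracy(scores, actual=[1, 2, 0], k=2):.2f}")
```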

Segmentation Metrics

The Segmentation Metrics pane displays information for image segmentation models:

  • Model Name: the name of the model for which the metrics apply. Multiple models will be listed if tests were performed on more than one model.

  • Intersection over Union (IoU): the overlap between the predicted and actual segmentation masks divided by their union. Ranges from 0 to 1, where 1 indicates a perfect match.

  • Dice coefficient: twice the overlap between the predicted and actual masks divided by the total size of both masks. Also ranges from 0 to 1, where 1 indicates a perfect match. A minimal sketch of both metrics follows this list.
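The following sketch computes both metrics for a pair of binary masks. These are the standard definitions rather than anything PerceptiLabs-specific, and the mask data is illustrative (note that Dice = 2·IoU / (1 + IoU)):

```python
import numpy as np

def iou_and_dice(pred_mask, target_mask):
    """Standard IoU and Dice coefficient for two binary masks."""
    pred, target = pred_mask.astype(bool), target_mask.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return iou, dice

# Illustrative 4x4 masks: prediction covers 4 pixels, target covers 6
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(iou_and_dice(pred, target))  # IoU = 4/6 ≈ 0.67, Dice = 8/10 = 0.8
```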

Output Visualization

The Output Visualization pane displays visualizations of the input data and the final transformed target data:

You can hover the mouse over this pane to display < > buttons and a scrollbar for scrolling through each data sample.
