Test View
After the model has been trained, you will be given the option to test it. This navigates you to PerceptiLabs' Test View screen, where you can run tests on one or more trained models:
The following are the main components of the Test View:
Test dataset: displays the dataset used for testing (the test partition of the model's dataset).
Selected model(s): allows you to select the model(s) to test.
Select test: provides the following outputs to display when the test completes. Note that their availability will depend on the type of Deep Learning Component(s) in the selected model(s):
Confusion Matrix: displays an interactive confusion matrix for the label predictions.
Classification Metrics: displays the following metrics:
Categorical Accuracy: accuracy computed for each category and averaged over all categories.
Top K Categorical Accuracy: frequency of the correct category among the top K predicted categories.
Precision: accuracy of positive predictions.
Recall: percentage of actual positives found (i.e., not misclassified as negatives).
Segmentation Metrics: displays segmentation metrics.
Output Visualization: displays a visualization of the output.
Run test: starts the test(s) on the selected model(s).
Test screen tab: tab that provides access to the Test View screen. You can return to this screen at any time to run tests on trained models.
New Test: click to configure and run a new test.
Test results area: displays the various test results.
Screenshot (available for some test results): downloads a screenshot of the test result.
When you create a new test, the Test Configuration popup appears with the following options:
Test dataset: set to Use Partitioned Dataset.
Selected model(s): dropdown allowing you to select the trained model(s) to test.
Select tests: allows you to select one or more tests to perform. The tests available depend on the type(s) of model(s) selected for testing (e.g., classification models will have different tests available versus segmentation models).
After you have completed the configuration, click Run Test to start the configured test(s).
Depending on the type(s) of models and the options selected in the test configuration, the following test results may be available in the Test View.
The Confusion Matrix lets you see, at a glance, how predicted classifications compare to actual classifications, on a per-class level:
The color of each cell corresponds to the number of samples it contains. Samples on the diagonal are correctly classified, while samples off the diagonal are falsely classified as another class, making it easy to see whether any one class is classified better than the others. The matrix also reveals imbalance: if one class has many samples and another has few, your test dataset is unbalanced.
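To make this concrete, the following is a minimal sketch of how such a matrix can be computed and color-coded outside of PerceptiLabs, using scikit-learn and Matplotlib; the y_true and y_pred arrays are hypothetical stand-ins for your model's actual and predicted labels, not PerceptiLabs' internal implementation:

```python
# Minimal sketch of computing and color-coding a confusion matrix,
# using scikit-learn and Matplotlib (not PerceptiLabs' internal code).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2])  # actual classes (hypothetical)
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])  # predicted classes (hypothetical)

# cm[i, j] counts samples of actual class i predicted as class j,
# so correct classifications fall on the diagonal.
cm = confusion_matrix(y_true, y_pred)

plt.imshow(cm, cmap="Blues")  # color intensity encodes the sample count
plt.xlabel("Predicted class")
plt.ylabel("Actual class")
plt.colorbar()
plt.show()
```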
The Classification Metrics pane (also known as the Label Metrics Table) displays information for classification models:
Model Name: the name of the model for which the metrics apply. Multiple models will be listed if tests were performed on more than one model.
Categorical Accuracy: accuracy computed for each category and averaged over all categories.
Top K Categorical Accuracy: frequency of the correct category among the top K predicted categories.
Precision: accuracy of positive predictions.
Recall: percentage of actual positives found (i.e., not misclassified as negatives).
Tip: Precision and Recall are often more telling than accuracy, especially for unbalanced datasets, and generally you want both to be as high as possible: high Recall means the model finds as many samples of each class as possible, while high Precision means those predictions are correct as often as possible.
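For reference, the sketch below reproduces these four metrics with NumPy and scikit-learn. It uses the standard definitions with macro averaging, which may differ in detail from how PerceptiLabs computes them; all arrays are hypothetical examples:

```python
# Hedged sketch of the four classification metrics using NumPy and
# scikit-learn; macro averaging is assumed here, which may differ
# from PerceptiLabs' internal computation.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 1, 2, 2, 1])           # actual classes (hypothetical)
y_prob = np.array([[0.8, 0.1, 0.1],          # predicted probabilities
                   [0.2, 0.6, 0.2],
                   [0.1, 0.2, 0.7],
                   [0.3, 0.4, 0.3],
                   [0.1, 0.8, 0.1]])
y_pred = y_prob.argmax(axis=1)               # most likely class per sample

categorical_accuracy = (y_true == y_pred).mean()

# Top-K (K=2): the correct class appears among the K most likely predictions.
top2 = np.argsort(y_prob, axis=1)[:, -2:]
top_k_accuracy = np.mean([t in row for t, row in zip(y_true, top2)])

# Macro averaging weights every class equally, which is what makes
# precision and recall informative on unbalanced datasets.
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
```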
The Segmentation Metrics pane displays information for image segmentation models:
Model Name: the name of the model for which the metrics apply. Multiple models will be listed if tests were performed on more than one model.
Intersection over Union (IoU): the overlap between the predicted segmentation and the ground-truth segmentation, divided by the area of their union. A value of 1 indicates a perfect match.
Dice coefficient: twice the overlap between the predicted and ground-truth segmentations, divided by the total size of both segmentations.
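Both metrics follow directly from the overlap between the predicted and ground-truth masks. The NumPy sketch below applies the standard definitions to hypothetical binary masks; PerceptiLabs' exact implementation may differ:

```python
# Standard IoU and Dice on binary masks, shown with NumPy on
# hypothetical data; not PerceptiLabs' internal implementation.
import numpy as np

pred_mask = np.array([[1, 1, 0],
                      [0, 1, 0],
                      [0, 0, 0]], dtype=bool)  # hypothetical prediction
true_mask = np.array([[1, 0, 0],
                      [0, 1, 1],
                      [0, 0, 0]], dtype=bool)  # hypothetical ground truth

intersection = np.logical_and(pred_mask, true_mask).sum()  # overlap: 2 pixels
union = np.logical_or(pred_mask, true_mask).sum()          # union: 4 pixels

iou = intersection / union                                     # 2 / 4 = 0.5
dice = 2 * intersection / (pred_mask.sum() + true_mask.sum())  # 4 / 6 ≈ 0.67
```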
The Output Visualization pane displays visualizations of the input data and the final transformed target data:
You can hover the mouse over this pane to display < > buttons and a scrollbar for stepping through the data samples.