Model Reports
Updated on 31 Oct 2024
This article applies to these versions of LandingLens:
LandingLens | LandingLens on Snowflake |
---|---|
✓ | ✓ |
Each model you train in a project displays as a tile in the Model List on the right of the Build page. Each model tile shows high-level performance metrics, including the F1 or IoU score for each split. Click a model tile to see detailed performance data, including the Precision score, Recall score, and confusion matrix for that model.
Click Try Model to see how the model works on new images that aren't in the dataset.
Model List Overview
Here's a quick introduction to the elements of the Model List:
# | Item | Description |
---|---|---|
1 | Model List | Click the Model List button to show/hide the model tiles. |
2 | Name | The model name. |
3 | Performance scores for splits | The performance score for each split. Object Detection and Classification projects show the F1 score. Segmentation projects show the Intersection over Union (IoU) score. |
4 | More Actions | Click the Actions icon (...) to access these tools or shortcuts: Download CSV, View on Models Page, Go to Snapshot Page. |
5 | Predictions | The number of times the model made each of these predictions: False Positive, False Negative, Misclassified, and Correct. (Some predictions aren't applicable to certain project types.) For Segmentation projects, the number is the number of pixels. For more information, go to Confusion Matrix. |
6 | Try Model | Click Try Model to see how the model performs on new images. For more information, go to Try Model. |
7 | Collapse and expand tile | Click to show/hide the predictions. |
8 | Load more models | Click the Load button to show more model tiles. |
Try Model
After you train a model, you can test its performance by using the Try Model tool. Using Try Model is a good way to "spot-check" a model's performance.
When you click Try Model, you can upload a few images to see how the model performs on them. Ideally, you should upload images that aren't already in the dataset and that match your real-world use case. If the model performs well on the new images, you can deploy it. If the model doesn't perform well on the images, try uploading and labeling more images in your project. Then run Try Model again.
The Try Model tool runs inference on each image, so using this tool costs 1 credit per image. (The credit cost is not applicable when using LandingLens on Snowflake.)
To use Try Model:
- Open a project to the Build tab.
- Click Model List to view all models in the project.
- Click Try Model on the model you want to use. (You can also click a model tile to open the model, and then click Try Model.)
- Upload images.
- LandingLens runs the model and shows you the results. If you have an Object Detection or Segmentation project, adjust the Confidence Threshold slider to see how the model performs with different thresholds. Typically, a lower confidence threshold means that you will see more predictions, while a higher confidence threshold means you will see fewer.
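As a rough illustration of that behavior (a hypothetical sketch, not the LandingLens API), the confidence threshold acts as a filter on the model's predictions:

```python
# Hypothetical sketch: how a confidence threshold filters which predictions
# are shown. The prediction data and field names are illustrative only.
predictions = [
    {"class": "Screw", "confidence": 0.97},
    {"class": "Screw", "confidence": 0.62},
    {"class": "Scratch", "confidence": 0.35},
]

def visible_predictions(preds, threshold):
    """Return only the predictions at or above the confidence threshold."""
    return [p for p in preds if p["confidence"] >= threshold]

# A lower threshold shows more predictions; a higher threshold shows fewer.
print(len(visible_predictions(predictions, 0.3)))  # 3
print(len(visible_predictions(predictions, 0.9)))  # 1
```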
Model Report
Click a model tile to see the model's performance report. The report includes the model's Performance, Precision, and Recall scores and its Confusion Matrix, as described in the following sections.
Performance
The Performance section shows how the model performed on the Train, Dev, and Test splits. The number in parentheses is the number of images in the split.
For Object Detection and Segmentation projects, the scores are based on the confidence threshold that is displayed. This is the confidence threshold that produced the best F1 score across all labeled data.
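As a rough sketch of how such a threshold could be chosen (hypothetical values, not LandingLens internals):

```python
# Hypothetical sketch: pick the confidence threshold with the best F1 score
# computed over all labeled data. The scores below are made-up examples.
f1_by_threshold = {0.3: 0.81, 0.5: 0.88, 0.7: 0.85, 0.9: 0.74}

best_threshold = max(f1_by_threshold, key=f1_by_threshold.get)
print(best_threshold)  # 0.5, the threshold with the highest F1 score
```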
The Performance score unit depends on the project type:
Object Detection and Classification: F1 Score
The Performance section for Object Detection and Classification projects shows the F1 score for each split.
For Object Detection, the F1 score combines precision and recall into a single score, creating a unified measure that assesses the model’s effectiveness in minimizing false positives and false negatives. A higher F1 score indicates the model is balancing the two factors well. LandingLens uses micro-averaging to calculate the F1 score.
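For reference, the standard F1 formula that combines precision and recall in this way is:

```latex
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```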
For Classification, the F1, Precision, and Recall scores are identical. This is because Classification models have only two prediction outcomes: "Correct" and "Misclassified". Therefore, the F1, Precision, and Recall scores for Classification models are all calculated with the same algorithm.
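Because every Classification prediction is either Correct or Misclassified, this shared score works out to the fraction of correct predictions. A standard formulation consistent with that description (a sketch, not necessarily the exact LandingLens implementation) is:

```latex
F_1 = \mathrm{Precision} = \mathrm{Recall} = \frac{\text{Correct predictions}}{\text{Total predictions}}
```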
Segmentation: Intersection Over Union (IoU)
The Performance section for Segmentation projects shows the Intersection over Union (IoU) score for each split.
Intersection over Union (IoU) measures the accuracy of the model by measuring the overlap between the predicted and actual masks in an image. A higher IoU indicates better agreement between the ground truth and predicted mask. LandingLens does not include the implicit background class or use micro-averaging when calculating the IoU.
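The standard IoU formula, consistent with this description (a sketch of the usual definition rather than the exact LandingLens implementation), is:

```latex
\mathrm{IoU} = \frac{|\text{Predicted mask} \cap \text{Ground truth mask}|}{|\text{Predicted mask} \cup \text{Ground truth mask}|}
```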
Precision
Select Precision from the drop-down in the Performance section to see the Precision scores for each split.
Precision is the model’s ability to be correct when it predicts that something is present. The higher the Precision score, the more accurate the model's positive predictions are.
For Object Detection and Segmentation, Precision is calculated from the number of true positives (TP) and false positives (FP).
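A standard precision formula consistent with this description, where TP is the number of true positives and FP is the number of false positives, is:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}
```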
For Classification, the F1, Precision, and Recall scores are identical. This is because Classification models have only two prediction outcomes: "Correct" and "Misclassified". These scores are calculated with the same algorithm described in the F1 Score section above.
Recall
Select Recall from the drop-down in the Performance section to see the Recall scores for each split.
Recall is the model’s ability to find all objects of interest. It indicates how well the model identifies all the actual positive instances in the dataset. The higher the Recall score, the lower the chance that the model produces a false negative.
For Object Detection and Segmentation, Recall is calculated from the number of true positives (TP) and false negatives (FN).
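A standard recall formula consistent with this description, where TP is the number of true positives and FN is the number of false negatives, is:

```latex
\mathrm{Recall} = \frac{TP}{TP + FN}
```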
For Classification, the F1, Precision, and Recall scores are identical. This is because Classification models have only two prediction outcomes: "Correct" and "Misclassified". These scores are calculated with the same algorithm described in the F1 Score section above.
Confusion Matrix
LandingLens shows the confusion matrix for the model's performance. A confusion matrix is a table that visualizes the performance of an algorithm (in this case, the computer vision model you selected).
Data is grouped into tables (confusion matrices) based on prediction outcome. The prediction outcomes include:
- False Positive: The model predicted that an object of interest was present, but the model was incorrect. This is only applicable to Object Detection and Segmentation projects.
- False Negative: The model predicted that an object of interest was not present, but the model was incorrect. This is only applicable to Object Detection and Segmentation projects.
- Misclassified: The model correctly predicted that an object of interest was present, but it predicted the wrong class.
- Correct: The model’s prediction was correct. This includes True Positives and True Negatives.
Ground Truth, Prediction, and Count
Each confusion matrix focuses on a specific prediction outcome (False Positive, False Negative, and so on). Each row in a matrix represents an instance of that outcome. The first column is the Ground Truth, which is the labeled class on the image in the dataset. The second column is the Prediction, which is the class that the model predicted.
The third column is Count, which is how often the prediction occurred. The unit depends on the project type:
- Object Detection and Classification: The number of times that the model made that prediction for the specific Ground Truth / Prediction pairing.
- Segmentation: The number of pixels for which the model made that prediction for the specific Ground Truth / Prediction pairing.
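As a rough illustration of how these rows and counts come about (a hypothetical sketch, not LandingLens internals), each Ground Truth / Prediction pairing is tallied:

```python
from collections import Counter

# Hypothetical sketch: count each (ground truth, prediction) pairing to form
# confusion matrix rows. The class names and pairs below are illustrative only.
pairs = [
    ("Screw", "Screw"),    # Correct
    ("Screw", "Scratch"),  # Misclassified
    ("Screw", None),       # False Negative
    (None, "Screw"),       # False Positive
    ("Screw", "Screw"),    # Correct
]

for (ground_truth, prediction), count in Counter(pairs).items():
    print(f"Ground Truth: {ground_truth!r:>10}  Prediction: {prediction!r:>10}  Count: {count}")
```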
Download CSV of Model Predictions
For Object Detection and Classification projects, you can download a CSV that shows the ground truth labels and model predictions for images. You can download the CSV in two ways, as described in the following sections.
Download CSV: Model Predictions for Images in a Model Dataset
You can download a CSV of model predictions for the dataset of images that a model was trained on. This is available for Object Detection and Classification projects.
The prediction data in the CSV will be based on the selected model and its default confidence threshold.
To download the CSV for images in a model's dataset:
- Open a project to the Build tab.
- Click Model List to view all models in the project.
- Click the Actions icon (...) on the model tile and click Download CSV. (You can also click a model tile to open the model, and then click Download CSV).
- The file is downloaded to your computer. For a description of all data in the file, go to CSV Data.
Download CSV: Model Predictions for Select Images
You can download a CSV of model predictions for select images in your Object Detection or Classification dataset.
The prediction data in the CSV will be based on the selected model and confidence threshold (if you manually change the threshold, that threshold is used in the CSV).
If a model hasn't been created in the project yet, the prediction fields in the CSV will be blank.
To download the CSV for select images in a dataset:
- Open a project to the Build tab.
- Select the model you want to see the predictions for from the Prediction/Model drop-down menu.
- Select the images you want to download the CSV for.
- Click Options in the action bar near the bottom of the screen and select Download CSV.
- Click Download on the pop-up window that opens.
- The file is downloaded to your computer. For a description of all data in the file, go to CSV Data.
CSV Data
When you download a CSV of a dataset, the file includes the information described in the following table.
Item | Description | Example |
---|---|---|
Project Name | Name of the LandingLens project. | Defect Detection |
Project Type | Project type ("bounding_box" is Object Detection). | classification |
Image Name | The file name of the image uploaded to LandingLens. | sample_003.jpg |
Image ID | Unique ID assigned to the image. | 29786892 |
Split | The split assigned to the image. | train |
Upload Time | The time the image was uploaded to LandingLens. All times are in Coordinated Universal Time (UTC). | Mon Jun 26 2023 16:37:10 GMT+0000 (Coordinated Universal Time) |
Image Width | The width (in pixels) of the image when it was uploaded to LandingLens. | 4771 |
Image Height | The height (in pixels) of the image when it was uploaded to LandingLens. | 2684 |
Model Name | The name of the model in LandingLens. | 100% Precision and Recall |
Metadata | Any metadata assigned to the image. If the image doesn't have any metadata, the value is "{}". | {"Author":"Eric Smith","Organization":"QA"} |
GT_Class | The Classes you assigned to the image (ground truth or “GT”). For Object Detection, this also includes the number of objects you labeled. | {"Screw":3} |
PRED_Class | The Classes the model predicted. For Object Detection, this also includes the number of objects predicted. If the model didn't predict any objects, the value is {"null":1}. | {"Screw":2} |
Model_Correct | If the model's prediction matched the original label (ground truth or “GT”), the value is true. If the model's prediction didn't match the original label (ground truth or “GT”), the value is false. Only applicable to Classification projects. | true |
PRED_Class_Confidence / PRED_Confidence | The model's Confidence Score for each object predicted. If the model didn't predict any objects, the value is {}. | [{"Screw":0.94796216},{"Screw":0.9787127}] |
Class_TotalArea | The total area (in pixels) of the model's predicted area. If the model didn't predict any objects, the value is {}. Only applicable to Object Detection projects. | {"Screw":76060} |
GT-PRED JSON | The JSON output comparing the original labels (ground truth or "GT") to the model's predictions. For more information, go to JSON Output. | {"gtDefectName":"No Fire","predDefectName":"No Fire","predConfidence":0.9684047} |
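As a rough illustration of working with the downloaded file (the column names come from the table above; the file name and the comparison logic are hypothetical, not an official LandingLens script):

```python
import csv
import json

# Hypothetical sketch: read the exported CSV and compare ground truth classes
# to the model's predictions. "model_predictions.csv" is a placeholder name.
with open("model_predictions.csv", newline="") as f:
    rows = list(csv.DictReader(f))

matches = 0
for row in rows:
    ground_truth = json.loads(row["GT_Class"])  # e.g. {"Screw": 3}
    prediction = json.loads(row["PRED_Class"])  # e.g. {"Screw": 2} or {"null": 1}
    if ground_truth == prediction:              # rough check, similar in spirit to Model_Correct
        matches += 1

print(f"{matches} of {len(rows)} images match their ground truth labels exactly")
```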
View on Models Page
To adjust the confidence threshold, view visual predictions, or compare the model to other models in the same project, open the model in the Models tab.
The Model List has a few shortcuts to the Models tab:
- Click the Actions icon (...) on the model tile and click View on Models Page.
- Click a model tile and click View Full Report.
- Click a model tile, click the Actions icon (...) and click View on Models Page.
Go to Snapshot Page
The Model List has a few shortcuts to the Snapshot page:
- Click the Actions icon (...) on the model tile and click Go to Snapshot Page.
- Click a model tile, click the Actions icon (...) and click Go to Snapshot Page.