Troubleshoot Inaccurate Predictions
Updated on 05 Aug 2024
This article applies to these versions of LandingLens:

| LandingLens | LandingLens on Snowflake |
|---|---|
| ✓ | ✓ |
If a model has many False Positives, False Negatives, or a combination of the two, use the troubleshooting tips below to help you improve model performance.
Model Has Many False Positives and False Negatives
If a model has many False Positives and False Negatives, it’s possible that it is confusing one class for another. This might indicate that the classes are too similar to each other, or that there aren’t enough examples of at least one of the classes in the dataset.
For example, let’s say that you have two classes in your project: Water Spot and Oil Spot. After training a model, you notice that it locates some water spots but predicts that they are Oil Spots.
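One way to confirm this kind of confusion is to build a confusion matrix from your ground-truth labels and the model's predictions. The sketch below is a minimal example using scikit-learn; the variable contents are placeholders, and how you export labels and predictions depends on your workflow:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Placeholder data: ground-truth labels and the model's predictions
# for the same set of regions, exported from your project.
ground_truth = ["Water Spot", "Water Spot", "Oil Spot", "Water Spot"]
predictions = ["Oil Spot", "Water Spot", "Oil Spot", "Oil Spot"]

classes = ["Water Spot", "Oil Spot"]
matrix = confusion_matrix(ground_truth, predictions, labels=classes)

# Rows are true classes, columns are predicted classes. Large
# off-diagonal counts show which classes the model confuses.
ConfusionMatrixDisplay(matrix, display_labels=classes).plot()
plt.show()
```

In the example above, a large count in the Water Spot row under the Oil Spot column confirms that the model is mistaking one class for the other.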
Possible Root Causes and Solutions
The following table lists possible root causes that could lead to this issue. Check which root cause seems likely, and then try the related troubleshooting tips.
| Root Cause | Troubleshooting Tips |
|---|---|
| Images in the dataset are mislabeled. | Check if images are labeled correctly, and fix any mislabels. Add details to the Label Book to help eliminate confusion and get consistent labeling. |
| Model needs more data. | Add and label more images of the classes that the model predicted incorrectly. |
| There are no visually distinguishable differences between the two classes. | Consider merging the classes, if that makes sense for your use case. For example, if you currently have Water Spot and Oil Spot classes, consider using a single class called Defect instead (see the sketch after this table). |
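If you decide to merge similar classes, one approach is to remap the labels in your exported annotations before re-importing them. This is a minimal sketch that assumes a hypothetical export format (a list of dicts with a `label_name` field); adapt it to whatever your actual export contains:

```python
# Hypothetical export format: one dict per labeled region.
annotations = [
    {"image": "part_001.jpg", "label_name": "Water Spot"},
    {"image": "part_002.jpg", "label_name": "Oil Spot"},
]

# Fold both easily confused classes into a single merged class.
merge_map = {"Water Spot": "Defect", "Oil Spot": "Defect"}

for annotation in annotations:
    old = annotation["label_name"]
    annotation["label_name"] = merge_map.get(old, old)

print(annotations)  # both regions are now labeled "Defect"
```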
Model Has Many False Negatives
If a model has many False Negatives (but not False Positives), it’s possible that there aren’t enough examples of the classes that it’s failing to detect. You can think of this scenario as the model not predicting anything in an area where something should have been identified.
A False Negative means that the model failed to detect an object that is actually present. For example, let’s say you have a class called Crease. After training a model, you notice that it doesn’t detect Creases.
Possible Root Causes and Solutions
The following table lists possible root causes that could lead to this issue. Check which root cause seems likely, and then try the related troubleshooting tips.
| Root Cause | Troubleshooting Tips |
|---|---|
| Images in the dataset are mislabeled. | Check if images are labeled correctly, and fix any mislabels. Add details to the Label Book to help eliminate confusion and get consistent labeling. |
| Model needs more data. | Add and label more images of the classes that the model didn’t predict. |
| Environmental noise causes labeled regions to blend into non-labeled regions, and the model can’t tell the difference between the two. | Improve the environmental and lighting conditions. For tips, go to Image Capture Best Practices. If you’re using Custom Training, consider adding or increasing the strength of augmentations that mimic the real-world image capture conditions. |
| The object is too small or not visible after resizing. | Increase the image size or improve the image resolution. |
| Applicable to Custom Training: a data augmentation is too strong, and the object to identify is no longer visible. | Review the augmentation settings and consider whether any could “hide” the object; if so, remove that augmentation or decrease its strength (see the preview sketch after this table). |
| The confidence threshold is too high. | Use a lower confidence threshold if that makes sense for your use case (see the threshold sketch at the end of this article). |
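For the augmentation rows above, it can help to run your training augmentations on a few labeled images and check whether the object of interest is still visible afterward. The sketch below uses the albumentations library as a stand-in; the specific transforms, limits, and file paths are assumptions, not your actual Custom Training settings:

```python
import cv2
import albumentations as A

# Two versions of the same pipeline: an aggressive one that might
# blur a small crease out of existence, and a gentler alternative.
strong = A.Compose([
    A.GaussianBlur(blur_limit=(9, 15), p=1.0),
    A.RandomBrightnessContrast(brightness_limit=0.5, p=1.0),
])
gentle = A.Compose([
    A.GaussianBlur(blur_limit=(3, 5), p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.2, p=0.5),
])

image = cv2.imread("crease_example.jpg")  # placeholder image path
for name, pipeline in [("strong", strong), ("gentle", gentle)]:
    augmented = pipeline(image=image)["image"]
    cv2.imwrite(f"preview_{name}.jpg", augmented)

# Open the two previews: if the crease disappears in the "strong"
# output, that augmentation is too strong for this dataset.
```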
Model Has Many False Positives
If a model has many False Positives (but not False Negatives), it’s possible that there aren’t enough examples of the classes that it’s incorrectly identifying. You can think of this scenario as the model predicting something in an area where nothing should have been identified.
A False Positive means that the model detected an object that wasn’t there. For example, let’s say you labeled a few areas of an image. The model then predicts the class in areas that you didn’t label.
Possible Root Causes and Solutions
The following table lists possible root causes that could lead to this issue. Check which root cause seems likely, and then try the related troubleshooting tips.
| Root Cause | Troubleshooting Tips |
|---|---|
| Images in the dataset are mislabeled. | Check if images are labeled correctly, and fix any mislabels. Add details to the Label Book to help eliminate confusion and get consistent labeling. |
| Model needs more data. | Add and label more images of the classes that the model predicted incorrectly. If you have multiple classes, try to have the same number of examples of each class (see the counting sketch after this table). |
| Environmental noise causes labeled regions to blend into non-labeled regions, and the model can’t tell the difference between the two. | Improve the environmental and lighting conditions. For tips, go to Image Capture Best Practices. If you’re using Custom Training, consider adding or increasing the strength of augmentations that mimic the real-world image capture conditions. |
| The confidence threshold is too low. | Use a higher confidence threshold if that makes sense for your use case (see the threshold sketch after this table). |
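For the class-balance tip above, a quick count of labeled regions per class shows how unbalanced the dataset is. This sketch assumes the same hypothetical annotation format as the class-merging example earlier:

```python
from collections import Counter

# Hypothetical export format: one dict per labeled region.
annotations = [
    {"image": "part_001.jpg", "label_name": "Scratch"},
    {"image": "part_002.jpg", "label_name": "Scratch"},
    {"image": "part_003.jpg", "label_name": "Dent"},
]

counts = Counter(a["label_name"] for a in annotations)
print(counts)  # Counter({'Scratch': 2, 'Dent': 1})
# Classes with far fewer examples are the ones to label more of.
```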
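Both threshold tips (lower it when False Negatives dominate, raise it when False Positives dominate) are two sides of the same tradeoff. The sketch below sweeps a threshold over a set of scored predictions; the scores and correctness flags are made up, and how you obtain per-prediction confidence scores depends on your deployment setup:

```python
# Made-up (confidence, is_correct) pairs for a batch of predictions.
predictions = [
    (0.95, True), (0.80, True), (0.60, False),
    (0.55, True), (0.30, False), (0.20, False),
]

for threshold in (0.3, 0.5, 0.7, 0.9):
    kept = [(score, ok) for score, ok in predictions if score >= threshold]
    false_positives = sum(1 for _, ok in kept if not ok)
    # Correct predictions filtered out by the threshold become misses.
    missed = sum(1 for score, ok in predictions if ok and score < threshold)
    print(f"threshold={threshold}: kept={len(kept)}, "
          f"false positives={false_positives}, missed={missed}")
```

Raising the threshold trims False Positives at the cost of more misses; lowering it does the reverse. Pick the point that matches the cost of each error type in your use case.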