Models Page
  • 16 Jan 2025


This article applies to these versions of LandingLens:

  • LandingLens
  • LandingLens on Snowflake

Use the Models page to analyze and compare model performance across multiple datasets in a project. The Models page gives you the tools to:

  • Analyze how a model performed. Quickly see how the model performed on its Train, Dev, and Test sets. You can also view the model's Loss chart, Validation chart, F1 or IoU score, and predictions. For more information, go to Model Information.
  • See how a model performs on different datasets. When you train a model, you can see how it performed on the dataset it was trained with. On the Models page, you can add more datasets (called "evaluation sets") and run the models on those images to see how the model performs. To get started, go to Evaluation Sets.
  • Compare two models. When you run a model comparison, LandingLens shows you the differences in F1 or IoU score and the number of correct and incorrect predictions. You can use this information to fine-tune your labels, datasets, and hyperparameters. To compare models, go to Compare Models.
  • Deploy models. After analyzing and comparing model performance, choose which model or models you want to deploy. Go to Cloud Deployment.
Analyze and Compare Model Performance Across Different Datasets on the Models Page
Note:
Due to the unique nature of Visual Prompting, the Models tab is not available for Visual Prompting projects.  

How do I use the Models table to see which model is best for my project?

You can use the Models table to quickly evaluate model performance across different datasets. You can also see how the same model—but with different confidence scores—performs on the same datasets.

There is no one-size-fits-all solution, but quickly comparing model performance can help you identify 1) what model works best for your use case and 2) what models might need better images or labels.

Here are some considerations:

  • If two models have the same confidence threshold but different scores on the same datasets, view the predictions for the model with the lower score. Are the labels correct? Do you need more images of a specific class?
  • If a model has a higher score on a dataset that is most like your real-world scenario, that model might be the best one for your use case.

Models Table Overview

Here's a quick orientation to the Models table:

Key Parts of the Models Table

  1. Model: The model name and training method (customized or default).
  2. Evaluation sets: These columns consist of your evaluation sets, which are sets of images used to evaluate model performance. The model's Train, Dev, and Test sets display by default, and you can add more datasets and run the models on those sets. Each cell shows the F1 score (for Object Detection and Classification projects) or IoU score (for Segmentation projects).
  3. Confidence Threshold: The Confidence Threshold for the model. For details, go to Confidence Threshold.
  4. Deployment: Deploy the model via Cloud Deployment. If the model has been deployed with Cloud Deployment, an icon for each endpoint displays.
  5. More Actions: Favorite, deploy, and delete models. You can also copy the Model ID.

Model Information

The Model column displays the model name and its training method:

Models

Click the cell to see the model's Training Information and Performance Report.

A model can have multiple rows. For example, if you deploy a model with a confidence threshold other than the default, two rows for that model display in the table: the first row has the default confidence threshold, and the second has the custom confidence threshold.

For example, in the screenshot below, the default confidence threshold is 0.71, and the custom confidence threshold is 0.99.

Compare How the Same Model Performs with Different Confidence Thresholds

Evaluation Sets

The Models table shows how each model performs on different sets of images. These image sets are called evaluation sets, because they're used to evaluate model performance. 

The default evaluation sets are the Train, Dev, and Test splits for the models. You can add evaluation sets.

Click a cell to see the Performance Report for that evaluation set.

Evaluation Sets

Evaluation Set Scores

A good indication that a model performs well is that its Train and Dev set scores are high and similar to each other.

The score for the Train set might be higher than the scores for the other splits, because these are the images that the model trains on. It is normal for the Train set score to be less than 100% because models usually make mistakes during the training process. 

In fact, a score of 100% on the Train set might indicate overfitting, especially if the Dev set score is much lower. If the two scores are very different, try adding more images to these sets.

Similarly, the score for the Test set might be lower than the scores for the other splits, because the model is not trained on these images.
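The Train/Dev comparison described above can be sketched as a simple heuristic. This is an illustration only: the function name and the 0.15 gap value are examples chosen for this sketch, not LandingLens defaults.

```python
# Illustrative heuristic only (the 0.15 gap value is an example, not a
# LandingLens default): flag a possible overfit when the Train set score
# is much higher than the Dev set score.
def overfitting_warning(train_score: float, dev_score: float,
                        gap: float = 0.15) -> bool:
    """Return True when the Train/Dev score gap suggests overfitting."""
    return train_score - dev_score > gap

# A perfect Train score paired with a much lower Dev score trips the check:
print(overfitting_warning(1.00, 0.72))  # True
print(overfitting_warning(0.91, 0.88))  # False
```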

The following image and table explain the evaluation set scores.

Evaluation Set Scores
  1. Percentage: Shows the F1 score (for Object Detection and Classification projects) or IoU score (for Segmentation projects). Learn more about these scores in Overall Score for the Evaluation Set.
  2. "--": The subset doesn't have any images. If you don't assign splits to a dataset before you train a model, LandingLens automatically assigns images to the Train and Dev splits, but not the Test split. Therefore, you will see "--" for the Test split in that situation.
  3. Blank: The model hasn't run on the set yet. To run the model, hover over the cell and click Evaluate. For more information, go to Run the Model on a "Blank" Set.
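The article doesn't define the two scores, but the standard formulas behind F1 and IoU can be sketched as follows. This is a hedged illustration of the usual definitions, not the LandingLens implementation; the pixel-set IoU here is a simplification of how segmentation overlap is typically measured.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Standard F1 score: the harmonic mean of precision and recall,
    computed from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def iou(predicted: set, ground_truth: set) -> float:
    """Standard IoU: intersection over union, here over sets of pixel
    coordinates (a simplification of per-mask segmentation scoring)."""
    if not predicted and not ground_truth:
        return 1.0
    return len(predicted & ground_truth) / len(predicted | ground_truth)

# 8 correct detections, 2 spurious, 2 missed -> precision = recall = 0.8
print(f1_score(8, 2, 2))                        # ~0.8
print(iou({(0, 0), (0, 1)}, {(0, 1), (1, 1)}))  # ~0.33
```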

Run the Model on a "Blank" Set

If an evaluation set cell is blank, hover over the cell and click Evaluate. The model runs inference on the images in that evaluation set and displays the score.

Run the Model on a "Blank" Evaluation Set

Add Evaluation Sets and Run Models on Them

By default, the Models table displays each model's performance scores for its Train, Dev, and Test sets. You can add more datasets. These are called evaluation sets, because they're used to evaluate model performance.

To add an evaluation set: 

  1. Open the project to the Models tab.
  2. Click Add Evaluation Set. If you've already dismissed this message, click + in the table header.
    Add an Evaluation Set
    Add an Evaluation Set (If You've Already Dismissed the Message)
  3. Select a snapshot.
  4. If you want to run the model only on one of the splits, click that split.
  5. Click Add to the Table.
    Select a Snapshot to Use as an Evaluation Set
  6. LandingLens adds a column for that dataset. To run a model on the dataset, hover over the cell and click Evaluate. (To prevent slowing down the system, LandingLens doesn't automatically run each model on the evaluation sets. Click Evaluate for each model / evaluation set combination that you want to run.)
    Run the Model on a Specific Evaluation Set
  7. The model runs inference on the images in that evaluation set and displays the F1 or IoU score.
    The Score Displays
  8. Click the percentage to open the Performance Report.
    View the Performance Report for the Evaluation Set

Archive Evaluation Sets

You can archive evaluation sets. This removes the evaluation set column from the Models table. You can later add the evaluation set to the table again.

To archive an evaluation set:

  1. Open the project to the Models tab.
  2. Hover over the area to the left of the evaluation set name.
  3. Click the Archive icon that appears.
    Hover to See the Archive Icon and Click It
  4. Click Yes on the pop-up window to confirm the action.

Confidence Threshold

The Confidence Threshold column shows the Confidence Threshold for that model. 

The confidence score indicates how confident the model is that its prediction is correct.

The confidence threshold is the minimum confidence score that a prediction must have for the model to count it as a valid prediction. Typically, a lower confidence threshold means that you will see more predictions, while a higher confidence threshold means that you will see fewer.

When LandingLens creates a model, it selects the confidence threshold with the best F1 score for all labeled data.

Confidence thresholds are only applicable to Object Detection and Segmentation projects.

Confidence Threshold
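The filtering and threshold-selection behavior described above can be sketched as follows. This is an illustration of the general technique, not the LandingLens implementation, and the function names and the scores dictionary below are hypothetical examples.

```python
# Illustrative sketch only, not the LandingLens implementation: how a
# confidence threshold filters predictions, and how the threshold with
# the best F1 score could be chosen by sweeping candidate values.
def filter_predictions(predictions, threshold):
    """Keep only predictions whose confidence score meets the threshold."""
    return [p for p in predictions if p["confidence"] >= threshold]

def best_threshold(candidates, f1_at):
    """Return the candidate threshold with the highest F1 score.
    f1_at maps a threshold to the F1 score measured at that threshold."""
    return max(candidates, key=f1_at)

# A lower threshold keeps more predictions; a higher one keeps fewer.
preds = [{"confidence": 0.95}, {"confidence": 0.60}, {"confidence": 0.30}]
print(len(filter_predictions(preds, 0.5)))  # 2
print(len(filter_predictions(preds, 0.9)))  # 1

# Hypothetical F1 scores measured at each candidate threshold.
scores = {0.3: 0.61, 0.5: 0.82, 0.7: 0.74}
print(best_threshold(scores.keys(), scores.get))  # 0.5
```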

Cloud Deployment

The Deployment column allows you to deploy a model via Cloud Deployment, and to see how many times the model has been deployed via Cloud Deployment.

To start the deployment process, click the Deploy or + button in the Deployment column. For more information, go to Cloud Deployment.

A Cloud icon displays for each deployment. Click an icon to see the deployment details for the model. LandingLens cycles through seven colors for the Cloud icon.

View and Start Deployments
Note:
Icons don't display for LandingEdge or Docker deployments.

More Actions

In the last column, you can favorite models, copy the Model ID, and delete models:

More Actions for Models

Favorite Models

To mark a model as a "favorite", click the Favorite (star) icon. This changes the star color to yellow, so that you can easily see which models in the table you've marked as favorites. You can favorite multiple models. To unfavorite a model, click the Favorite icon again.

Click the Star to Favorite and Unfavorite Models

To filter by favorites, select the Only show favorite models checkbox.

Filter by Favorites

Copy Model ID

If you're deploying a model via Docker, the Model ID is included in the deployment command. The Model ID tells the application which model to download from LandingLens. To locate the Model ID on the Models page, click the Actions (...) icon and select Copy Model ID.

Copy the Model ID

Delete Models

You can delete a model from the table. This action removes the model only from the table; you can still deploy it and access it from other areas in LandingLens, like Dataset Snapshots.

To delete a model, click the Actions (...) icon and select Delete. A model can't be re-added to this table after it's been deleted.

Delete a Model
