- Updated on 17 Jul 2023
- 11 Minutes to read
Visual Prompting is a paradigm shift in the field of computer vision. You label only a few small areas of an object in a few images, and the Model almost immediately detects the whole object in all of your images. In most cases, the Model’s Predictions aren’t 100% accurate the first time around, but you can easily label a few more small areas, re-run the Model, and check your results.
Visual Prompting is fast and highly accurate, enabling you to quickly create and deploy your own custom computer vision Model.
Because of its speed and ease of use, we encourage you to try Visual Prompting before creating any other Project Type.
What does the phrase “Visual Prompting” mean?
In the field of artificial intelligence (AI), prompting refers to providing input to an AI Model so that the Model generates a response. It's like prompting a person to answer a question, or giving a student an essay prompt to write about.
You might have already heard about AI prompting without realizing what it was called. For example, the popular chatbot ChatGPT relies on prompting: you ask ChatGPT a question or tell it to describe something, and the Model responds. The application DALL-E is similar: you describe what you want to see, and the Model generates a digital image based on your description.
Both of these types of applications rely on textual prompts; you need to write a sentence or series of words for the AI Model to work. ChatGPT has a text-to-text workflow, and DALL-E has a text-to-image workflow.
Now, with the release of Visual Prompting, Landing AI has introduced an image-to-image AI prompting workflow. You give a visual prompt to the Visual Prompting Model, and it provides a visual output. In practice, you label a small section of an object in an image, and the Visual Prompting Model detects the whole object you marked, not just in the original image, but in others too.
Visual Prompting is the next step in broadening the applications of AI prompting.
Visual Prompting Workflow
The Visual Prompting workflow is an iterative process. After you label your first image, run your Model. When you see the results, fine-tune your labels and re-run the Model. It is normal for this process of re-labeling and re-running to occur a few times before your Model is ready to be deployed.
Visual Prompting Models run quickly, so even though this is an iterative process, you will be able to deploy even complex Models in minutes.
Here's an overview of the Visual Prompting workflow:
- Upload images
- Label small portions of an image. Use at least two Classes.
- Click Run.
- Review the Predictions of that image.
- If the results aren't quite right yet, fine-tune your labeling and run the Model again. Repeat until you're happy with the results.
- Deploy the Model.
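The iterative loop above can be sketched as a toy simulation. Note that `run_model` below is a made-up stand-in that simply rewards extra labels with a higher (fake) accuracy score; it is not the actual Visual Prompting Model:

```python
# Toy simulation of the label -> run -> review loop. Each round of extra
# labels nudges a fake accuracy score upward until the results look good.

def run_model(num_labeled_areas):
    """Stand-in for clicking Run: more prompts -> better (fake) accuracy."""
    return min(1.0, 0.5 + 0.1 * num_labeled_areas)

labeled_areas = 2            # start with a few prompts across two Classes
accuracy = run_model(labeled_areas)
iterations = 0
while accuracy < 0.9:        # "happy with the results" threshold
    labeled_areas += 1       # fine-tune: label one more small area
    accuracy = run_model(labeled_areas)
    iterations += 1

print(iterations, accuracy)  # only a couple of passes were needed
```

In practice, as the workflow above describes, each pass is a quick label-and-run cycle in the browser rather than a loop in code.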
If you've created computer vision projects before, you'll immediately notice that labeling images with Visual Prompting is drastically different—and easier and cooler—than with those other projects. Because of the powerful and intuitive algorithms built into Visual Prompting, you only need to label a few small areas to get lightning-fast accurate results.
Accurately Label Small Areas
Visual Prompting introduces a fast new way to label images called Prompting. Prompting is the act of only labeling a small area of the object you want to identify. Your Model learns from each pixel you label, so it's important that your labels are precise.
Say you build a Model to detect birds. Take a look at the image below. There are two Classes labeled: Bird (Purple) and Background (Yellow). Do you notice anything wrong? If you look closely, you can see that the Bird Class is not labeled precisely because the Purple line stretches into the background. The Model will think that portions of the background belong to the Bird Class, and it won't be able to accurately detect birds.
Now let's look at the next image. The Eraser tool was used to delete the purple "Bird" label that included the background, so now the label only covers the bird. This labeling is more precise, and will create a more accurate Model.
It's generally okay if you make a couple of minor mistakes in other Project Types, like Segmentation or Object Detection. However, if you make a mistake in Visual Prompting, the Model won't be as forgiving and may show some inaccurate results.
Label at Least Two Classes
Visual Prompting Projects require at least two Classes. By establishing two Classes, the Model can "understand" where one Class ends and another begins. Even if you only want to detect one specific object, you need to give the Model a second Class to compare against. A Visual Prompting Model predicts a Class for each pixel in an image, so if you had only one Class, the whole image would be predicted as belonging to that Class. That's why you must have at least two Classes.
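To make the per-pixel point concrete, here is a toy stand-in (not the actual Visual Prompting Model) that assigns every pixel of a tiny grayscale "image" to a Class by brightness. With only one Class, every pixel trivially belongs to it:

```python
# Toy per-pixel classifier: assigns each pixel one of the defined Classes.
# A brightness threshold stands in for the real model's learned boundary.

def predict_mask(image, classes, threshold=128):
    """Return a mask of Class names, one per pixel."""
    if len(classes) == 1:
        # One Class: the whole image is that Class -- no boundary to learn.
        return [[classes[0] for _ in row] for row in image]
    return [[classes[0] if pixel >= threshold else classes[1] for pixel in row]
            for row in image]

image = [
    [200, 210, 40],
    [190, 220, 35],
]

print(predict_mask(image, ["Apple"]))               # every pixel is "Apple"
print(predict_mask(image, ["Apple", "Not Apple"]))  # a real boundary appears
```

With two Classes the mask can finally distinguish object pixels from everything else, which is exactly why a second Class like "Not [object]" is required.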
If you only want to detect one object and aren't sure what to name your second Class, consider creating a Class with one of these names:
- Not [object]
- Nothing to Label
For example, let's say you want to detect apples during the manufacturing process. You don't need to detect the machinery, conveyor belt, workers, or anything else. You could set up a Visual Prompting Project that results in the images below. The first is the original image. The second shows apples in pink and all other items in blue. Even though the machinery and conveyor belt look very different from each other, they are both categorized as part of the background.
Only Label a Few Images
Visual Prompting doesn't require many images. We recommend you run the Model after adding labels to just one image. Review the Predictions, and relabel that image based on any incorrect Predictions. Then run the Model again, and view the Predictions for a few other images. Label one or two more images, and run the Model again. Continue to iterate like this until the Predictions are accurate.
Use Case: Images Are Similar
If all of your images are of the same objects, with the same backgrounds, in similar conditions (like lighting), then you will need fewer labeled images.
For example, if you're creating a Model to detect issues on printed circuit boards (PCBs) on an assembly line at the same inspection point, you can expect all of the images to be very similar to each other.
Use Case: Images Are Very Different
If all of your images are of different objects, with different backgrounds, in different conditions (like lighting), then you will need more labeled images.
For example, if you're creating a Model to detect dogs, you will need more images to account for all the different variables. You could upload images of the following:
- Different dog breeds
- Different colored dogs
- Dogs from different angles
- Dogs with collars
- Dogs without collars
- Dogs in different settings
- Dogs in sunny conditions
- Dogs in overcast conditions
LandingLens offers you several tools to help you label and navigate images. Refer to the image and table below to learn more about these tools.
| Tool | Description | Keyboard Shortcuts |
|---|---|---|
| Zoom | Zoom in and out. | Mouse wheel up: Zoom out. Mouse wheel down: Zoom in |
| Pan | Click the image and move it. This is especially useful if the image is zoomed in and you want to see part of the image that is out of the frame. | V: Select |
| Brush | "Paint" over an area that you want to identify. | B: Select |
| Eraser | Remove part of a brush stroke. | E: Select |
| Size Slider | Move the slider to change the size of the Brush or Eraser. | ]: Make larger. [: Make smaller |
| Undo | Undo the last action. | |
| Redo | Redo the action that you undid. | |
| Clear All Labels | Remove all labels from the image. | None |
| Labels | View the labels you added. This toggle is only visible after you run the Model. | None |
| Prediction | View the Predictions from the Model. This toggle is only visible after you run the Model. | None |
| Class | Select a Class. Any labels added with the Brush are applied to the selected Class. | Up arrow key: Select the Class above. Down arrow key: Select the Class below |
| Guides | Use the white dotted guides to align the Brush or Eraser with any vertical or horizontal features. (Not marked on the image above.) | None |
Navigate Images in Labeling View
When you're labeling an image, you can easily see and navigate to the other images in your dataset.
Click the Browse Images in Sidebar icon to view the images in your dataset in a bar above the opened image. These images are grouped by whether they are Labeled or Unlabeled.
Click the Previous and Next icons to navigate to other images. Use the left arrow and right arrow keys as shortcuts.
Run the Model, Review Predictions, and Iterate Your Labels
After you've labeled at least two Classes, click Run at the bottom of the page (or press Enter).
In only a few seconds, the Model's Predictions overlay on the image.
Start by labeling a few areas that have incorrect Predictions. For example, in the GIF below, an area is incorrectly predicted as House (Blue). To correct the Model, label that section as Farmland (Purple) and click Run. The Model then accurately predicts that area as Farmland (Purple).
We recommend you run through this cycle a few times: run the Model, review, label, repeat. For many datasets, this process only takes a few minutes.
Deploy Your Visual Prompting Model
After you're happy with your Visual Prompting Model, you are ready to use it! To use a Model, you deploy it, which means you put the Model in a virtual location so that you can upload images to it. When you upload images, the Model runs inferences, which means it detects what it was trained to look for.
Unlike other Project Types in LandingLens, you don't need to select a specific Model when deploying Visual Prompting. This is because you're continuously iterating on the same Model, and not creating a new one each time you run the Model.
For detailed instructions for how to deploy Models, go to Cloud Deployment.
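As an illustration only, a cloud inference call generally amounts to an HTTP POST with an image attached and an API key for authentication. Every URL, header name, and field name below is a placeholder, not the real API; refer to the Cloud Deployment documentation for the actual endpoint and credentials:

```python
# Hypothetical sketch of preparing a cloud inference call. The URL, auth
# header, and payload layout are placeholders -- see the Cloud Deployment
# documentation for the real endpoint and credentials.

def build_inference_request(endpoint_id, api_key, image_bytes):
    """Return the (url, headers, files) triple for a hypothetical POST."""
    url = f"https://example.invalid/inference/{endpoint_id}"   # placeholder URL
    headers = {"apikey": api_key}                              # placeholder auth header
    files = {"file": ("image.png", image_bytes, "image/png")}  # image payload
    return url, headers, files

url, headers, files = build_inference_request("my-endpoint-id", "MY_API_KEY", b"...")
# A real client would now send this, e.g. requests.post(url, headers=headers, files=files),
# and parse the returned Predictions.
```

The Model then runs inference on each uploaded image and returns what it was trained to look for.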
Deployment Limitations in Beta
Visual Prompting is in beta, so we're still ironing out a few kinks. Here are known limitations for this beta release:
- The Predictions in the user interface will have more distinct lines around detected objects than the Predictions in the deployed Model. We're working hard to get the Predictions in the deployed Model as good as what you see on screen.
- Models from Visual Prompting Projects can't be deployed in LandingEdge.
Project View
When you're looking at and labeling an image (Labeling View), click Project to see all the images in your Project.
In Project View, you can view all images, organized by Labeled and Unlabeled.
Labels and Prediction Toggles
- Labels: View the labels you added.
- Prediction: View the Predictions from the Model.
Visual Prompting FAQ
Is Visual Prompting a form of Segmentation?
Yes! Visual Prompting is a type of semantic segmentation, just like the Segmentation Project Type. Semantic segmentation is when the Model learns from each pixel in an image. For example, say you have an image of New York City. You can label some pixels so the Model can identify these Classes: people, cars, buildings, billboards, and the sky. Furthermore, all objects of a Class are grouped into a single category. In this case, every car detected is simply labeled as "Car", instead of Honda, Toyota, Mercedes, etc.
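That last point can be shown with a few lines of plain Python. The masks below are hand-made toy data, not model output; they just illustrate how instance-level names collapse into one semantic Class:

```python
# Toy data: a 2x3 "image" where each pixel carries an instance-level name.
# Semantic segmentation only cares about the Class, so every car brand
# collapses into the single Class "Car".
instance_mask = [
    ["Honda", "Honda", "Sky"],
    ["Toyota", "Sky", "Sky"],
]
CAR_BRANDS = {"Honda", "Toyota", "Mercedes"}

semantic_mask = [["Car" if pixel in CAR_BRANDS else pixel for pixel in row]
                 for row in instance_mask]

print(semantic_mask)  # [['Car', 'Car', 'Sky'], ['Car', 'Sky', 'Sky']]
```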
So, how does Visual Prompting differ from Segmentation? In short, Visual Prompting only requires partial labeling, while Segmentation requires full labeling.
Is Visual Prompting going to replace Segmentation?
Not to worry, the Segmentation Project Type isn't going anywhere and will continue to be available.
Where is the "Nothing to Label" option?
Visual Prompting doesn't have a "Nothing to Label" option because the Model only takes into consideration labeled pixels.
Why are there no metrics?
For the other Project Types in LandingLens, metrics are created by comparing the overlap between your labels (the Ground Truth) and the Model's Predictions. The metrics tell you how often the Model predicted correctly.
Since Visual Prompting only requires you to label small areas of the object you want to detect—and not the full object—the Model doesn't have a full Ground Truth to compare itself against. The metrics we use for other Project Types could be misleading.
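To make the overlap idea concrete, here is a minimal Intersection-over-Union (IoU) calculation, a common segmentation metric, on hand-made binary masks. It shows why partial labels would drag the score down even when the Prediction is actually good:

```python
# IoU on binary pixel masks (1 = object, 0 = background), flattened to lists.

def iou(ground_truth, prediction):
    """Intersection-over-Union of two equal-length binary masks."""
    inter = sum(1 for g, p in zip(ground_truth, prediction) if g and p)
    union = sum(1 for g, p in zip(ground_truth, prediction) if g or p)
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred = [1, 1, 1, 0, 0, 0]

# Full ground truth (Segmentation style): the whole object is labeled.
full_gt = [1, 1, 1, 1, 0, 0]
print(iou(full_gt, pred))     # 0.75

# Partial labels (Visual Prompting style): only some object pixels marked.
partial_gt = [1, 0, 0, 1, 0, 0]
print(iou(partial_gt, pred))  # 0.25 -- looks much worse, yet the Prediction is fine
```

Because the partial labels cover only a fraction of the object, the score penalizes correct Predictions for pixels that were simply never labeled, which is exactly why the usual metrics would be misleading here.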
Why can't I fine-tune the hyperparameters?
We've pre-selected hyperparameters to optimize the speed and accuracy of Visual Prompting, so that you can focus on labeling data.
Does Visual Prompting offer Agreement-Based Labeling?
No. Agreement-Based Labeling, which is available for other Project Types, allows multiple users to label the same set of images to analyze the consistency of their labels. That approach doesn't apply to Visual Prompting because you only need to label a few small areas of an object for the Model to learn to detect it. If we used Agreement-Based Labeling in Visual Prompting, each user could label a different area of the same object. Even though each label would be different, they could all be correct.
Why don’t I see Predictions on an image I just uploaded?
If you run the Model and later upload a new image, the Model doesn't automatically process that image. You must re-run the Model to generate Predictions for it.