Deployment Options
  • 07 Jul 2023



After you are happy with the results of your trained Model, you are ready to use it. To use a Model, you deploy it: you publish the Model to a virtual location (an endpoint) so that you can upload images to it. When you upload images, the Model runs inference, which means that it detects what it was trained to look for.

You can run inference up to 40 times per minute per endpoint. If you exceed that limit, the API returns a 429 Too Many Requests response status code. We recommend implementing error handling or a retry function in your application. If you have questions about inference limits, please contact your Landing AI representative.
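A client-side guard against the 429 limit can be sketched with Python's standard library. The function names and backoff schedule below are illustrative, not part of the LandingLens API:

```python
import time
import urllib.request
import urllib.error

def backoff_delay(attempt: int, base: float = 1.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... for attempts 0, 1, 2, ..."""
    return base * (2 ** attempt)

def run_inference_with_retry(request: urllib.request.Request,
                             max_retries: int = 5) -> bytes:
    """Send an inference request, retrying on HTTP 429 Too Many Requests."""
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(request) as response:
                return response.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise            # only rate-limit errors are retried
            time.sleep(backoff_delay(attempt))
    raise RuntimeError("rate limit retries exhausted")
```

Waiting 1, 2, 4, ... seconds between attempts keeps the client comfortably under the 40-requests-per-minute ceiling once throttling starts.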

Deployment Options

There are a few ways to deploy your LandingLens Model: Cloud Deployment & Cloud Inference, LandingEdge, and the Docker Application.

Use the overview below as a reference when choosing a deployment option. See the Feature Details section below for details on each feature.

Operating System Compatibility:
  • Cloud Deployment & Cloud Inference: Windows, Linux, & Mac
  • LandingEdge: Windows & Linux
  • Docker Application: Linux Ubuntu 20.04 or later (AMD64)

Features compared (availability varies by deployment option):
  • Metadata Input/Output
  • Drag and Drop Images for Inference
  • Results Dashboard (only available for LandingEdge v1)
  • Offline Mode
  • Folder Input/Output
  • Post-Processing Scripts
  • PLC Communications
  • Multi-Inspection Point
  • HiLo Shadow & Production Mode

The only deployment option for Visual Prompting is Cloud Deployment.

Feature Details

Operating System Compatibility
  • Cloud Deployment and Cloud Inference are only available from LandingLens.
  • Docker is available in multiple environments.
  • LandingEdge is available for Windows and Linux. For more information, see the LandingEdge system requirements documentation.
Input/Output Folders
The application can continuously look for new images added to the Input folder, then write inference results in JSON to the Output folder.
API
The application can receive API calls to upload images. API results are returned in JSON.
  • Cloud Deployment and Cloud Inference support POST APIs.
  • LandingEdge and Docker support Web APIs.
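As a rough sketch of calling a POST-style inference API from Python: the URL, endpoint ID, and apikey header below are placeholders for illustration, not documented LandingLens values — substitute the values shown for your deployed endpoint.

```python
import urllib.request

# Placeholder URL and endpoint ID -- replace with your endpoint's values.
ENDPOINT_URL = "https://example.invalid/inference?endpoint_id=YOUR_ENDPOINT_ID"

def build_inference_request(image_bytes: bytes,
                            api_key: str) -> urllib.request.Request:
    """Build a POST request that uploads one image for inference."""
    return urllib.request.Request(
        ENDPOINT_URL,
        data=image_bytes,
        headers={
            "apikey": api_key,  # assumed auth header name
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )
```

The request can then be sent with `urllib.request.urlopen(...)` and the JSON response parsed with `json.loads`.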
Input/Output Metadata
The application can receive predefined image metadata. If the images are uploaded to LandingLens, the predefined metadata is uploaded with them.

Example metadata:

{
  "imageId": "28587.jpg",
  "inspectionStationId": "11",
  "locationId": "California#11",
  "captureTimestamp": "2021-10-11T12:00:00.00000"
}
Webcam
The application can auto-detect attached webcams. If multiple webcams are connected to your computer, the application automatically picks one.

GenICam
The application can auto-detect GenICam cameras. You must install the camera driver before the application can detect the GenICam.

PLC Communication
The application supports Rockwell PLCs (Programmable Logic Controllers).

Results Dashboard
The application can display live results for you to monitor inference.

HiLo Shadow & Production Mode
The application can give users the ability to mark images as:
  • OK (good) or NG (not good) in Shadow mode.
  • Accept or Reject in Production mode.

Offline Mode
The application can run inference offline.

Multi-Inspection Point Support
The application supports multiple Inspection Points.

Drag and Drop Images for Inference
The application allows users to drag and drop images one at a time to run inference.

Post-Processing Scripts
The application supports post-processing scripts.
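As an illustration of what a post-processing script might do, here is a hypothetical confidence filter. The result layout ({"predictions": [{"label", "score"}]}) is assumed for the example, not the application's documented schema:

```python
def filter_predictions(result: dict, min_confidence: float = 0.5) -> dict:
    """Drop predictions whose confidence score is below a threshold.

    `result` uses an assumed layout: {"predictions": [{"label", "score"}]}.
    """
    kept = [p for p in result.get("predictions", [])
            if p.get("score", 0.0) >= min_confidence]
    return {**result, "predictions": kept}
```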
