Deployment Options
- Updated on 07 Jul 2023
After you are happy with the results of your trained Model, you are ready to use it. To use a Model, you deploy it, which means you put the Model in a virtual location so that you can upload images to it. When you upload images, the Model runs inference, which means that it detects what it was trained to look for.
Note:
You can run inference up to 40 times per minute per endpoint. If you exceed that limit, the API returns a 429 Too Many Requests response status code. We recommend implementing error handling or a retry function in your application. If you have questions about inference limits, contact your Landing AI representative or sales@landing.ai.
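The retry behavior recommended above can be sketched as exponential backoff around the inference call. This is a minimal illustration, not part of any Landing AI SDK; the request itself is abstracted as a callable because its details depend on your deployment:

```python
import time

def infer_with_retry(send_request, max_retries=5, base_delay=1.0):
    """Call send_request() and retry with exponential backoff on HTTP 429.

    send_request is any zero-argument callable that returns
    (status_code, body); the request details depend on your deployment.
    """
    for attempt in range(max_retries):
        status, body = send_request()
        if status != 429:
            return status, body
        # Back off 1s, 2s, 4s, ... (scaled by base_delay) before retrying.
        time.sleep(base_delay * (2 ** attempt))
    return status, body

# Simulated endpoint: rate-limited twice, then a successful response.
responses = iter([(429, ""), (429, ""), (200, '{"predictions": []}')])
status, body = infer_with_retry(lambda: next(responses), base_delay=0.01)
```

A production handler might also honor a `Retry-After` header if the service returns one, rather than using a fixed backoff schedule.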
There are a few ways to deploy your LandingLens Model:
- Cloud Deployment: Deploy your Model to a virtual environment hosted by LandingLens. Use API calls or Mobile Inference to send images to your Model.
- LandingEdge: Use the LandingEdge application to communicate with edge devices, industrial cameras, and programmable logic controllers (PLCs).
- Cloud Inference: This is the "legacy" version of Cloud Deployment, and is available to Classic Flow users only.
- Docker Application: This is available to Classic Flow users only.
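For Cloud Deployment, each inference is an HTTP call. The sketch below shows only the general shape of assembling such a request; the base URL, endpoint ID, and `apikey` header name are placeholders, so copy the exact API command shown for your deployment in LandingLens:

```python
# Sketch: assemble the pieces of a cloud inference request (not sent here).
# The base URL, endpoint ID, and "apikey" header are placeholders --
# use the exact API command from your deployment page in LandingLens.
from urllib.parse import urlencode

def build_inference_request(base_url, endpoint_id, api_key):
    """Return the URL and headers for one inference call."""
    url = f"{base_url}?{urlencode({'endpoint_id': endpoint_id})}"
    headers = {"apikey": api_key}
    return url, headers

url, headers = build_inference_request(
    "https://example.invalid/inference", "my-endpoint-id", "my-api-key")
```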
Use the table below as a reference when choosing a deployment option. See the Feature Details section below for more information about each feature.
Feature | Cloud Deployment & Cloud Inference | LandingEdge | Docker Application |
---|---|---|---|
Operating System Compatibility | Windows, Linux, & Mac | Windows & Linux | Linux Ubuntu 20.04 or later (AMD64) |
API | ✓ | ✓ | ✓ |
Metadata Input/Output | ✓ | ✓ | ✓ |
Drag and Drop Images for Inference | ✓ | ✓ | |
Results Dashboard | ✓ | ✓ | |
Webcam | ✓ | ✓ (LandingEdge v1 only) | |
Offline Mode | ✓ | ✓ | |
Folder Input/Output | ✓ | ✓ | |
Post-Processing Scripts | | ✓ | |
GenICam | | ✓ | |
PLC Communications | | ✓ | |
Multi-Inspection Point | | ✓ | |
HiLo Shadow & Production Mode | | ✓ | |
Note:
The only deployment option for Visual Prompting is Cloud Deployment.
Feature Details
Feature | Description |
---|---|
Operating System Compatibility | Cloud Deployment & Cloud Inference: Windows, Linux, & Mac. LandingEdge: Windows & Linux. Docker Application: Linux Ubuntu 20.04 or later (AMD64). |
Folder Input/Output | The application can continuously watch for new images added to the Input folder, then write inference results in JSON to the Output folder. |
API | The application can receive API calls to upload images. Inference results are returned in JSON. |
Metadata Input/Output | The application can receive predefined image metadata. The predefined metadata is also uploaded if the images are uploaded to LandingLens. |
Webcam | The application can auto-detect attached webcams. If multiple webcams are connected to your computer, the application automatically selects one. |
GenICam | The application can auto-detect GenICam-compliant cameras. You must install the camera driver before the application can detect the camera. |
PLC Communication | The application supports Rockwell PLCs (programmable logic controllers). |
Results Dashboard | The application can display live results so that you can monitor inference. |
HiLo Shadow & Production Mode | The application can give users the ability to mark images as HiLo, Shadow, or Production. |
Offline Mode | The application can run inference without an internet connection. |
Multi-Inspection Point | The application supports multiple Inspection Points. |
Drag and Drop Images for Inference | Users can drag and drop images one at a time to run inference. |
Post-Processing Scripts | The application supports running post-processing scripts on inference results. |
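The Folder Input/Output behavior described above can be sketched as a simple polling loop. This is a minimal illustration of the pattern, not how LandingEdge or the Docker Application is actually implemented; `run_inference` stands in for the real model call:

```python
# Sketch of the Folder Input/Output pattern: poll an input folder for new
# images and write one JSON result file per image to an output folder.
import json
import time
from pathlib import Path

def watch_folder(input_dir, output_dir, run_inference,
                 poll_seconds=1.0, max_cycles=None):
    """Repeatedly scan input_dir; write JSON results to output_dir."""
    input_dir, output_dir = Path(input_dir), Path(output_dir)
    seen = set()
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for image in sorted(input_dir.glob("*.png")):
            if image.name in seen:
                continue
            result = run_inference(image)  # stand-in for the model call
            (output_dir / (image.stem + ".json")).write_text(json.dumps(result))
            seen.add(image.name)
        cycles += 1
        time.sleep(poll_seconds)

# Demo with a dummy model and temporary folders:
import tempfile
tmp = Path(tempfile.mkdtemp())
inp, outp = tmp / "input", tmp / "output"
inp.mkdir()
outp.mkdir()
(inp / "part.png").write_bytes(b"")  # stand-in for a real image
watch_folder(inp, outp, lambda p: {"image": p.name, "predictions": []},
             poll_seconds=0, max_cycles=1)
```

A real deployment would also handle partially written files and other image formats; the point here is only the watch-then-write JSON loop.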