Deployment Options

After you are happy with the results of your trained model, you are ready to use it! To use a model, you deploy it, which means that you put the model in a virtual location so that you can then upload images to it. When you upload images, the model runs inference, which means that it detects what it was trained to look for.

Deployment Options

There are a few ways to deploy your LandingLens model:

  • Cloud Deployment: Deploy your model to a virtual environment hosted by LandingLens. Use API calls or Mobile Inference to send images to your model. 
  • Landing AI Deploy Docker: Download our Docker image and run it as a container. Deploy your model and run inference in this self-hosted container.
  • LandingEdge: Use the LandingEdge application to communicate with edge devices, industrial cameras, and programmable logic controllers (PLCs).

                Cloud Deployment    Container Deployment    LandingEdge
Latency         High                Low                     Low
Throughput      Configurable        30 FPS                  30 FPS
Pricing         Per inference       Flex license            Machine license

When to Use Cloud Deployment

Cloud Deployment is a scalable and cost-effective deployment solution. It can accommodate surges in inference traffic up to a configurable rate limit, with charges incurred per inference. Cloud Deployment is a preferred option for managing variable inference loads; a minimal example of calling a cloud endpoint follows the list below.

Use Cloud Deployment if you:

  • Want to start running inference without purchasing GPU machines or managing deployments.
  • Have good network connectivity from your inferencing point to the cloud.
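
For example, a Cloud Deployment endpoint can be called with a few lines of code. The sketch below assumes the landingai Python package and its Predictor class; the endpoint ID, API key, and image path are placeholders, and parameter names may differ in your SDK version, so check the current library documentation.

    # Minimal sketch: send one image to a LandingLens Cloud Deployment endpoint.
    # Assumes `pip install landingai`; the endpoint ID, API key, and image path
    # below are placeholders, and the Predictor interface may vary by version.
    from PIL import Image
    from landingai.predict import Predictor

    ENDPOINT_ID = "your-endpoint-id"  # shown on the Deploy page after you deploy a model
    API_KEY = "your-api-key"          # generated in your LandingLens account settings

    predictor = Predictor(ENDPOINT_ID, api_key=API_KEY)

    image = Image.open("sample.jpg")       # any image you want to run inference on
    predictions = predictor.predict(image)

    for prediction in predictions:
        print(prediction)  # each prediction describes what the model detected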

When to Use Docker Deployment

Docker Deployment is the most flexible deployment option for developers who build mission-critical solutions or process high-throughput, continuous inference loads. It can be deployed in your private cloud, on-premises, or at the edge; a sketch of sending an image to a self-hosted container follows the list below.

Use Docker Deployment if you: 

  • Have a deployment infrastructure and want to add inferencing capabilities to it.
  • Are looking for deployment automation in a container-based infrastructure.
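
For example, once the container is running, images can be sent to its local inference endpoint over HTTP. The sketch below is a rough illustration only: the host, port, and route are hypothetical placeholders, so use the endpoint documented for your Deploy Docker version.

    # Rough sketch: POST an image to a self-hosted LandingLens Deploy Docker container.
    # The URL below (host, port, and route) is a hypothetical placeholder; substitute
    # the inference endpoint documented for your container version.
    import requests

    CONTAINER_URL = "http://localhost:8000/images"  # placeholder local endpoint

    with open("sample.jpg", "rb") as image_file:
        response = requests.post(CONTAINER_URL, files={"file": image_file})

    response.raise_for_status()
    print(response.json())  # inference results returned by the container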

When to Use LandingEdge

LandingEdge is an application that lets you deploy to an edge computer, such as an industrial PC. 

Use LandingEdge if you:

  • Want to build machine vision solutions using specialized hardware, like industrial cameras and PLCs.
  • Want to build mission-critical solutions at the edge.

Compare Deployment Options

Use the table below as a reference when choosing a deployment option.

Feature                                                   Cloud Deployment       Docker                         LandingEdge
General
Hosting                                                   LandingLens-hosted     Self-hosted                    Self-hosted
Operating system                                          Linux, Mac, Windows    Any                            Linux, Windows
Can run inference when not connected to the internet
Can run inference on Visual Prompting models
Can see live results in the user interface                                       (there is no user interface)
Can upload image metadata to images
Maximum inference calls per minute                        40                     Depends on system              Depends on system
Can communicate with PLCs
Can deploy on NVIDIA Jetson devices
Can deploy on ARM64 processors
Running inference consumes credits
Send Images for Inference
Drag and drop images
GenICam
Images from webcam
Video (will convert to images)                            (via Python library)
Select from a designated folder (folder watcher)
Send images via POST APIs
Post-Inference Features
Apply post-processing scripts                             (via Python library)   (via Python library)
Can view inferenced images & predictions in LandingLens                          (must pass --upload flag)      (must enable "Upload results to LandingLens")
Can save inferenced images to a local folder
