Mobile Inference
  • 23 Oct 2024
  • 3 Minutes to read

This article applies to these versions of LandingLens:

  • LandingLens
  • LandingLens on Snowflake

LandingAI is excited to share that we’ve developed a new feature—Mobile Inference! This feature gives you an additional way to use LandingLens models in the real world.

After you develop a model and deploy it to an endpoint, you can then generate a QR code. Scanning the QR code with your phone opens the LandingLens app in your browser.

From there, you can take a photo or select a photo from your phone. The app sends the photo to the cloud-hosted model to run inference on it, and then the results display immediately on your phone! If you're using Mobile Inference for an Object Detection project, you'll also see the number of objects detected.

The images and predictions are saved back to LandingLens, so that you can review all results in a centralized location. You can also share the QR code with your colleagues, so that you can all collaborate on running inference.

Note:
You can run inference up to 40 times per minute per endpoint. If you exceed that limit, the API returns a 429 Too Many Requests response status code. We recommend implementing an error handling or retry function in your application. If you have questions about inference limits, please contact your LandingAI representative or sales@landing.ai.
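The note above recommends adding error handling or retry logic for 429 responses. Below is a minimal sketch of a retry-with-exponential-backoff helper; the helper is generic and the callable you pass in (your actual inference request) is a placeholder, not part of the LandingLens API.

```python
import time


def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry `call` with exponential backoff while it is rate-limited.

    `call` takes no arguments and returns an object with a
    `status_code` attribute (e.g., an HTTP response). Retries
    only on HTTP 429 Too Many Requests.
    """
    for attempt in range(max_attempts):
        response = call()
        if response.status_code != 429:
            return response
        # Back off: base_delay, 2x, 4x, ... before the next attempt.
        time.sleep(base_delay * (2 ** attempt))
    return response  # Still 429 after all attempts; let the caller decide
```

You would wrap each inference request in `with_retries(...)` so that short bursts above the 40-requests-per-minute limit recover on their own instead of failing.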

Use Mobile Inference

Mobile Inference is a part of Cloud Deployment. To use Mobile Inference, you first deploy to an endpoint, which is the virtual device that will host the model.

To use Mobile Inference:

  1. Set up an endpoint, as you would for Cloud Deployment. 
  2. After creating an endpoint, the Mobile Inference tile appears on the right side of your screen (you might have to scroll over to see it). In this tile, click Get QR Code.
    Get QR Code
    Note:
    If you've already run inference, the Mobile Inference tile no longer displays. Instead, click the QR Code icon next to the endpoint name to open the QR Code pop-up window.
    Click the QR Code Icon
  3. A pop-up window opens with a QR code that is unique to this deployment.
  4. Open your phone’s camera app and point it at the QR code.
  5. Tap the pop-up that appears at the top of your phone’s screen.
    Scan the QR Code with Your Camera App
  6. The LandingLens app opens in your browser. Click Select a Photo and select one of these options to get an image:
    • Photo Library: Select an image saved to your phone.
    • Take Photo: Use the Camera app on your phone to take a photo.
    • Choose File: Select a file saved to your phone.
      Select a Photo
  7. After you select or take a photo, the app sends the photo to the deployed model, and the model runs inference on it. The model’s predictions show up almost instantaneously as overlays on your phone. If you're using Mobile Inference for an Object Detection project, you'll also see the number of objects detected.
    Predictions Display on Your Phone
  8. If you have more samples lined up, you can keep taking photos and running predictions; just click Select a Photo to continue.
  9. When you're done running inference, you can close the tab on your browser.
  10. All the photos you’ve taken and their predictions display on the Deploy page back in LandingLens. This set includes results from all the ways you ran inference for this deployment, including via APIs and Mobile Inference.
    View All Images and Predictions
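Step 10 mentions that the Deploy page also collects results from inference run via APIs. As a rough illustration of what a programmatic call to the same cloud endpoint might look like, here is a sketch using only the Python standard library. The URL, the `apikey` header name, and the raw-bytes upload format are assumptions based on typical LandingLens cloud deployments, not confirmed by this article; check your endpoint's API documentation for the exact values.

```python
import json
import urllib.request

# Assumed prediction URL; verify against your deployment's docs.
PREDICT_URL = "https://predict.app.landing.ai/inference/v1/predict"


def build_request(endpoint_id, api_key):
    """Assemble the URL and headers for a cloud-inference call.

    Kept separate from the network call so it can be inspected.
    """
    url = f"{PREDICT_URL}?endpoint_id={endpoint_id}"
    headers = {
        "apikey": api_key,  # header name is an assumption
        "Content-Type": "application/octet-stream",
    }
    return url, headers


def run_inference(endpoint_id, api_key, image_path):
    """Send one image to the endpoint and return the JSON predictions."""
    url, headers = build_request(endpoint_id, api_key)
    with open(image_path, "rb") as f:
        req = urllib.request.Request(
            url, data=f.read(), headers=headers, method="POST"
        )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because images sent this way are saved back to LandingLens just like Mobile Inference photos, both channels end up reviewable on the same Deploy page.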

Deactivate a QR Code

You can deactivate the QR code for Mobile Inference at any time by clicking Turn It Off on the QR code pop-up window. When the QR code is deactivated, no one with the QR code can use it to run inference.

Deactivate the QR Code for Mobile Inference

Deactivating the QR code can be useful in multiple situations. For example, you can deactivate it if you have a time-bound project, and don't want to add more data after the end of the project.

To reactivate a QR code, simply follow the procedure in Use Mobile Inference.
