Docker

Deployment with Docker is no longer supported.


Docker Overview

Docker is a packaged model and inference application that can be installed in different environments. Here is more information about the Docker application:

  • Compatible with Linux/Ubuntu 20.04 or greater (AMD64)
  • Deployable on the cloud or on a local Linux GPU box
  • Offers basic inference capabilities
  • Full API control (including remote access)
  • No hardware integrations with cameras, programmable logic controllers, etc.

Download Docker

  1. You can install Docker here.
  2. If you are using the Docker desktop app, set the Docker memory limit to the max. This can fix issues like long app startup times and inference crashing on large images. To do this:
    1. Open the Docker desktop app.
    2. Click the gear at the top.
    3. Select the Resources panel and drag the slider for Memory all the way to the right.
    4. Click Apply & Restart.
      Configure the Docker Memory 
  3. Log in to LandingLens.
  4. To download the app from the platform, send at least one model from Model Iteration to Continuous Learning. The app is a self-contained Docker image that contains everything needed to run inference.
    Add to Continuous Learning 
  5. Go to the Model & Device page and click Deploy on your model in the Models Available to Deploy tab.
    Deploy Your Model 
  6. Click Create a New Device at the bottom to go to the Download App section, where you can download the Docker App.
    Create a New Device 

Run Docker

  1. For all commands below, the placeholders (such as IMAGE_NAME and /ABS/PATH/TO/LOG/DIR/) are the parts you must fill in yourself.
  2. Unpack the app with the following command. Replace PATH/TO/APP.tar with the path to the image you just downloaded. docker load -i PATH/TO/APP.tar
  3. After completion, Docker will say Loaded Image: IMAGE_NAME. Take note of this.
  4. Use 1 of the following commands to run the app, filling in IMAGE_NAME from the previous step. In addition, please specify the absolute path to a local folder to store the logs, to aid in debugging down the line. For customer support, we will first ask for the logs in this folder.
    Note:
    As of v20210226.2, the app will run by default against production. If you are on Windows, provide all paths in the format DRIVE_LETTER:/path/to/folder, e.g. C:/path/to/folder. Also, ensure that the terminal is running in the same drive as the folder.
    • To run with only API support: docker run -d -p 8080:3000 -p 8081:3001 -v /ABS/PATH/TO/LOG/DIR/:/root/app/logs/ --name inference-engine IMAGE_NAME
    • To run with only API support against staging environment: docker run -d -p 8080:3000 -p 8081:3001 -e STAGE=staging -v /ABS/PATH/TO/LOG/DIR/:/root/app/logs/ --name inference-engine IMAGE_NAME
    • To run with Metadata Tracking support: docker run -d -p 8080:3000 -p 8081:3001 -e METADATA_TRACKING=True -v /ABS/PATH/TO/LOG/DIR/:/root/app/logs/ --name inference-engine IMAGE_NAME
    • To run with HiLo (Human in the Loop) support in one of two modes (shadow or production): docker run -d -p 8080:3000 -p 8081:3001 -e HILO=shadow -v /ABS/PATH/TO/LOG/DIR/:/root/app/logs/ --name inference-engine IMAGE_NAME
    • If no mode is specified for HILO, the experimental mode is used, which behaves as if there is no HiLo support. If the mode is set to shadow or production, METADATA_TRACKING is turned on automatically.
    • To run with non-image-upload support (a feature that allows the user to only upload inference results without the original images): docker run -d -p 8080:3000 -p 8081:3001 -e NONIMAGE_UPLOAD=True -v /ABS/PATH/TO/LOG/DIR/:/root/app/logs/ --name inference-engine IMAGE_NAME
    • To run with API and local folder support: docker run -d -p 8080:3000 -p 8081:3001 -v /ABS/PATH/TO/LOG/DIR/:/root/app/logs/ -v /ABS/PATH/TO/INPUT/:/mnt/local_input/ -v /ABS/PATH/TO/OUTPUT/:/mnt/local_output/ --name inference-engine IMAGE_NAME For this, you will need to fill in the absolute paths to the local input and output folders on your device. The app will now be able to process batches of data from a local folder on your device.
    • To run with USB webcam support (LINUX ONLY): docker run -d -p 8080:3000 -p 8081:3001 -v /ABS/PATH/TO/LOG/DIR/:/root/app/logs/ -e FEATURES=webcam --device /dev/videoN:/dev/video0 --name inference-engine IMAGE_NAME For this, replace /dev/videoN with the device corresponding to your webcam. To see which webcam devices are available, install v4l-utils (sudo apt-get install v4l-utils) and then run v4l2-ctl --list-devices.
  5. You can now go to http://127.0.0.1:8080/ in your browser to access the app. If all of the above steps were done correctly, you should see the page below. (A scripted way to confirm the app is reachable is sketched after this list.)
    Inference Engine 
  6. To stop and remove the app, run: docker container stop inference-engine && docker container rm inference-engine
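
If you would rather verify step 5 from a script than from the browser, here is a minimal sketch (assuming the default 8080 port mapping used in the commands above) that polls the app's root URL until it responds:

```
import time

import requests

APP_URL = "http://127.0.0.1:8080/"  # default port mapping from the docker run commands above


def wait_for_app(timeout_s=120, interval_s=5):
    """Poll the app's root URL until it responds, or raise after timeout_s seconds."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            if requests.get(APP_URL, timeout=5).ok:
                print("Inference app is up.")
                return
        except requests.ConnectionError:
            pass  # the container is still starting up
        time.sleep(interval_s)
    raise TimeoutError(f"App did not respond at {APP_URL} within {timeout_s} seconds")


if __name__ == "__main__":
    wait_for_app()
```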

Configure Docker

  1. The Inference app needs to be registered with a LandingLens project using user credentials from the platform. To access these credentials, click “Authentication” under the user profile in the upper right corner:
    Authentication 
  2. If this is your first time authenticating, generate an API Key and keep it safe for any future registration process. For more information on generating API Keys, go here.
    Generate API Key 
  3. Once you have the user credentials, register the Inference app with the app name and credentials, as shown below:
    Register Your App 
  4. After your app is successfully registered, its status displays as “Online” and it appears under “Devices” on the platform.
    New Device Listed 
  5. After successful registration, your app is ready to subscribe to the Model(s) of a Project. Select the project to subscribe to from the project selection drop-down menu in the app dashboard, as shown below:
    Subscribe Your App to a Project 
  6. After a successful subscription, the status in your app’s Engine section changes from “Stopped” to “Running”, and the app is ready for automatic model download and inference:
    The App Engine Status Changes to Running 
  7. A successfully registered and subscribed app continuously checks for model updates and automatically pulls the latest models configured on the platform, without needing to be restarted. The device will also be visible under the project’s deployment.
    Devices List 

Run Inference

Before attempting any inference, navigate to the web console and ensure that the engine status says "Running".  Inference will not work if the app is still starting up or "applying changes".

As of today, the following methods of inference are available:

  • [Drag and Drop] Run inference on single images by dragging and dropping them into the app.
    Drag and Drop to Run Inference  
  • [API] Run inference using a local client application, such as a simple HTTP client, against the Inference App endpoint. Sample Python code using the requests library:
import json
import requests
from mimetypes import guess_type
from pathlib import Path


def infer(filename):
    url = "http://127.0.0.1:8080/api/v1/images"
    with open(Path(filename).resolve(), "rb") as f:
        # guess_type() returns a (type, encoding) tuple; pass only the MIME type to requests
        files = [("file", (Path(filename).name, f, guess_type(filename)[0]))]
        metadata = {
            "imageId": "28587.jpg",
            "inspectionStationId": "11",
            "locationId": "factory_floor#1",
            "captureTimestamp": "2021-10-11T12:00:00.00000"
        }
        payload = {"metadata": json.dumps(metadata)}
        response = requests.request("POST", url, files=files, data=payload)
        return json.loads(response.text)


print(infer("PATH/TO/AN/IMAGE.png"))

Sample Response

Object Detection

{
   "code": 0,
   "message": "",
   "data": {
       "predictions": {
           "0333a1cf-e828-4f6a-a846-1eb87ea70ddd": {
               "score": 0.47807765007019043,
               "labelName": "Cat",
               "labelIndex": 1,
               "defectId": 339399,
               "coordinates": {
                   "xmin": 430,
                   "ymin": 788,
                   "xmax": 3358,
                   "ymax": 3689
               }
           }
       },
       "type": "ObjectDetectionPrediction",
       "latency": {
           "preprocess_s": 0.0262603759765625,
           "infer_s": 0.7816088199615479,
           "postprocess_s": 6.198883056640625e-06,
           "serialize_s": 0.0005638599395751953
       }
   }
}

Segmentation

{
   "code": 0,
   "message": "",
   "data": {
       "predictions": {
           "imageHeight": 338,
           "imageWidth": 229,
           "numClasses": 4,
           "encoding": {
               "algorithm": "rle",
               "options": {
                   "map": {
                       "Z": 0,
                       "N": 1
                   }
               }
           },
           "bitmaps": {
               "36ac0a33-e5da-1017-4814-3a315687e5a4": {
                   "score": 0.277986546569967,
                   "labelName": "Cat",
                   "labelIndex": 1,
                   "defectId": 553471,
                   "bitmap": "<SOME RLE ENCODED BITMAP>"
               },
               "63d719f1-033d-4c2d-0fd1-ddf1fd73cde1": {
                   "score": 0.5281267794435738,
                   "labelName": "Dog",
                   "labelIndex": 2,
                   "defectId": 553472,
                   "bitmap": "<SOME RLE ENCODED BITMAP>"
               },
               "50d0f2b4-ad28-7917-1a2b-5b094513d011": {
                   "score": 0.2895060772370499,
                   "labelName": "刀丝",
                   "labelIndex": 3,
                   "defectId": 786819,
                   "bitmap": "<SOME RLE ENCODED BITMAP>"
               }
           }
       },
       "type": "SegmentationPrediction",
       "latency": {
           "preprocess_s": 0.0005967617034912109,
           "infer_s": 0.24967670440673828,
           "postprocess_s": 6.67572021484375e-06,
           "serialize_s": 0.3327920436859131
       }
   }
}


Note:
These run-length-encoded bitmaps look like "1373Z1N1831Z1N1144Z1N686Z1N1831Z1N1144Z1N..."
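
The run-length format is only shown by example, but a reasonable reading is that each run is a decimal count followed by a character from the encoding map (Z for background pixels, N for pixels belonging to the class), laid out in row-major order over imageHeight × imageWidth pixels. Under those assumptions, a minimal decoding sketch looks like this:

```
import re

import numpy as np


def decode_rle_bitmap(bitmap, image_height, image_width, char_map):
    """Expand an RLE string such as "1373Z1N1831Z..." into a 2D uint8 mask.

    Assumes each run is a decimal count followed by one character from the
    encoding map (e.g. {"Z": 0, "N": 1}) and that pixels are in row-major order.
    """
    flat = []
    for count, char in re.findall(r"(\d+)(\D)", bitmap):
        flat.extend([char_map[char]] * int(count))
    if len(flat) != image_height * image_width:
        raise ValueError("decoded length does not match imageHeight * imageWidth")
    return np.array(flat, dtype=np.uint8).reshape(image_height, image_width)


# Example usage against a parsed segmentation response like the sample above:
# preds = response["data"]["predictions"]
# char_map = preds["encoding"]["options"]["map"]
# for pred_id, entry in preds["bitmaps"].items():
#     mask = decode_rle_bitmap(entry["bitmap"], preds["imageHeight"], preds["imageWidth"], char_map)
```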

Classification

{
   "code": 0,
   "message": "",
   "data": {
       "predictions": {
           "0333a1cf-e828-4f6a-a846-1eb87ea70ddd": {
               "score": 0.47807765007019043,
               "labelName": "Cat",
               "labelIndex": 1,
           }
       },
       "type": "ClassificationPrediction",
       "latency": {
           "preprocess_s": 0.0262603759765625,
           "infer_s": 0.7816088199615479,
           "postprocess_s": 6.198883056640625e-06,
           "serialize_s": 0.0005638599395751953
       }
   }
}
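
For object detection and classification responses, data.predictions maps a prediction ID to an entry with a score and labelName, so a small client-side helper (a sketch, not part of the app) can pull out the highest-scoring prediction from a parsed response such as the one returned by infer() above:

```
def top_prediction(response):
    """Return (id, labelName, score) for the highest-scoring prediction in an
    object detection or classification response."""
    predictions = response["data"]["predictions"]
    best_id, best = max(predictions.items(), key=lambda item: item[1]["score"])
    return best_id, best["labelName"], best["score"]


# Example, reusing the infer() helper shown earlier:
# pred_id, label, score = top_prediction(infer("PATH/TO/AN/IMAGE.png"))
# print(f"{label}: {score:.3f}")
```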
  • [Local Folder] Run the inference on a batch of images by copying them into the local input directory. The output will be saved to the local output directory. This requires you to have started the Docker with local folder support.
    • The inference engine will attempt to run inference on files ending in .bmp, .jpg, .jpeg, and .png.
    • The output will preserve folder structure. For example, say the local input and output folders are /home/username/input/ and /home/username/output/. Then, if you copy an image to /home/username/input/a/b/c/image.png, the corresponding output will be saved to /home/username/output/a/b/c/image.png.json.
  • [Webcam] If the webcam is enabled (LINUX ONLY), click the "Webcam" tab. The live preview will be on the left, and the inference result will be on the right. Click the camera button to trigger the webcam. Depending on your preference, you can choose to "Flip Webcam" in the Settings pane. This will mirror the webcam preview horizontally.
    Webcam Live Preview 
  • [Webcam API] If the webcam is enabled (LINUX ONLY), then you can also trigger the inference programmatically.
    • Get only predictions from the webcam:
      • curl --location --request GET 'http://127.0.0.1:8080/api/v1/webcam/images'
      • The response format is the same as the normal image API and does not return the image captured from the webcam.
    • Get predictions, along with the image captured from the webcam:
      • curl --location --request GET 'http://127.0.0.1:8080/api/v1/webcam/images?return_image=true'
      • The image will be appended to the response data in base64 format. For example:

                ```
                 {
                    "code": 0,
                    "message": "",
                    "data": {
                        "predictions": {<SAME FORMAT>},
                        "latency": 916,
                        "imgSrc": "data:image/png;charset=utf-8;base64,iVBORw..."
                    }
                 }
                ```
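
For completeness, here is a minimal Python sketch of the same call (using the requests library, as in the earlier example); it assumes the data URI prefix shown above when saving the captured frame:

```
import base64

import requests


def capture_and_save(output_path="webcam_capture.png"):
    """Trigger webcam inference, save the captured frame, and return the predictions."""
    url = "http://127.0.0.1:8080/api/v1/webcam/images"
    data = requests.get(url, params={"return_image": "true"}).json()["data"]
    # imgSrc is a data URI such as "data:image/png;charset=utf-8;base64,iVBORw..."
    encoded = data["imgSrc"].split("base64,", 1)[1]
    with open(output_path, "wb") as f:
        f.write(base64.b64decode(encoded))
    return data["predictions"]


if __name__ == "__main__":
    print(capture_and_save())
```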

Run Docker in Offline Mode

Offline mode is available for the Docker app with the inference bundle download option.

  1. Make sure Docker is already installed.
  2. Download the CL app Docker from the platform.
  3. Start a command line in the same folder. If you are using Windows, you need to use PowerShell!
  4. Extract the Docker image $ docker load -i cl-app-v1.tar. Note the IMAGE_NAME (should look like inference-engine-amd64:YYYYMMDD.N). Make sure this version is 20210505.0 or newer!
  5. Create config folder $ mkdir config.
  6. Download app.json and profile.json into the config folder. Use the ones here.
  7. Create models folder $ mkdir models.
  8. Download the inference bundle. Unzip it and place the folder in the models directory. Note the MODEL_ID, which looks similar to 7f55684e-6eae-4efc-98f1-120f37f8d4e0. There is a sample bundle ZIP here. To get the inference bundle for your specific project, you will need to ping internal Landing APIs.
  9. Edit config/app.json and fill in "<REPLACE_ME_WITH_MODEL_ID>". It should match the model ID of the model you downloaded.

        ```
        {
           "bundlePath": "tmp/models/bundle_<REPLACE_ME_WITH_MODEL_ID>",
           "bundleId": "<REPLACE_ME_WITH_MODEL_ID>",
           "modelVersion": null,
           "queueSize": 30,
           "uploadResults": false,
        ```

  10. Your directory structure should look like this:
    Directory Structure 
  11. Start the app in offline mode. To do this, use the following command, replacing IMAGE_NAME. You need to be in the same directory. docker run -d -p 8080:3000 -p 8081:3001 -v $PWD/config/:/root/app/config/ -v $PWD/models/:/root/app/tmp/models/ -e OFFLINE=true --name inference-engine IMAGE_NAME If the above command errors out (or on Mac), you may need to replace $PWD with the actual absolute path to the directory.
  12. Navigate to http://localhost:8080 in your web browser.

Upload Inference Results to LandingLens

If you would like to upload results back to LandingLens to view them in the Inference Monitor, please select the Upload Results checkbox and click Apply. Make sure the box is checked BEFORE running inference!

Deregister Docker

If you are done using a device, first stop and remove the app by running docker container stop inference-engine && docker container rm inference-engine

Afterward, you can deregister it from LandingLens through Continuous Learning > Devices > [find your device] > Action > Delete. This will remove it from the devices table. Please be careful - there is no "undo" option!

