Custom Processing
  • 17 Jan 2024



The Custom Processing setting allows you to customize and apply your own automations using C# or Python scripts. This setting has two options:

  • Image Processing: Automate an action that will happen before inference.
  • Results Processing: Automate an action that will happen after inference.

To add a custom script:

  1. Click Edit next to Image Processing or Results Processing.
  2. Select C# or Python from the drop-down menu, depending on the type of script you have.
  3. Replace the instructions in the large field with your custom script.
  4. Click Save.
  5. The Log Messages field will display a success message if the script is accepted. If you receive an error in this field, check your script and try again.
  6. Click Done.

Run Images Through Two Inspection Points

Note:
The sample script for the example in this section is a post-processing Python script for Classification models.

This sample script allows you to run an image through two Inspection Points if the model predicts a certain Class. When the image is checked in the second Inspection Point, the predicted Class can be overridden, depending on what that Inspection Point predicts.

Here is the sample script. To read a use case for this script, see Use Case for Running Images Through Two Inspection Points.

def run(self):
    primary = self.Result.PrimaryResult
    self.Log(primary.Predictions.LabelName)

    if primary.Predictions.LabelName != "primary-trigger-class-name":
        # skip secondary inspection
        return

    # run secondary inspection
    secondary = self.RunOtherInspection("second-inspection-point-name", self.Image)
    secondaryResult = secondary.PrimaryResult.Predictions
    self.Log(secondaryResult.LabelName)

    if secondaryResult.LabelName == "secondary-inspection-class-name":
        self.Log("Overriding result")
        self.Result.SetDerivedResult(ClassificationResult("overridden-class-name", 1, secondaryResult.Score))

The placeholders that you will need to update when using this script are described below.

  • primary-trigger-class-name: The name of the Class that sends the image to the second Inspection Point when that Class is predicted. For example, if you want an image to be checked by a second Inspection Point when the Class "Dog" is detected, replace the primary-trigger-class-name placeholder with Dog.
  • second-inspection-point-name: The name of the second Inspection Point you want applicable images to be sent to. For example, if the second Inspection Point is called puppy-inspection, replace the second-inspection-point-name placeholder with puppy-inspection.
  • secondary-inspection-class-name: The name of the Class that triggers the override when the second Inspection Point predicts it. For example, if you want the result to be overridden when the Class "Puppy" is detected, replace the secondary-inspection-class-name placeholder with Puppy.
  • overridden-class-name: The name of the Class that you want the final result to be overridden to. For example, if you want LandingEdge to override the Class name to "Puppy", replace the overridden-class-name placeholder with Puppy.

Use Case for Running Images Through Two Inspection Points

Let's say you work for an automatic dog door company, and you want your model to detect dogs and puppies so that when a dog or puppy approaches the dog door, the door automatically opens. Your model has these Classes: Dog, Not Dog, and Puppy. This is how the sample script works in this use case.

You run inference on an image, and you receive the Prediction.

  • If the Prediction is "Dog", the image is sent to a second Inspection Point called "puppy-inspection" to check if the dog is a puppy.
    • If the second Inspection Point also predicts a dog, the Class stays as "Dog".
    • If the second Inspection Point predicts the image is of a puppy, the Class is overridden to "Puppy".
  • If the Prediction is "Not Dog", the image does not require a second verification, and the Inspection Point proceeds to the next image.

This is the sample script edited to follow this use case.

def run(self):
    primary = self.Result.PrimaryResult
    self.Log(primary.Predictions.LabelName)
    
    if primary.Predictions.LabelName != "Dog":
        # skip secondary inspection
        return

    # run secondary inspection
    secondary = self.RunOtherInspection("puppy-inspection", self.Image)
    secondaryResult = secondary.PrimaryResult.Predictions
    self.Log(secondaryResult.LabelName)

    if secondaryResult.LabelName == "Puppy":
        self.Log("Overriding result")
        self.Result.SetDerivedResult(ClassificationResult("Puppy", 1, secondaryResult.Score))

Override the Class Name, Score, and Index

This sample script allows you to override the Class name and score of an image. For example, you can manually override the final Predictions based on the model's results and other logic you'd like to add.

Here is the sample script.

def run(self):
    newClass = "new-class"
    newScore = number
    newIndex = 1
    newResult = ClassificationResult(newClass, newIndex, newScore)
    self.Result.SetDerivedResult(newResult)

The placeholders that you will need to update when using this script are described below.

  • new-class: The name of the Class you want to change the current Prediction to. For example, if the model predicted the Class Cat, but it should be the Class Dog, replace the new-class placeholder with Dog.
  • number: The Confidence Score of the Prediction. This entry must be a number between 0 and 1. For example, if you want to change the Confidence Score to 0.876, replace the number placeholder with 0.876.
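The override does not have to be unconditional; you can gate it on the model's own output. The sketch below shows one way to fall back to a different Class when the Confidence Score is low. It is a minimal, stand-alone illustration: the ClassificationResult class is stubbed here so the logic runs outside LandingEdge (inside LandingEdge you would use the built-in class and call self.Result.SetDerivedResult, as in the sample above), and the 0.6 threshold and "Unknown" fallback Class are hypothetical choices.

```python
class ClassificationResult:
    # stub matching the (class name, index, score) argument order
    # used by the sample scripts in this article
    def __init__(self, label_name, index, score):
        self.LabelName = label_name
        self.Index = index
        self.Score = score

def override_if_uncertain(primary, threshold=0.6):
    # if the model is not confident enough, return a fallback result;
    # otherwise pass the original prediction through unchanged
    if primary.Score < threshold:
        return ClassificationResult("Unknown", 0, primary.Score)
    return primary
```

For example, override_if_uncertain(ClassificationResult("Dog", 1, 0.42)) returns the fallback "Unknown" result, while a prediction with a score of 0.95 is passed through unchanged.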

Add Metadata to Images

Note:
The ability to add metadata to images through scripting is available in LandingEdge v2.3.93 and later.

If you save images to LandingLens (by enabling Upload Results to LandingLens), you can run a script to customize the values for this metadata:

  • Image ID
  • Inspection Station ID
  • Location ID

Here is the sample Python post-processing script for customizing metadata values:

def run(self):
    self.Data.Metadata.ImageID = "1234"
    self.Data.Metadata.InspectionStationID = "Inspection Point A"
    self.Data.Metadata.LocationID = "Main Warehouse"

The placeholders that you will need to update when using this script are described below.

  • 1234: The value for the Image ID.
  • Inspection Point A: The value for the Inspection Station ID.
  • Main Warehouse: The value for the Location ID.

The metadata displays in LandingLens when you view the image here: Deploy > Edge Deployment > Historical Data > open image. This metadata is only accessible in LandingLens, and isn't embedded in the image.
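If static strings are not enough, the same metadata fields can be computed at runtime. The sketch below builds a unique Image ID from a UTC timestamp; it is a stand-alone illustration, so the Metadata object is stubbed here (inside LandingEdge you would assign to self.Data.Metadata directly, as in the sample script above), and the timestamp format is an arbitrary choice.

```python
from datetime import datetime, timezone

class Metadata:
    # stub for the metadata object; inside LandingEdge these fields are
    # set via self.Data.Metadata as shown in the sample script above
    def __init__(self):
        self.ImageID = ""
        self.InspectionStationID = ""
        self.LocationID = ""

def tag_image(metadata, station, location):
    # build the Image ID from a UTC timestamp, then set the fixed fields
    metadata.ImageID = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
    metadata.InspectionStationID = station
    metadata.LocationID = location
    return metadata
```

Each processed image then gets a distinct Image ID while the station and location values stay fixed.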


Skip Saving Certain Images

Note:
The ability to skip saving certain images through scripting is available in LandingEdge v2.4.89 and later.

If you save images to LandingLens (by enabling Upload Results to LandingLens), you can run a script to skip saving certain images.

Here is the sample C# post-processing script that skips saving an image:

public void Run()
{
    Data.ForceSkipSaveImage = true; // will not save or upload this image+result
}
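In practice you will usually set this flag conditionally, for example to upload only the images the model flagged as defective. The Python sketch below shows that pattern. Note the assumptions: it presumes LandingEdge exposes the same ForceSkipSaveImage property to Python post-processing scripts (only the C# form is confirmed by this article), the "OK" Class name is hypothetical, and the Data object is stubbed so the logic runs on its own.

```python
class Data:
    # stub for the runtime object that carries the skip-save flag,
    # mirroring Data.ForceSkipSaveImage in the C# sample above
    def __init__(self):
        self.ForceSkipSaveImage = False

def skip_if_ok(data, predicted_label):
    # save (and upload) only images that were NOT classified as "OK"
    if predicted_label == "OK":
        data.ForceSkipSaveImage = True
    return data
```

With this logic, an image predicted as "OK" is neither saved nor uploaded, while any other prediction is kept.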

Read Out Raw Scores from a Classification Model

Note:
The ability to read out raw scores through scripting is available in LandingEdge v2.5.10 and later.

This sample script allows you to read out the raw scores from a Classification model. The script also uses AllClasses, a Dictionary keyed by Class index that provides access to the names and indices of all Classes in the model, so Class information can be retrieved with AllClasses[index].

Here is the sample C# post-processing script for reading out the raw scores from a Classification model:

public void Run()
{
    if (Result.PrimaryResult is not ClassificationResult clf)
    {
        // not a classification result
        // this script is only applicable to classification, so exit the script
        return;
    }

    Log($"All classes in model: {string.Join(", ", AllClasses)}");
    Log($"Class scores for image: [{string.Join(", ", clf.Predictions.RawScores)}]");

    for (int i=0; i<AllClasses.Count; i++)
    {
        Log($"Class '{AllClasses[i].Name}' score = {clf.Predictions.RawScores[i]}");
    }    
}
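The loop at the end of the script pairs each Class with its raw score by index. If you want to do the same readout in a Python post-processing script, the pairing itself can be sketched as below; the helper function is hypothetical and assumes the Class names and raw scores have already been retrieved (from AllClasses and Predictions.RawScores, respectively).

```python
def pair_class_scores(class_names, raw_scores):
    # pair each Class name with its raw score by index,
    # mirroring the for-loop in the C# sample above
    return list(zip(class_names, raw_scores))

# hypothetical values for the dog-door model used earlier in this article
pairs = pair_class_scores(["Dog", "Not Dog", "Puppy"], [0.7, 0.1, 0.2])
```

Each entry in the result is a (name, score) tuple, such as ("Dog", 0.7), which you can then log or act on.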

