• 10 Jul 2024
  • 7 Minutes to read

Article summary

Choosing the right camera for a computer vision application is a critical decision that can significantly impact the performance and success of your project. This article delves into the essential factors to consider when selecting a camera, including resolution, shutter type, and more. Understanding these key aspects will help you make an informed decision and ensure optimal results for your specific needs.

Keep the "Big Picture" in Mind

Choosing the right camera is just one part of setting up your image capture system. Make sure you also select the right lens and lighting.

The guiding principle in developing your camera, lens, and lighting system is that it should lead to reliable and repeatable results. Good data in leads to good data out.

Camera Resolution

It is important to select a camera that provides sufficient resolution to isolate defects or objects in the camera's field of view. Camera resolution is the number of pixels the camera sensor has, typically defined by its width (x-axis) and height (y-axis) in pixels.

To determine what camera resolution you need, follow the steps outlined below.

1. Identify the Size of the Smallest Defect

First, identify the size of the smallest object you need to detect. For example, let’s say you want to detect scratches on a sheet of metal. You measure the samples of scratched metal you have and determine that the smallest scratch is 1 mm.

2. Determine the Number of Pixels Needed to Cover the Smallest Feature

To detect an object accurately, you need a certain number of pixels to cover it. The best practice is to use 5-10 pixels across an object (given excellent visible contrast).

Let’s say you decide to use 10 pixels. 

3. Calculate the Spatial Resolution

Now that you know the size of the smallest object and you’ve determined how many pixels should cover that object, calculate the spatial resolution. Spatial resolution is the real-world size that each pixel represents in the captured image. 

The equation for spatial resolution is:

Spatial resolution = size of smallest feature / number of pixels covering the feature

Using the numbers from our example, the equation is:

Spatial resolution = 1 mm / 10 pixels = 0.1 mm per pixel
This means that in our example, each pixel represents 0.1 mm of the real-world object.
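The calculation above can be written as a small Python helper (a minimal sketch; the function name and units are our own):

```python
def spatial_resolution(smallest_feature_mm: float, pixels_per_feature: int) -> float:
    """Real-world size (in mm) that each pixel represents."""
    return smallest_feature_mm / pixels_per_feature

# Example from this article: a 1 mm scratch covered by 10 pixels.
print(spatial_resolution(1.0, 10))  # 0.1 mm per pixel
```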

4. Determine the Required Camera Field of View (FOV)

A camera’s field of view (FOV) is the area it can capture in one image. You can also think of FOV as the total area you want to inspect. Returning to our example of detecting scratches on metal sheets, determine the size of the area you want captured in each image.

Let’s say that you want the camera to capture an area that is 100 mm x 100 mm.

5. Calculate the Required Camera Resolution

You can now plug in all your data points to determine the required camera resolution for your application.

This is the equation to calculate the camera resolution:

Camera resolution (per axis) = field of view / spatial resolution

To use the information from our example, the equation is:

Camera resolution = 100 mm / 0.1 mm per pixel = 1,000 pixels
Based on these calculations, you would select a camera with a resolution of at least 1000 pixels in the x-axis to ensure you can detect the smallest defects accurately across your entire field of view. Camera sensors come in standard resolutions, and the closest standard resolution at or above 1000 pixels is 1024 x 768.
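The whole five-step calculation can be condensed into one function (a minimal sketch; the rearranged formula is algebraically equivalent to dividing the field of view by the spatial resolution):

```python
import math

def required_resolution(fov_mm: float, smallest_feature_mm: float,
                        pixels_per_feature: int) -> int:
    """Minimum pixel count along one sensor axis.

    Equivalent to fov / spatial_resolution, rearranged into a single
    division to avoid intermediate floating-point error.
    """
    return math.ceil(fov_mm * pixels_per_feature / smallest_feature_mm)

# Example from this article: 100 mm FOV, 1 mm scratch, 10 pixels per scratch.
print(required_resolution(fov_mm=100, smallest_feature_mm=1.0,
                          pixels_per_feature=10))  # 1000 pixels on the x-axis
```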

Rolling Shutter vs. Global Shutter

A camera shutter controls how long the camera’s sensor is exposed to light to capture an image. Cameras are available with either a global shutter or a rolling shutter.

A rolling shutter exposes the image sensor to light sequentially, line by line, which means the top and bottom of the frame are captured at slightly different times. If the object in the frame is moving, like a bottle on a conveyor belt, a rolling shutter can produce a “jello effect,” in which straight vertical lines appear skewed or wavy. Rolling shutter cameras are therefore recommended for fixed or static applications, where nothing moves during the readout and the image cannot be distorted. Rolling shutter cameras are also significantly less expensive than global shutter cameras.

A global shutter is a type of camera sensor technology that captures an entire image at once. The exposure time of a global shutter camera starts and ends at the same time, which means that each pixel in the image sensor is exposed to light simultaneously. Because the entire frame is captured at once, moving objects do not exhibit the “jello effect”, or any skew or distortion. Therefore, global shutters are recommended for high-speed moving applications. 
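The rolling-shutter distortion can be estimated with a back-of-the-envelope calculation: the skew is simply how far the object travels while the sensor reads out from the first line to the last (a sketch with made-up example values for belt speed and readout time):

```python
def rolling_shutter_skew_mm(object_speed_mm_s: float, readout_time_s: float) -> float:
    """Distance the object moves between the first and last sensor line."""
    return object_speed_mm_s * readout_time_s

# A bottle moving at 500 mm/s with a 20 ms frame readout:
print(rolling_shutter_skew_mm(500, 0.020))  # about 10 mm of skew across the frame
```

If the computed skew is larger than your spatial resolution, a global shutter is the safer choice.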

Color vs. Monochrome Sensors

A camera can have either a color sensor, which captures images in full color, or a monochrome sensor, which captures images in black and white. Each can offer specific advantages, and your use case should help determine whether you choose a camera with a color or monochrome sensor.

Following image acquisition, every pixel of the image provides a grey level value. Typical quantization uses 256 grey levels (8-bit), 1,024 levels (10-bit), or 4,096 levels (12-bit). The resulting image is monochrome (black and white).

To display a color image, three coordinates are required: a red coordinate, a green coordinate, and a blue coordinate. The value of each coordinate can be expressed in the same way as a monochrome value, with a range of 256, 1,024, or 4,096 levels per channel. This triples the amount of data required, which in turn can cost image resolution and performance compared to a monochrome sensor.
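The difference in data volume is easy to quantify (a minimal sketch; bit depths as described above):

```python
def image_bytes(width: int, height: int, bits_per_channel: int, channels: int) -> int:
    """Raw (uncompressed) image size in bytes."""
    return width * height * channels * bits_per_channel // 8

mono = image_bytes(1024, 768, 8, channels=1)  # monochrome, 8-bit
rgb = image_bytes(1024, 768, 8, channels=3)   # color, 8 bits per channel
print(mono, rgb, rgb // mono)  # color needs 3x the data
```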

Use a Color Sensor if Color Information Is Important

If the application requires the computer vision system to differentiate between colors, then a color sensor may be the best choice.

Color Sensors Are Slower

A color sensor pixel carries 8 bits for the red channel, 8 bits for the green channel, and 8 bits for the blue channel. That is three times as much data to process as with a monochrome sensor, where each pixel has 8 bits, resulting in higher processing time and a slower frame rate.

Monochrome Sensors Can Achieve Higher Resolutions

A monochrome sensor can typically achieve a higher effective resolution than a color sensor. This is because monochrome sensors do not have a Bayer filter or any other color filters that block light and reduce the amount of light each pixel can capture. 

As a result, each pixel on a monochrome sensor can collect more light, which leads to higher resolution and better image quality, particularly in low-light conditions.

Camera Parameters

When setting up a computer vision application, understanding and optimizing the following camera parameters is crucial for achieving the desired image quality and performance:

- Exposure time
- Frame rate
- Gain

Properly balancing these factors can enhance the efficiency and accuracy of the application.


Exposure Time

Exposure time is the amount of time during which light is allowed to reach the sensor. The higher the value, the more light is captured in the resulting image.

Increasing the exposure time is the easiest way to expose more light to the sensor, although it should be balanced against the presence of noise in the image. An exposure time that is too high may cause motion blur, overexposure, and reduced frame rate.
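One way to sanity-check an exposure time against motion blur is to compute how many pixels a moving object smears across during one exposure (a sketch; the speed, exposure, and spatial resolution values are illustrative):

```python
def motion_blur_px(object_speed_mm_s: float, exposure_s: float,
                   spatial_resolution_mm_per_px: float) -> float:
    """How many pixels a moving object smears across during one exposure."""
    blur_mm = object_speed_mm_s * exposure_s
    return blur_mm / spatial_resolution_mm_per_px

# 500 mm/s object, 1 ms exposure, 0.1 mm per pixel:
print(motion_blur_px(500, 0.001, 0.1))  # about 5 pixels of blur
```

If the blur exceeds one or two pixels, consider a shorter exposure with stronger lighting.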

Frame Rate

The frame rate is the frequency at which a complete image is captured by the sensor, expressed in frames per second (fps). Set the frame rate based on the application. 

For example, if you have a production line that inspects 1,000 bottles per minute, this is how you calculate the minimum frame rate:

Minimum frame rate = 1,000 bottles / 60 seconds ≈ 16.7 fps

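The frame-rate sizing above reduces to a one-line conversion (a minimal sketch):

```python
def min_frame_rate(parts_per_minute: float) -> float:
    """Minimum frames per second needed to image every part once."""
    return parts_per_minute / 60.0

print(min_frame_rate(1000))  # about 16.7 fps; pick a camera rated comfortably higher
```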

Gain

Gain amplifies the signal collected by the image sensor. However, increasing the gain amplifies the image noise by the same factor, so the overall signal-to-noise ratio remains unchanged.
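The point about gain and signal-to-noise ratio can be seen with a toy calculation (illustrative numbers only):

```python
def snr(signal: float, noise: float) -> float:
    """Simple signal-to-noise ratio."""
    return signal / noise

signal, noise = 100.0, 5.0
gain = 4.0
# Gain multiplies signal and noise alike, so the ratio is unchanged:
print(snr(signal, noise))                # 20.0
print(snr(signal * gain, noise * gain))  # still 20.0
```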

Triggering System

A triggering system ensures that the camera captures an image 1) when the object is in the field of view of the camera and 2) when the lighting is correct. Most industrial cameras provide triggering options. Typically, a triggering system works by receiving an input from a trigger sensor, which then activates the camera and any necessary lighting.

A triggering system is especially important when taking images of moving objects, like an item on a conveyor belt, because it ensures that the item is in the field of view of the camera and is correctly lit.
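A common sizing question for a triggering setup is how long to delay the capture after the trigger sensor fires, so the part has moved into the field of view. A sketch, with made-up distances and speeds (in a real system this delay is configured in the camera or controller):

```python
def trigger_delay_s(sensor_to_fov_mm: float, belt_speed_mm_s: float) -> float:
    """Time for a part to travel from the trigger sensor to the camera FOV."""
    return sensor_to_fov_mm / belt_speed_mm_s

# Trigger sensor mounted 150 mm upstream of the FOV, belt moving at 500 mm/s:
print(trigger_delay_s(150, 500))  # 0.3 s delay before firing the camera
```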

Can I Use a Cell Phone Camera or Webcam?

As a reminder, the guiding principle in developing your camera, lens, and lighting system is that it should lead to reliable and repeatable results. 

If your use case is detecting small scratches on semiconductor wafers, then you will most likely need an industrial camera and a well-designed lighting and triggering system to reliably and repeatedly detect these defects.

However, if your goal is to create a mobile app to which people can upload photos to identify the bird species they see while bird-watching, then training the model on images taken with cell phones makes sense.
