Beginner's Guide to Object Detection with Edge Impulse

Studio Dashboard
  1. Choose object classes to be detected
  2. Label bounding boxes and annotate images
  3. Preprocess image data / feature extraction
  4. Choose a model architecture and deep learning framework
  5. Set up the training environment
  6. Set up hyperparameters and start training
  7. Test and validate the model, then tune and retrain as required
  8. Deploy

1. Choose object classes to be detected

The classes I am choosing for this project are an apple and a mug, so we will build an object detector that can distinguish between these two classes, purely for illustrative purposes. The labels we want to predict are therefore "mug" or "apple", and the goal is a model that analyzes an image frame and ideally highlights every apple and mug in the scene, together with the probability that each detection belongs to one of those classes.

Object detection should not be confused with image classification, where the goal is to assign the image as a whole to a class. An object detector outputs the coordinates of each object together with the probability of that object belonging to a particular class. So keep in mind that when we talk about the output of an object detector, we are not just talking about classification into categories but also about locations in the image. This is typically visualized as boxes drawn around detected objects with text labels attached, known as bounding boxes, something we will discuss in a bit more detail shortly.
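
To make the distinction concrete, here is a minimal sketch of what a detector's output might look like. The structure and field names are illustrative only, not Edge Impulse's exact output format:

```python
# Illustrative only: a detector returns a list of detections per frame,
# each combining a class label, a confidence score, and a box location.
detections = [
    {"label": "apple", "confidence": 0.91, "x": 24, "y": 56, "width": 40, "height": 38},
    {"label": "mug", "confidence": 0.78, "x": 102, "y": 33, "width": 55, "height": 60},
]

for d in detections:
    print(f"{d['label']} ({d['confidence']:.0%}) at ({d['x']}, {d['y']}), "
          f"{d['width']}x{d['height']} px")
```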

[Figures: Project Wizard; Images Wizard; Configuring project for object detection; Bounding box example]

2. Label bounding boxes and annotate images

The labelling is done with annotation software that lets you visually draw a bounding box around each object and assign its label. There are both online and offline labelling tools available, and the good news is that Edge Impulse includes a labelling tool built into the Studio.
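
Edge Impulse stores image annotations in a bounding_boxes.labels file that sits alongside the images, a JSON file where each box is a top-left pixel coordinate plus a width and height. Here is a hedged sketch of that format; the file names and numbers are made up for illustration:

```python
import json

# Sketch of the bounding_boxes.labels format. One entry per image file;
# an image can carry multiple labelled boxes (e.g. a mug and an apple).
labels = {
    "version": 1,
    "type": "bounding-box-labels",
    "boundingBoxes": {
        "apple-01.jpg": [
            {"label": "apple", "x": 119, "y": 64, "width": 27, "height": 27},
        ],
        "mug-and-apple-01.jpg": [
            {"label": "mug", "x": 30, "y": 12, "width": 80, "height": 95},
            {"label": "apple", "x": 140, "y": 60, "width": 45, "height": 42},
        ],
    },
}

with open("bounding_boxes.labels", "w") as f:
    json.dump(labels, f, indent=4)
```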

Sample images to upload: https://github.com/aiot-africa/edge_impulse-objectdetection-annotation_importer/tree/main/images
[Figures: Upload data; Training dataset; Label split; Adding labels; Editing labels; Editing bounding boxes; Example of bounding_boxes.labels; Multiple labels in a single file; Exporting raw data; Export example; Export types; Folder structure for importing Pascal VOC; Successful script execution]
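
If your annotations come from another tool in Pascal VOC format, a script can convert them into the bounding_boxes.labels format shown earlier. The sketch below is not the linked repository's actual script, and the folder layout is an assumption; it simply shows what such a conversion involves, since Pascal VOC stores corner coordinates while Edge Impulse wants top-left corner plus width and height:

```python
import json
import os
import xml.etree.ElementTree as ET

def voc_to_edge_impulse(voc_dir: str, out_path: str) -> None:
    """Convert a folder of Pascal VOC XML annotations into a single
    bounding_boxes.labels file (illustrative sketch only)."""
    bounding_boxes = {}
    for name in os.listdir(voc_dir):
        if not name.endswith(".xml"):
            continue
        root = ET.parse(os.path.join(voc_dir, name)).getroot()
        filename = root.findtext("filename")
        boxes = []
        for obj in root.iter("object"):
            bb = obj.find("bndbox")
            xmin = int(float(bb.findtext("xmin")))
            ymin = int(float(bb.findtext("ymin")))
            xmax = int(float(bb.findtext("xmax")))
            ymax = int(float(bb.findtext("ymax")))
            # VOC gives (xmin, ymin, xmax, ymax); Edge Impulse wants
            # the top-left corner plus width/height in pixels.
            boxes.append({
                "label": obj.findtext("name"),
                "x": xmin,
                "y": ymin,
                "width": xmax - xmin,
                "height": ymax - ymin,
            })
        bounding_boxes[filename] = boxes
    with open(out_path, "w") as f:
        json.dump({"version": 1, "type": "bounding-box-labels",
                   "boundingBoxes": bounding_boxes}, f, indent=4)

voc_to_edge_impulse("annotations/", "bounding_boxes.labels")
```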

3. Preprocess image data / feature extraction

Now it's time to head to the Impulse design screen and create an Impulse that looks like the one below by clicking on the empty "add a processing block" fields. Be sure to click "Save Impulse"; you will then see "Image" and "Object Detection" options appear under Impulse Design, where you can tweak the parameters of the feature extraction and the model respectively.
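
Conceptually, the Image processing block resizes each frame to the model's input resolution and scales the raw pixel values into features the network can train on. The Studio does all of this for you; the sketch below only illustrates the idea, and the 320x320 resolution and [0, 1] scaling are assumptions, not a statement of what the block does internally:

```python
import numpy as np
from PIL import Image

def extract_features(path: str, size: int = 320) -> np.ndarray:
    """Hedged sketch: resize an image to the model's input resolution
    and scale pixel values to [0, 1]."""
    img = Image.open(path).convert("RGB").resize((size, size))
    return np.asarray(img, dtype=np.float32) / 255.0

features = extract_features("apple-01.jpg")
print(features.shape)  # (320, 320, 3)
```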

[Figures: Object detection Impulse; Feature extraction parameters; How them apples (generated features)]

4. Choose a model architecture and deep learning framework

Once again, Edge Impulse makes some sound design decisions on your behalf. In terms of model architecture, the object detection block at launch uses MobileNetV2 SSD. In the world of object detection a few architectures are popular due to their performance; you will most likely encounter YOLO, SSD and others. SSD is arguably the better option for real-time object detection, and that is about as much as you need to worry about for this step. If model architectures are your thing and you want to delve deeper, there is the option of Keras expert mode, but we won't be doing that; instead we will stick with the choices that have been made for us. Similarly, the deep learning framework has been chosen for us, and it is TensorFlow Lite. Again, you don't have to worry too much about the details at this point if you don't want to.

5. Set up the training environment

There is no need to worry about things like Jupyter notebooks or training environments, as the Studio is your environment. Let's move on.

6. Set up hyperparameters and start training

The only things you need to consider are the hyperparameters, shown as Training settings, which are accessible by selecting "Object detection" under "Impulse Design". These include settings such as the number of training cycles (epochs) and the learning rate.

[Figures: Neural Network configuration; int8 (quantized) model; float32 (unquantized) model]

7. Test and validate the model, then tune and retrain as required

Now that you have your model trained, it's time to test it and see whether you are happy with its performance. The Live classification tool, described in the next section, is one way to do quick testing, but for proper testing you need to run the model on the test data. Recall that during the data acquisition stage you split the data between training and test sets using an 80/20 split; it's now time to use that 20% to test the model. This is important to verify that your model can classify previously unseen data and hasn't overfit to the training set.
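
If you deploy to a Linux board such as the Raspberry Pi 4 (as in the screenshots below), you can also sanity-check the model outside the Studio. Here is a hedged sketch using the Edge Impulse Linux Python SDK; "modelfile.eim" is a placeholder for the model file you download to the device, "test-apple.jpg" is a hypothetical held-out test image, and the exact result fields may vary between SDK versions:

```python
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL = "modelfile.eim"  # placeholder: the .eim model downloaded to the device

with ImageImpulseRunner(MODEL) as runner:
    model_info = runner.init()
    print("Loaded:", model_info["project"]["name"])

    # Load a held-out test image and convert BGR (OpenCV default) to RGB.
    img = cv2.cvtColor(cv2.imread("test-apple.jpg"), cv2.COLOR_BGR2RGB)
    features, cropped = runner.get_features_from_image(img)

    # For an object detection model, results include bounding boxes.
    res = runner.classify(features)
    for bb in res["result"].get("bounding_boxes", []):
        print(f"{bb['label']} ({bb['value']:.2f}) at "
              f"x={bb['x']} y={bb['y']} w={bb['width']} h={bb['height']}")
```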

[Figures: Test results; Adding a device; Mobile phone classifier; Real-time inference on RPi4; Multi-object detection; Apple detection issues]
