Building Interactive Standalone Edge Impulse Models with MQTT Connectivity on the Nordic Thingy91

Peter Ing
12 min read · Feb 24, 2022


PART 4 of the 4-part article series on using the Nordic Thingy91 with Edge Impulse

Part 1: From Zero to Hero with the Nordic Thingy91 and Edge Impulse

Part 2: Getting started with Edge Impulse and the Thingy91

Part 3: Building the custom Edge Impulse Thingy91 Firmware and Connecting to MQTT

Recapping the first three parts: you were introduced to the Nordic Thingy91 prototyping platform and its architecture and features, and you were then shown the end-to-end process of building a tinyML project with Edge Impulse, based on a motion detector.

Deployment to your Thingy91 was also covered, using both the Edge Impulse online tool to build firmware with your model included and a local build from source using the C++ library. The process of flashing the updated firmware onto the Thingy91 was covered as well. If you have followed along you should therefore be comfortable using the Nordic Thingy91 with Edge Impulse.

In Part 3 the Nordic software development environment was introduced. The Nordic Thingy91 runs the Zephyr Real Time Operating System, and you saw how to use the Menuconfig graphical menu to change the MQTT broker and topic, as well as some of the LTE connection timeout settings, before building the firmware from source.

Finally, you were shown how to use the Edge Impulse CLI or a serial terminal to connect your Thingy91 to the Internet and, of course, to start running your Impulses, in other words your machine learning models, continuously on the device.

If any of these steps are unfamiliar to you, I highly suggest going through this article series from Part 1, which provides enough background to follow the steps in this part.

The Standalone Firmware is a version of the Edge Impulse firmware for the Nordic Thingy91 that adds some handy features for deploying models on the Thingy91 itself, allowing you to use it without needing a PC or the CLI. This opens up the possibility of experimenting with standalone use cases for embedded ML. All of the features are configured before building the firmware, so each time you want to try a new feature you must rebuild the firmware.

One thing to keep in mind before moving on is that the Edge Impulse device firmware exists to fully integrate your device, in this case the Thingy91, into the Edge Impulse Studio. It is an essential link in the chain that lets you comfortably sample and collect data and interact with supported devices from the Edge Impulse Studio. The device firmware also gives you a simple way to run your model and observe performance and results on a real piece of hardware.

It's not intended to be a final production application, but being open source it does provide a jumping-off point for building your own custom applications and deployments. It is generally designed to run tethered to your PC so that you can observe inference results, and this is the same for the firmware across all supported boards and platforms.

This gives you a quick way to evaluate performance and to debug and test your tinyML model before incorporating it into a final production application.

The Standalone Firmware is built on top of the main official Thingy91 firmware and behaves exactly the same as the default firmware. Over and above that, the following features have been added:

  • Start/Stop Inference using the Thingy91’s main push button
  • Start Inference automatically on boot up
  • Change the inference classification threshold
  • Start the LTE and MQTT connection at boot
  • Publish inference stats (like you see in the CLI) as JSON for consumption by your application
  • Set LED and Buzzer Outputs for up to 5 classes
  • Set the output on-time

These features can be used without any knowledge of Zephyr or coding. All you need to do is build and export your model as a C++ library and follow the steps in Part 3 to set up and build locally.

With this you can use your Thingy91 standalone, providing feedback to users when inference happens as well as consuming the ML events in your own application via MQTT.

Feedback is provided through five available LED colors, Red, Green, Blue, Yellow and Magenta, as well as the buzzer with a high-pitched or low-pitched tone.

You can set up two outputs per inference class: the primary output is the LED colour and the secondary is the buzzer, and the two don't have to be used together. The output pulse width lets you control how long the outputs are turned on per inference cycle, depending on your application.

With automatic LTE connectivity you don't need to first attach your Thingy91 to a PC to run AT+CONNECT. A configurable audible low-to-high chirp tone alerts you when you have connected successfully, allowing for fully headless operation of your Thingy91.

Prerequisites

Part 3 of this article series covers how to get set up to build firmware locally. All you need is the correct nRF Connect SDK installed and set up to build Nordic projects, as well as a Git client to fetch the source. I will use the standard command line client.

Clone the repo from https://github.com/peteing/edgeimpulse-standalone-firmware-nordic-thingy91

git clone https://github.com/peteing/edgeimpulse-standalone-firmware-nordic-thingy91.git

You need to export your model as a C++ library from within Edge Impulse Studio and drop the source files into your project as described in Part 3.
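With the model sources in place, the local build works as covered in Part 3. As a quick reminder, it looks something like the following; the non-secure board target name shown here is an assumption and varies between nRF Connect SDK versions, so check yours:

west build -b thingy91_nrf9160_ns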

The default model included with the Edge Impulse Thingy91 firmware hosted on Edge Impulse's Github is an indoor and outdoor detection model. This model is also included in the Standalone Firmware to get you started if you don't have time to go through the motion detector tutorial.

The prebuild configuration tools are Menuconfig and Guiconfig. Menuconfig is ideal if you only have a console or don't want to leave a console, but since you undoubtedly have access to a GUI workstation to use Edge Impulse, I will assume that you have Guiconfig available and use it for the rest of this guide. The steps in Menuconfig are similar.

To launch Guiconfig you run

west build -t guiconfig

Note the -t option versus -b when building. After a few moments you will see the following:

The MQTT Edge Impulse Menu is where you will be working; menus are highlighted in blue as seen above. You can use the + on the side to expand menus and access sub menus. Expanding this menu gives you:

The “Thingy91 App Customization” sub menu is an additional option that appears in this firmware version and is not in the original Edge Impulse firmware. It is used to configure the options described above for standalone mode.

The three groupings are:

  • Inference Settings
  • Connectivity Setup
  • Output Setup

Inference Settings

When you run your Impulse from the CLI or a serial terminal you normally need to first attach your Thingy91 to your PC with the USB cable. Now you can start and stop inference without connecting to a PC, whenever and wherever you are.

Your choices are between using the main push button on the Thingy91 to start and stop inference, or running inference at boot. The last option is the default standard behavior (use the CLI or a serial terminal). Note that you can only select one of these options at a time, and if you select the push button or automatic start you will still be able to view the inference results over the serial terminal, but the CLI and the AT command processor will both be disabled.

Inference can be started and stopped as you wish by pressing and holding the button for over one second. When inference is stopped the main LED is Red to show you that inference is stopped. When you start inference it will turn the Red off and flash Green briefly before turning the LED off to indicate your model is running.

Why not leave the LED on to indicate that inference is running, you may ask? If you intend to use a colour LED to give feedback, a permanently lit LED would be confusing and make the feedback difficult to spot, so having it turn off while running avoids that. When turning inference off, the LED returns to the Red stopped state. Take note of the default value if you don't select this option.
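For the curious, press-and-hold detection like this can be built on the nRF Connect SDK's dk_buttons_and_leds helper. The sketch below is an illustration under that assumption, not the actual firmware source; toggle_inference() is a hypothetical hook into the application:

#include <zephyr.h>
#include <stdbool.h>
#include <dk_buttons_and_leds.h>

static int64_t press_start;
static bool inference_running;

static void toggle_inference(bool run) { /* hypothetical: start/stop the Impulse */ }

static void button_handler(uint32_t button_state, uint32_t has_changed)
{
    if (!(has_changed & DK_BTN1_MSK)) {
        return; /* Not the main button. */
    }
    if (button_state & DK_BTN1_MSK) {
        press_start = k_uptime_get(); /* Button went down: note the time. */
    } else if (k_uptime_get() - press_start > 1000) {
        /* Released after being held for more than one second: toggle. */
        inference_running = !inference_running;
        toggle_inference(inference_running);
    }
}

void main(void)
{
    dk_buttons_init(button_handler);
}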

When using the CLI you are able to see your inference classification results. If you want the inference to actually produce events such as LED or audio outputs, you need to set a threshold value at which a class is deemed to be detected. This could be hardcoded in your application, but to let you work with this firmware with no code changes, the classification threshold is configurable as a percentage, and behind the scenes it is converted to a normalized value between 0 and 1.
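Conceptually the check amounts to something like the sketch below. The classification values come from the Edge Impulse C++ SDK's ei_impulse_result_t; threshold_pct and activate_outputs() are hypothetical names used purely for illustration:

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static void activate_outputs(size_t label_idx); /* hypothetical: drive LED/buzzer */

/* Drive outputs for every class scoring above the configured percentage. */
static void handle_result(const ei_impulse_result_t *result, int threshold_pct)
{
    const float threshold = threshold_pct / 100.0f; /* e.g. 80 -> 0.80 */

    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        if (result->classification[i].value >= threshold) {
            activate_outputs(i);
        }
    }
}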

Connectivity Settings

The LTE connectivity is the really amazing feature of the Thingy91. The default behavior is to manually start the LTE connection using AT+CONNECT to establish communication with the MQTT broker. The default firmware also allows you to change the broker and the topic from the top level MQTT Edge Impulse Demo Menu. For standalone operation you also need a way to initiate LTE connectivity without connecting via the CLI.
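If you prefer editing configuration files over the menus, broker and topic overrides can also live in prj.conf. The symbol names below follow the naming used in Nordic's mqtt_simple sample and are assumptions here; verify the exact symbols for this firmware in Guiconfig before relying on them:

CONFIG_MQTT_BROKER_HOSTNAME="test.mosquitto.org"
CONFIG_MQTT_PUB_TOPIC="thingy91/inference"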

You now have the option to select between the default behavior of using the serial console or having LTE connectivity start automatically.

The LTE connection is a blocking operation, meaning it effectively stops the code while executing. You can see this when you run it via the serial terminal, where the AT+CONNECT command blocks the AT command processor until it either succeeds or fails. Behind the scenes the MQTT client connection takes place in a separate thread, so it is the initial LTE connection that can hold up your application for a while if it struggles to connect.

The settings provided by the original firmware allow you to set a custom delay for both the LTE and MQTT connections, which I suggest you experiment with by testing the LTE connectivity using AT+CONNECT. You will need to test whether it is possible to connect at all in your region; even where there is coverage, the signal may not be good in certain areas (low power cellular is relatively new and not as widely or well implemented as normal LTE and 3G).

It is highly recommended to test the connectivity before changing the Connectivity settings to start your connection automatically, because if there are problems the device will appear to freeze while the LTE connection is attempted, blocking the main thread in the process.

The “MQTT Connect Audio Alert” is a way for you to know, when running standalone, that connectivity has been established, since you typically won't have a serial terminal connected while roaming around with your tinyML powered Thingy91. It gives an upward chirp sound as soon as the connection has occurred, and when you hear it the main thread resumes, enabling you to run your inference either via the command line or via the button, depending on what you chose under Inference Settings.

By default the firmware publishes the label/category that received the highest score among the labels stored in your exported model. The JSON option lets you publish the inference results as JSON together with the timing statistics, carrying the same information that the CLI presents but this time in JSON format via MQTT. This is ideal for consuming in applications.

The structure of the JSON payload mimics that of the CLI:
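The exact field names depend on the firmware build, so take this as an illustrative guess for the motion model rather than a definitive schema; it carries the per-label scores plus the DSP/classification/anomaly timings the CLI prints:

{
  "predictions": {
    "motion_forward": 0.96,
    "motion_side": 0.02,
    "motion_up": 0.01,
    "stationary": 0.01
  },
  "timing": {
    "dsp_ms": 5,
    "classification_ms": 1,
    "anomaly_ms": 0
  }
}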

Output Setup

There wouldn't be any fun in running standalone if your model couldn't do something in the physical world. The Thingy91 has multicolor LEDs and a buzzer as its primary outputs, and you can use these to indicate which classes are being detected (above the threshold set under Inference Settings).

It's very important to build a model that performs well, so that it clearly distinguishes classes from each other. One key thing to look out for with tinyML is a poorly performing model, which can give you unexpected behavior.

While you may know the class names from the Edge Impulse Studio, you need to know the order in which the labels are stored in your exported model, and this requires viewing a source file. Going back to your C++ export, there are three folders exported by Edge Impulse:

The model-parameters folder stores specific metadata relating to your model.

Open the “model_metadata.h” file in a text editor of your choice. I am using VSCode but you can even use Notepad if you like; just make sure not to make any changes to the file.

The motion detection model you built in Part 2 has the following classes/labels: “motion_forward”, “motion_side”, “motion_up” and “stationary”. Look for the line that defines an array called “ei_classifier_inferencing_categories”, near the top of the file (line 28); looking at the elements you can see the class labels matching what was created in the Edge Impulse Studio.
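For the motion model the definition looks something like this (the exact formatting varies between exports):

const char* ei_classifier_inferencing_categories[EI_CLASSIFIER_LABEL_COUNT] = { "motion_forward", "motion_side", "motion_up", "stationary" };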

The first element of the array, i.e. the first label you see, is Inference Label 1, the second label is Inference Label 2, and so forth. Note that classes, categories and labels all refer to the same thing in this context: the specific things you trained your classification model to detect. These labels will depend on what labels you created in Edge Impulse when training your data.

Each label can be configured to turn on a specific LED color as a primary output and/or the buzzer with a high or low tone as a secondary output. You are free to use the visual output and the audio output separately or together. Provision is made for 5 labels at this time, and each label has a Primary (LED) and Secondary (Buzzer) output.

We are going to choose the Red LED for Label 1 (motion_forward) and no audio tone. This is easily done by expanding “Inference LABEL 1 Primary Output(Visual)” and “Inference LABEL 1 Secondary Output(Audio)”:

It's possible to use the same options for more than one label, so if you want to clearly differentiate between classes make sure you don't select overlapping options. All the labels follow the same Primary (visual), Secondary (audio) structure. Remember the labels map to Inference LABEL numbers in the order in which they are stored in “ei_classifier_inferencing_categories”, as shown above and as sketched below.
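Internally you can picture this mapping as a small table indexed by label number. The types and names here are hypothetical, purely to illustrate the structure the menu options describe:

enum led_color   { LED_NONE, LED_RED, LED_GREEN, LED_BLUE, LED_YELLOW, LED_MAGENTA };
enum buzzer_tone { TONE_NONE, TONE_LOW, TONE_HIGH };

/* Hypothetical mapping: one primary (LED) and one secondary (buzzer)
 * output per inference label, in model_metadata.h order. */
struct label_output {
    enum led_color primary;
    enum buzzer_tone secondary;
};

static const struct label_output label_outputs[5] = {
    { LED_RED,   TONE_NONE }, /* Label 1: motion_forward */
    { LED_BLUE,  TONE_NONE }, /* Label 2: motion_side */
    { LED_GREEN, TONE_HIGH }, /* Label 3: motion_up */
    { LED_NONE,  TONE_LOW  }, /* Label 4: stationary */
    { LED_NONE,  TONE_NONE }, /* Label 5: unused */
};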

When each class is active the corresponding output activates until the next class activates. Observing the output, you will note that if, for example, you move the device forward and backward and you mapped that class to the Red LED (as selected above), the light will stay Red until the next inference happens. If that is still forward/backward motion, the LED stays Red until you change the motion to another direction; the outputs for the label associated with the new direction then activate after the previous label's colour or sound turns off.

Inference takes 2 seconds to complete in continuous inference, which is a good compromise to ensure you get a good amount of data to make a prediction. You may not want the output to stay on for the full two seconds, but rather to pulse for a short time to indicate the detected class. This is possible by setting the “Output Timing” option to any value between 0 and 2 seconds, and it applies to the Primary and Secondary outputs of all labels. This is handy if you want a short beep from the buzzer or a quick flash of the LED.

The value is in milliseconds (one thousandth of a second) and you can enter any value between 0 and 2000. The menu will let you enter a value higher than 2000, but the application will still not exceed 2 seconds.
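In other words the firmware effectively clamps the configured value to the inference window, along the lines of the following, where CONFIG_OUTPUT_TIMING_MS is a hypothetical name for the Kconfig value:

/* Clamp the configured on-time to the 2000 ms inference window.
 * MIN() comes from Zephyr's sys/util.h. */
uint32_t on_time_ms = MIN(CONFIG_OUTPUT_TIMING_MS, 2000);

Once you are happy with all the settings, save and exit Guiconfig, then rebuild and flash the firmware as covered in Part 3.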

Try experimenting with different use cases and demonstrate the Thingy91 and Edge Impulse to friends and colleagues. As an example, one use case of standalone mode is tamper detection: you could hide the Thingy91 inside a valuable item and, as soon as it is moved, sound the buzzer and flash the LED red. With the event published to the MQTT broker, you could also consume it in other processing pipelines to store events and provide alerts to an application.

There are a lot of possibilities with the Thingy91 and Edge Impulse, and it's well worth the investment not only as a learning tool but also for demonstrating and even deploying tinyML applications.

Be sure to visit the Edge Impulse Forum for help on any of these topics.
