Available Targets
Here is the list of all the available targets of the dAIEdge-VLab and their inference engine(s).
Last update: 14.04.2026 at 23:12:53
| Target Name | runtime(s) | Description |
|---|---|---|
| Raspberry Pi 4B | ort tflite aidge | The Raspberry Pi 4B features the Broadcom BCM2711, a quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.8 GHz. The available Raspberry Pi 4B has 4 GB of RAM. |
| Raspberry Pi 5 | ort tflite aidge | The Raspberry Pi 5 features the Broadcom BCM2712 quad-core Arm Cortex-A76 processor @ 2.4 GHz, making it up to three times faster than the previous generation. The available Raspberry Pi 5 has 4 GB of RAM. |
| Kendryte K210 | nncase | The Kendryte K210 is a dual-core 64-bit RISC-V SoC designed for low-power edge AI, featuring 8 MB of on-chip SRAM (≈6 MB general-purpose + 2 MB “AI SRAM” for feature maps) and a fixed-function KPU CNN accelerator (1×1/3×3 convolution, batch normalization, ReLU, pooling). Models are compiled to .kmodel via NNCase from Keras, TFLite, or ONNX. |
| NXP LPC55S69 | tflite | High Efficiency Arm® Cortex®-M33-Based Microcontroller from NXP. |
| Qualcomm RB3 Gen2 | tflite | The Qualcomm RB3 Gen 2, based on the QCS6490 processor, is an IoT development kit designed for high-performance computing. |
| STM32L4R9 | tflite x-cube-ai | The STM32L4R9 devices extend the ultra-low-power portfolio and performance with an Arm® Cortex®-M4 core with DSP and a floating-point unit (FPU) at 120 MHz. |
| Jetson Orin Nano | ort tflite trt | Jetson Orin Nano Developer Kit is an official NVIDIA development kit based on the Jetson Orin Nano 8GB module that delivers up to 40 TOPS of AI performance. |
| STM32MP257 | tflite | The STM32MP257 offers advanced edge AI capabilities with its NPU accelerator and provides flexibility in running AI applications on either the CPU, GPU, or NPU. Note that the benchmarks performed by the dAIEdge-VLab are run on the NPU. |
| SpeckSim | speck | SpeckSim is a simulator for the Speck hardware device from SynSense that runs spiking neural networks (SNNs). Upload a torch.nn.Sequential model (saved as a .pth file) composed only of Linear or Conv2d layers, trained locally to solve the MNIST task (input_dim = 28*28, output_dim=10). Then, evaluate your model’s performance (accuracy) when executed as a spiking neural network on SpeckSim. |
| Speck Dev Kit | speck | Speck™ is a fully event-driven neuromorphic vision SoC. Because its neuromorphic nature requires a dataset for benchmarking, please use the dAIEdge-VLab Python Client instead of the web user interface. Refer to the user documentation of the VLab under the [dAIEdge-VLab Targets / Speck] section for more information. Benchmarks started from the web UI use a default NMNIST dataset as generated in the example documentation. |
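As a minimal sketch of a model accepted by SpeckSim per the constraints above (a `torch.nn.Sequential` composed only of `Linear` or `Conv2d` layers, MNIST-sized input of 28*28 and 10 outputs, saved as a `.pth` file) — note that the layer sizes and the choice to save the full module rather than its `state_dict` are assumptions, so check the VLab documentation for the exact serialization expected:

```python
import torch
import torch.nn as nn

# Sequential model using only Linear layers, sized for MNIST:
# flattened 28*28 input, 10 output classes (layer widths are illustrative).
model = nn.Sequential(
    nn.Linear(28 * 28, 128),
    nn.Linear(128, 10),
)

# Train the model locally on MNIST here (training loop omitted), then save it
# as a .pth file for upload to SpeckSim.
torch.save(model, "mnist_specksim_candidate.pth")
```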
Configurations
Here is the list of all the available configurations that can be used with the dAIEdge-VLab Python API. The values in the column Target must be used for the parameter target in the startBenchmark method; likewise, the values in the column Runtime must be used for the parameter runtime.
| ID | Target name | Target | Runtime |
|---|---|---|---|
| 1 | Raspberry Pi 4B | rpi4b | ort |
| 2 | Raspberry Pi 4B | rpi4b | tflite |
| 3 | Raspberry Pi 4B | rpi4b | aidge |
| 4 | Raspberry Pi 5 | rpi5 | ort |
| 5 | Raspberry Pi 5 | rpi5 | tflite |
| 6 | Raspberry Pi 5 | rpi5 | aidge |
| 7 | Kendryte K210 | k210 | nncase |
| 8 | NXP LPC55S69 | lpc55s69jbd100 | tflite |
| 9 | Qualcomm RB3 Gen2 | qualcommrb3gen2 | tflite |
| 10 | STM32L4R9 | stm32l4r9 | tflite |
| 11 | STM32L4R9 | stm32l4r9 | x-cube-ai |
| 12 | Jetson Orin Nano | jetsonorinnano | ort |
| 13 | Jetson Orin Nano | jetsonorinnano | tflite |
| 14 | Jetson Orin Nano | jetsonorinnano | trt |
| 15 | STM32MP257 | stm32mp257 | tflite |
| 16 | SpeckSim | specksim | speck |
| 17 | Speck Dev Kit | speck_dev_kit | speck |
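The configuration table above can be captured as a small lookup structure. The following sketch (a hypothetical helper, not part of the dAIEdge-VLab API) maps each target identifier to its valid runtimes, so a client can validate a (target, runtime) pair before calling startBenchmark:

```python
# Hypothetical helper (not part of the dAIEdge-VLab API): valid runtimes per
# target identifier, transcribed from the configuration table above.
RUNTIMES = {
    "rpi4b": {"ort", "tflite", "aidge"},
    "rpi5": {"ort", "tflite", "aidge"},
    "k210": {"nncase"},
    "lpc55s69jbd100": {"tflite"},
    "qualcommrb3gen2": {"tflite"},
    "stm32l4r9": {"tflite", "x-cube-ai"},
    "jetsonorinnano": {"ort", "tflite", "trt"},
    "stm32mp257": {"tflite"},
    "specksim": {"speck"},
    "speck_dev_kit": {"speck"},
}

def is_valid_config(target: str, runtime: str) -> bool:
    """Return True if (target, runtime) appears in the configuration table."""
    return runtime in RUNTIMES.get(target, set())
```

For example, `is_valid_config("rpi4b", "tflite")` is True (configuration 2), while `is_valid_config("stm32mp257", "trt")` is False because no such configuration exists.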
Usage example
Here is an example for configuration 2:

```python
api = dAIEdgeVLabAPI("setup.yaml")

# Start a benchmark for a given target, runtime and model
benchmark_id = api.startBenchmark(
    target = "rpi4b",
    runtime = "tflite",
    model_path = MODEL  # path to your .tflite model file
)
```