Developer Doc

Welcome to the dAIEdge-VLab Developer Documentation. The dAIEdge-VLab platform relies on integrated hardware boards, referred to as “targets.” This documentation provides all the necessary information on how to:

  • Integrate your own target into the VLab environment.

Each hardware board integration is associated with one or more specific inference engines (e.g. TFLite, OnnxRuntime).

Quick Overview

You must provide a Docker image capable of running one or more services to evaluate the performance of a model on your target.

The dAIEdge-VLab deploys this image on the host machine(s) specified by the developer and uses it to execute job requests submitted by users. Each image runs exclusively on the host(s) you control and is not shared across other hosts in the cluster. After execution, the dAIEdge-VLab collects the results and presents them to the users.
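A minimal sketch of what such an image could look like is shown below. Everything in it is illustrative: the base image, file names, installed packages, and entrypoint are placeholders chosen for the example, not a dAIEdge-VLab requirement.

```dockerfile
# Illustrative Dockerfile sketch: base image, paths, and entrypoint
# are placeholders, not part of the dAIEdge-VLab specification.
FROM python:3.11-slim

# Tools your target may need (e.g. an SSH client to reach the board)
RUN apt-get update && apt-get install -y --no-install-recommends \
        openssh-client \
    && rm -rf /var/lib/apt/lists/*

# The service that receives job requests and drives the target
COPY runner/ /opt/runner/
RUN pip install --no-cache-dir -r /opt/runner/requirements.txt

ENTRYPOINT ["python", "/opt/runner/main.py"]
```

The only constraint that matters is the one stated above: the image must be able to run the service(s) that evaluate a model on your target, end to end, without manual intervention.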

Execution model

The Docker image runs on the host machine, while the target device is controlled externally. Therefore, the target must be treated as an unreliable and potentially uninitialized system.

Your implementation must not assume any prior state on the target. Each job execution should:

  • Detect the current state of the target,
  • Install or configure all required dependencies if necessary,
  • Ensure the target is fully operational before running the model.

The Docker image should act as a self-contained orchestration layer, responsible for preparing the target environment before executing any workload.
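The detect/provision/run flow above can be sketched as follows. This is a hypothetical Python outline, not the dAIEdge-VLab API: the names (`TargetState`, `detect_state`, `provision`, `run_job`) and the in-memory `target` dictionary are stand-ins for whatever probing and provisioning mechanism your board actually uses (SSH, serial console, flashing tools, etc.).

```python
# Hypothetical sketch of the orchestration layer described above.
# All names here are illustrative, not part of the dAIEdge-VLab API.
from enum import Enum, auto

class TargetState(Enum):
    UNREACHABLE = auto()
    UNPROVISIONED = auto()   # reachable, but dependencies are missing
    READY = auto()

def detect_state(target) -> TargetState:
    """Probe the target (e.g. over SSH or serial) and classify it.

    Stubbed here: a real implementation would check connectivity and
    the presence of the inference engine runtime on the board.
    """
    return target.get("state", TargetState.UNREACHABLE)

def provision(target) -> None:
    """Install the runtime and any missing dependencies on the target."""
    target["state"] = TargetState.READY

def run_job(target, model: bytes) -> dict:
    """Prepare the target from scratch, then run the benchmark."""
    state = detect_state(target)
    if state is TargetState.UNREACHABLE:
        raise RuntimeError("target unreachable: power-cycle or reflash it")
    if state is TargetState.UNPROVISIONED:
        provision(target)            # never assume prior state
    assert detect_state(target) is TargetState.READY
    # ... deploy `model`, run inference, collect metrics ...
    return {"status": "ok"}

# Example: a freshly powered-on board with nothing installed yet.
board = {"state": TargetState.UNPROVISIONED}
print(run_job(board, model=b"")["status"])  # prints "ok"
```

The key property is idempotence: because every job re-detects and, if needed, re-provisions the target, a crashed or power-cycled board is recovered automatically on the next run.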

This approach ensures:

  • Reproducibility of results across runs,
  • Seamless deployment of new targets without manual setup,
  • Robust recovery from failures (e.g., power loss, crashes, inconsistent states).
Note: If multiple inference engines are available for the same hardware board, it is recommended to integrate them as separate instances (e.g., RaspberryPi5_TFLite and RaspberryPi5_OnnxRuntime). This keeps each repository as simple as possible and avoids complex scripts to select the inference engine. (This is not mandatory, as it is possible to handle multiple inference engines in the same repository.)

How to use this documentation

To successfully integrate your own target, we recommend reading the documentation in the following order:

  1. Make sure you meet the requirements
  2. Get familiar with the web user interface
  3. Understand the dAIEdge-VLab architecture
  4. Be aware of the mandatory guidelines
  5. Follow the Integration steps

Other resources

Also have a look at the other available resources that may help you during the integration process:

Note: The documentation is still under development. If you have any questions or need help, please contact the dAIEdge-VLab team.