Integration steps

Integration: The integration steps serve as a guiding thread for the integration of a target; however, they do not have to be followed strictly.

This documentation assumes you know how to use Docker.

To start quickly, you can have a look at the default template repositories for Linux and MCU targets. These templates are available in the target repository and can be used as a starting point for your integration.

Prerequisites

Before starting the integration, make sure you have the following prerequisites in place:

  • Have read and understood the requirements. This step is really important, as it will help you understand what is expected from your target and which requirements you need to meet.
  • Have a look at the dAIEdge-VLab architecture to understand the different components of the dAIEdge-VLab and how they interact with each other.
  • Have a clear understanding of the dAIEdge-VLab pipeline. This will help you understand how the benchmarking process works and how your work will be utilized. It will also help you understand the different steps of the pipeline and how to implement them in your target repository.

Setup Target

The target must be a programmable device with communication capabilities. It is recommended to use wired communication over wireless for better performance and stability.

  • Set up your target so it can run model inference and retrieve some performance indicators. This may involve the installation of an inference engine. This part is entirely up to you and depends on your target. It is closely linked to the target repository implementation. You can look at the examples of target integration for inspiration.
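
As a minimal sketch, a sanity check along these lines can confirm that an inference engine binary is available on the target. The default binary name `benchmark_model` and the `INFERENCE_ENGINE` variable are placeholders, not part of the dAIEdge-VLab:

```shell
#!/bin/sh
# Hypothetical helper: check whether an inference engine binary is
# available on the target. Adapt the binary name to your own setup.
check_engine() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found"
    else
        echo "missing"
    fi
}

# INFERENCE_ENGINE and the default "benchmark_model" are placeholders.
check_engine "${INFERENCE_ENGINE:-benchmark_model}"
```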

Setup the target repository

To maximize compatibility with the dAIEdge-VLab, it is recommended to use GitLab as a repository hosting service. However, any other hosting service can be used as long as it allows the creation of a token that can be used to give access to the container registry.

Setup a host machine

A target integrated into the dAIEdge-VLab should be available at all times. It is therefore recommended to use a PC that can be turned on 24/7 as the host; this may be a VM. In some cases the target itself can serve as the host machine.

  • Set up the host machine using the installation guide here: Host setup.
  • Register your target to the dAIEdge-VLab using the registration guide here: Register a target.

Once everything is set up, you should see your newly integrated target as a selectable target in the web interface. Try to benchmark it; you should see the results of the benchmark in the web interface. Since nothing is implemented yet, the results should be empty or show some errors.

Implement the scripts

The implementation of the benchmarking scripts is entirely up to you. You can look at examples of implementations and adapt them to your target. Take into account the input files that are automatically provided by the dAIEdge-VLab. Pay attention to the benchmark type requested by the user; it influences the behavior of the benchmark that you should perform.

Also have a look at the dAIEdge-VLab pipeline to understand the benchmarking process. You will have to implement the four mandatory scripts defined in the folder structure. These scripts are executed in the following order:

  • Implement the support.sh script. This script is intended to get the necessary files (model) from the web to the host machine.

  • Implement the build.sh script. This script is intended to be used if some tools must be built on the target. Leave it empty if not.

  • Implement the deploy.sh script. This script is intended to copy the necessary files (model, benchmarking scripts, etc.) from the host machine to the target.

  • Implement the manager.sh script. This script is intended to run the inferences on the model and retrieve the performance indicators. As an output of the benchmarking process, you will have to generate a report that follows the benchmark report keys. See Benchmark report.
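
As a rough illustration of the expected shape of manager.sh, the sketch below runs a number of inferences and emits a report. The environment variable names, the simulated latency value, and the report keys are placeholders; the real report keys are defined in the Benchmark report documentation:

```shell
#!/bin/sh
# manager.sh -- hypothetical sketch: run inferences, collect timings,
# and write a report. All names below are placeholders.
set -eu

N_INFERENCES="${N_INFERENCES:-10}"    # placeholder env var
REPORT="${REPORT_PATH:-report.json}"  # placeholder output path

total_ms=0
for i in $(seq 1 "$N_INFERENCES"); do
    # Placeholder: substitute the real call to your inference engine
    # (e.g. over ssh to the target). Here we simulate a latency value.
    latency_ms=5
    total_ms=$((total_ms + latency_ms))
done

avg_ms=$((total_ms / N_INFERENCES))

# Emit a report; these key names are illustrative only -- use the
# actual benchmark report keys from the documentation.
cat > "$REPORT" <<EOF
{
  "inference_count": $N_INFERENCES,
  "avg_latency_ms": $avg_ms
}
EOF
```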

To test your implementation, you can simulate the pipeline locally by running the scripts in the correct order and setting the necessary environment variables. Have a look at the services documentation for pipeline simulation guides.
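
A local dry run could look like the following. The environment variables here are placeholders (the real ones are listed in the services documentation), and the generated stub scripts stand in for your actual implementations:

```shell
#!/bin/sh
# Simulate the pipeline order locally with stub scripts.
set -eu

# Hypothetical environment variables; the real pipeline defines its own.
export MODEL_PATH="model.tflite"   # placeholder
export BENCHMARK_TYPE="latency"    # placeholder

# Create empty stubs so the sequence can be exercised end to end;
# in a real setup these are your actual target-repository scripts.
for s in support.sh build.sh deploy.sh manager.sh; do
    [ -f "$s" ] || printf '#!/bin/sh\necho "running %s"\n' "$s" > "$s"
    chmod +x "$s"
done

# Execute in the mandatory order.
sh ./support.sh
sh ./build.sh
sh ./deploy.sh
sh ./manager.sh
```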

Make sure that the scripts run without errors and generate the expected outputs. If errors occur because the benchmark is infeasible (e.g., the target is not able to run the model, or the model is too big for the target), handle them properly and fill in the error.log and user.log files with the information necessary to understand the error. See Handling errors and warnings.

Tests and troubleshooting

Finally, check the weekly tests: your target is tested regularly and given a health score, which may help you locate and fix issues.