Benchmarking service with random data (TYPE1)
In this mode, the benchmark of the model is performed using randomly generated data. The number of inferences is either fixed by the benchmarking tool used or follows the variable $NB_INFERENCE.
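For illustration, a TYPE1 target script typically loops $NB_INFERENCE times over randomly generated inputs. The sketch below is a hypothetical outline only: run_single_inference is a placeholder for whatever inference command your runtime provides, and the log redirections assume the user.log / error.log convention shown later on this page.

```bash
#!/bin/bash
# Hypothetical sketch of a TYPE1 benchmark loop.
# run_single_inference is a placeholder for your runtime's inference command.
for i in $(seq 1 "$NB_INFERENCE"); do
    run_single_inference --model "$MODEL_FILENAME" --random-input \
        >> "$LOG_PATH/user.log" 2>> "$LOG_PATH/error.log"
done
```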
Useful environment variables
The following environment variables are provided to the Docker container running the target scripts. These are the only variables relevant to the implementation of the TYPE1 benchmarking service.
| Variable name | Description |
|---|---|
| $TARGET | The target name. |
| $RUNTIME | The runtime name. |
| $NB_INFERENCE | The number of inferences that should be performed to create the benchmark. |
| $BENCHMARK_TYPE | TYPE1 |
| $MODEL_FILENAME | The file name of the model provided by the user. Note that a timestamp is added to it automatically to ensure uniqueness. |
| $PARAMETERS_FILENAME | The file name of the parameters file containing runtime-specific parameters. Only set if one was provided. |
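As an illustration only, a target script can read these variables straight from its environment. Since $PARAMETERS_FILENAME is set only when the user supplied a parameters file, it should be tested before use:

```bash
#!/bin/bash
# Sketch: consuming the TYPE1 environment variables in a target script.
echo "Target: $TARGET, runtime: $RUNTIME, type: $BENCHMARK_TYPE"
echo "Model file: $MODEL_FILENAME, inferences: $NB_INFERENCE"

# $PARAMETERS_FILENAME is only set when a parameters file was provided.
if [ -n "$PARAMETERS_FILENAME" ]; then
    echo "Runtime parameters: $PARAMETERS_FILENAME"
fi
```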
Pipeline simulation
You can simulate the pipeline locally by running the scripts in the correct order. This can be useful to test your implementation before pushing it to the repository and running it on the dAIEdge-VLab.
```bash
# Simulate the environment variables normally provided by the dAIEdge-VLab
export TARGET="agx"
export RUNTIME="ORT"
export NB_INFERENCE=20
export BENCHMARK_TYPE="TYPE1"
export MODEL_FILENAME="model_type1.onnx"
export PARAMETERS_FILENAME="parameters.json"
export FUNCTIONS_PATH=`pwd`
export LOG_PATH=`pwd`
export WORKSPACE_PATH=`pwd`

# Prepare the log directory and empty log files (-p: no error if it already exists)
mkdir -p "$LOG_PATH"
touch "$LOG_PATH/error.log"
touch "$LOG_PATH/user.log"
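
# Helper used between pipeline stages: abort as soon as error.log is non-empty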
exit_if_error_log_filled() {
    if [ -s "$LOG_PATH/error.log" ]; then
        echo "Errors detected in error.log. Exiting."
        exit 1
    fi
}
# Setup your own environment variables here if needed
#export BOARD_USER="agx"
#export BOARD_IP="157.26.100.91"
#export BOARD_PASSWORD="1234"
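
# Run each pipeline stage in order, stopping at the first stage that reports errors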
./AI_Support/support.sh;
exit_if_error_log_filled
./AI_Build/build.sh;
exit_if_error_log_filled
./AI_Deploy/deploy.sh;
exit_if_error_log_filled
./AI_Manager/manager.sh;
exit_if_error_log_filled
```
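If the stage scripts are not marked as executable in your local copy, the calls above will fail with "Permission denied". A one-time fix before the first run:

```bash
# One-time setup: make the four stage scripts executable
chmod +x AI_Support/support.sh AI_Build/build.sh AI_Deploy/deploy.sh AI_Manager/manager.sh
```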
Generated artifacts
The following artifacts are generated by the dAIEdge-VLab after the execution of the pipeline:
- Benchmark report: report.json
- Log files: user.log & error.log
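After a local simulation you can quickly check that the expected artifacts were produced. The sketch below assumes report.json is written next to the log files; adjust the path to match your implementation.

```bash
#!/bin/bash
# Sketch: verify that the simulated pipeline produced the expected artifacts.
for artifact in report.json user.log error.log; do
    if [ -f "$LOG_PATH/$artifact" ]; then
        echo "OK: $artifact"
    else
        echo "MISSING: $artifact"
    fi
done
```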