Testing Flow
Once the model has been submitted, it is validated to confirm that the repository has the directory structure described in the submission flow above and adheres to the mission constraints published in the user mission handbook. After successful validation, model testing begins. The testing flow is described below.
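The structure check can be sketched as a walk over the repository root. This is a minimal illustration only: the entries in `REQUIRED_ENTRIES` are hypothetical placeholders, since the actual required layout is defined in the submission flow and mission handbook.

```python
from pathlib import Path

# Hypothetical layout — the real required entries are defined in the
# submission flow / user mission handbook, not here.
REQUIRED_ENTRIES = ["model/", "inference.py", "requirements.txt"]

def validate_repository(repo_root: str) -> list:
    """Return the list of missing entries; an empty list means the layout check passed."""
    root = Path(repo_root)
    missing = []
    for entry in REQUIRED_ENTRIES:
        target = root / entry.rstrip("/")
        # Entries ending in "/" must be directories, all others plain files.
        ok = target.is_dir() if entry.endswith("/") else target.is_file()
        if not ok:
            missing.append(entry)
    return missing
```

A submission would pass this check only when `validate_repository` returns an empty list.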
Software Environment Testing
The validated repository is tested in a software environment corresponding to the user's mission of interest. In this test, the model is executed on a sample image shared by the developer, and its result is validated against the corresponding output shared by the user.
This test passes when the output produced by the test matches the output shared by the user with a sufficiently high PSNR. If the PSNR is too low, the testing flow stops and the user is notified of the inconsistencies between the outputs; the flow also stops if the model/application execution violates the mission constraints (for example, exceeding the maximum RAM limit, or producing an inference output larger than the maximum size set).
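The PSNR comparison behind this pass/fail decision can be sketched as follows. The threshold value is a hypothetical placeholder; the portal's actual pass criterion is not stated in this document.

```python
import math

def psnr(reference, output, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    if len(reference) != len(output):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - o) ** 2 for r, o in zip(reference, output)) / len(reference)
    if mse == 0:
        return math.inf  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

# Hypothetical threshold — the actual value used by the testing flow is not published here.
PSNR_THRESHOLD_DB = 40.0

def outputs_consistent(reference, output):
    """True when the test output matches the user-shared output closely enough."""
    return psnr(reference, output) >= PSNR_THRESHOLD_DB
```

A high PSNR means the two outputs are nearly identical; a low PSNR indicates the inconsistency that breaks the testing flow.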
This test provides a preliminary check of the functionality of the model/application in the relevant software environment. No model performance statistics are provided as part of this test.
Hardware Environment Testing
In this test, the model/application is tested on an engineering model running the software environment. This test is performed after successful completion of the software environment testing.
For this test, the user may choose from the sample images available on the portal. The chosen sample image is used as input for the model. Once execution completes, the user is notified of the status of the test.
The execution fails only if the model/application is incompatible with the hardware or violates the mission constraints (for example, exceeding the maximum RAM limit, or producing an inference output larger than the maximum size set). The test does not assess the quality of the output. If execution fails, the user is notified of the termination of the execution and its cause.
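A post-execution constraint check of the kind described above can be sketched as below. The limit values are hypothetical; the real limits come from the mission constraints. Note that `ru_maxrss` is reported in KiB on Linux but in bytes on macOS, so this sketch assumes a Linux host.

```python
import os
import resource

# Hypothetical limits — the actual values are set by the mission constraints.
MAX_RAM_BYTES = 512 * 1024 * 1024      # maximum RAM limit
MAX_OUTPUT_BYTES = 10 * 1024 * 1024    # maximum inference output size

def check_constraints(output_path: str) -> list:
    """Return the list of violated constraints (empty list = execution is compliant)."""
    violations = []
    # Peak resident set size of this process; KiB on Linux (bytes on macOS).
    peak_bytes = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
    if peak_bytes > MAX_RAM_BYTES:
        violations.append("RAM limit exceeded")
    if os.path.getsize(output_path) > MAX_OUTPUT_BYTES:
        violations.append("output size limit exceeded")
    return violations
```

Any non-empty result would correspond to a terminated execution, with the list entries naming the cause of the termination.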
Validation of the output quality is left to the user; this test provides statistics of the model's performance on the edge computing hardware. The model performance/run-time statistics include the following:
Execution Time (in seconds)
RAM Consumption
VPU/GPU/CPU consumption
Output Size
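Collecting the first, second, and fourth of these statistics can be sketched with standard-library tooling; `run_with_stats` and the statistic names are illustrative, and VPU/GPU/CPU utilisation is omitted because it requires platform-specific tooling.

```python
import time
import resource

def run_with_stats(infer, image):
    """Run an inference callable and collect illustrative run-time statistics:
    execution time, peak RAM, and output size. `infer` is a hypothetical
    callable standing in for the submitted model/application."""
    start = time.perf_counter()
    output = infer(image)
    elapsed = time.perf_counter() - start
    # Peak resident set size; KiB on Linux (bytes on macOS).
    peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    stats = {
        "execution_time_s": elapsed,
        "peak_ram_kib": peak_kib,
        "output_size_bytes": len(output),
    }
    return output, stats
```

The user would compare figures like these against the mission constraints before deciding whether to deploy or resubmit.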
The user may choose to deploy the model if the output quality and the run time performance are satisfactory, or may choose to resubmit the model/application post necessary fine-tuning to achieve better performance/results.