I am running a machine learning application for image classification that involves data collection, preprocessing, training, and testing the model. Now, my next goal is to deploy this model in an LXD container and evaluate the performance of the container for running the ML application. How can I do this?
The ML application likely requires access to a GPU to run the model.
Assuming you have a single NVIDIA GPU, you would create a container with the NVIDIA runtime enabled and add at least the compute and utility driver capabilities. The first provides CUDA support; the second provides the nvidia-smi utility so you can verify that the GPU is responding. Then, you add an LXD device for the GPU in order to expose it to the container.
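As a sketch, the steps above could look like the following; the image `ubuntu:22.04` and the container name `ml-gpu` are placeholders, so substitute your own:

```shell
# Launch a container with the NVIDIA runtime enabled and with the
# "compute" (CUDA) and "utility" (nvidia-smi) driver capabilities.
lxc launch ubuntu:22.04 ml-gpu \
    -c nvidia.runtime=true \
    -c nvidia.driver.capabilities=compute,utility

# Expose the host GPU to the container as an LXD "gpu" device.
# "gpu0" is just a device name of our choosing.
lxc config device add ml-gpu gpu0 gpu
```

With a single GPU, the plain `gpu` device passes it through; on multi-GPU hosts you would additionally pin a specific card, for example with an `id=` or `pci=` property on the device.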
Finally, get a shell into the container and run nvidia-smi to verify that the GPU is accessible from within the container.
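That check can be run directly with `lxc exec` (again assuming the container is named `ml-gpu`):

```shell
# Run nvidia-smi inside the container; if the GPU is exposed
# correctly, this prints the driver version and the GPU table.
lxc exec ml-gpu -- nvidia-smi
```

If this fails, recheck that `nvidia.runtime` is set and that the host driver is loaded before troubleshooting anything inside the container.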
By doing all the above, your container will already have the proper NVIDIA driver and the proper NVIDIA runtime. If you follow an ML tutorial, continue with the rest of its instructions to set up your system, but skip any steps that install the NVIDIA driver or runtime; those are already provided through LXD.