
Export your custom YOLOv7 model to TensorRT

 


Another post starts with you beautiful people! Thanks for the great response to my last post, where we successfully exported a custom-trained YOLOv7 model to ONNX format. In this post, we are going to move one step further and achieve high-performance inference using NVIDIA TensorRT. We will learn how to export our YOLOv7 ONNX model to the TensorRT format.

NVIDIA TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and a runtime that deliver low latency and high throughput for inference applications. The TensorRT engine format is widely used when you need to deploy a model to edge devices with high inference speed, so knowing how to export your YOLOv7 model to TensorRT is a very useful skill.

Please note that the export process is environment-sensitive. It is recommended to use the same environment where you trained the original YOLOv7 model. For this post, I am again using the same Google Colab environment which I used for YOLOv7 training as well as for exporting to ONNX. As a first step, we will install the required dependencies as below-
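The original code cells were not captured here, so the commands below are a sketch of a typical TensorRT setup on Colab; the exact package names and versions depend on your CUDA installation, so adjust them to match your environment:

```shell
# Install TensorRT's Python bindings plus the ONNX tooling the export scripts use.
# Package names shown are typical for a Colab/Linux setup with NVIDIA's pip index;
# pin versions to match your CUDA toolkit if needed.
pip install nvidia-pyindex
pip install nvidia-tensorrt
pip install pycuda onnx onnx-simplifier
```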


Once the required libraries are installed successfully, as a second step we will clone the original YOLOv7 code repo for exporting our custom-trained YOLOv7 model to ONNX format. Please skip this step if you have already exported it using my last post.
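Cloning the official repository looks like this (it ships the `export.py` script used in the next step):

```shell
# Grab the official YOLOv7 repository and step into it.
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
```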


Let's export the YOLOv7 weight file to ONNX format using the below command -
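The command cell did not survive the page scrape, so here is a sketch of a typical YOLOv7 ONNX export invocation; `best.pt` is a placeholder for your own checkpoint, and the thresholds shown are common defaults rather than the post's exact values:

```shell
# Export the trained weights to ONNX. Replace best.pt with your own .pt file.
# --grid folds the detection layer into the graph; --simplify runs onnx-simplifier.
# Drop the "0" after --device to export on CPU.
python export.py --weights best.pt --grid --simplify \
    --img-size 640 640 --device 0
```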

In the above command, replace the .pt file with your own YOLOv7 weights file, and remove the zero after the --device argument if you don't have a GPU. Once it finishes successfully, this command will save a .onnx file in the same location as your .pt file. Now, as a third step, we will clone another code repo for the TensorRT export as below-
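The repository cell was lost in scraping; the repo below is one commonly used for YOLO-to-TensorRT conversion and is my assumption about which one the post used, so substitute whichever converter you prefer:

```shell
# Clone a helper repository that wraps TensorRT's ONNX parser for YOLO models.
git clone https://github.com/Linaom1214/tensorrt-python.git
cd tensorrt-python
```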


After successfully cloning the repo, as a fourth step, we will use its export.py file to convert our ONNX file to TensorRT format as below-
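As a sketch, assuming the helper repo above, the conversion command takes the ONNX file, an output engine name, and a precision mode; `best.onnx` and `yolov7-custom.trt` are placeholders:

```shell
# Build a TensorRT engine from the ONNX file.
# -o : input ONNX model, -e : output engine file, -p : precision (fp32/fp16/int8).
python export.py -o best.onnx -e yolov7-custom.trt -p fp16
```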

Here again, replace the .onnx file with your own file and name the .trt file as you wish; the final exported engine will be saved with that name. Once the above command runs successfully, you will see the following output at the end of the console-

So our ONNX-to-TensorRT export is done successfully ✌. Next, as the fifth and last step, we need an inference script to load this .trt file and test it on a test image. Let's understand the inference script with code snippets-


First, we import the required libraries for inference. Next, we will write a class with a few necessary functions. Inside this class, our first function loads and initializes the TensorRT engine as below-
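The notebook's code block was not captured here, so the snippet below is a minimal sketch of such an engine-loading class, written against the classic TensorRT Python binding API (pre-TensorRT 10, where the `get_binding_*` calls were deprecated); the class name `TRTInference` is my own placeholder:

```python
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

class TRTInference:
    def __init__(self, engine_path):
        # Deserialize the .trt engine produced by the export step.
        logger = trt.Logger(trt.Logger.WARNING)
        with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
            self.engine = runtime.deserialize_cuda_engine(f.read())
        self.context = self.engine.create_execution_context()
        self.stream = cuda.Stream()
        # Allocate pinned host buffers and device buffers for every binding.
        self.inputs, self.outputs, self.bindings = [], [], []
        for binding in self.engine:
            size = trt.volume(self.engine.get_binding_shape(binding))
            dtype = trt.nptype(self.engine.get_binding_dtype(binding))
            host_mem = cuda.pagelocked_empty(size, dtype)
            device_mem = cuda.mem_alloc(host_mem.nbytes)
            self.bindings.append(int(device_mem))
            if self.engine.binding_is_input(binding):
                self.inputs.append({"host": host_mem, "device": device_mem})
            else:
                self.outputs.append({"host": host_mem, "device": device_mem})
```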

Our next four functions run the inference, transform the input image into the required format, detect the target object with confidence scores, and report the frames per second and the bounding-box coordinates, as below-



Next, we will create a list of colors to use for the bounding boxes. A sample list looks like the below-
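A minimal sketch of such a color list, one random RGB triple per class (the class count here is a placeholder for your own model's):

```python
import random

random.seed(42)  # fixed seed so each class keeps the same color across runs
num_classes = 1  # placeholder: set to the number of classes in your model
colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(num_classes)]
```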


And our last function is for visualizing the bounding boxes around our target class as below-


Now we are ready to make an inference. Let's test an image with our TensorRT model-
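A sketch of that test cell, assuming the inference class described above exposes an `engine_path` argument and an `inference()` method as the post states (file names are placeholders):

```python
import cv2
import matplotlib.pyplot as plt

pred = TRTInference(engine_path="yolov7-custom.trt")  # your exported .trt engine
result = pred.inference("test.jpg")                   # your test image
plt.figure(figsize=(8, 8))
plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB))   # OpenCV is BGR; matplotlib wants RGB
plt.axis("off")
plt.show()
```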


Here, in the first line, pass your .trt file as engine_path and provide your test image to the inference() function. Next, I used the matplotlib library to visualize the processed image, and it looks like the below-

It looks perfect👌. I got the result at 39.9 FPS, which is very fast 💪 considering my original YOLOv7 custom model was trained from the largest pre-trained weights, whose performance on the MS COCO dataset was as below-


So if you train your dataset with the smallest version and then convert it to TensorRT, you may get 100+ FPS. Sounds interesting😮 Then why are you waiting? Start training your own YOLOv7 model, export it to a TensorRT engine, and compare the inference speed on CPU/GPU-based devices. As always, you can find this post's Colab notebook here. That's it for today, guys. In my next post, I will again share something useful. Till then 👉 Go chase your dreams, have an awesome day, make every second count, and see you later in my next post😇







