
Export your custom YOLOv7 model to TensorRT

 


Another post starts with you beautiful people! Thanks for the great response to my last post, where we successfully exported a custom-trained YOLOv7 model to ONNX format. In this post, we are going to move one step further and achieve high-performance inference using NVIDIA TensorRT. We will learn how we can export our YOLOv7 ONNX model to the TensorRT format.

NVIDIA TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that deliver low latency and high throughput for inference applications. This format is widely used when you need to deploy your model on edge devices but still need high inference speed, so knowing how to export your YOLOv7 model to the TensorRT format is a very beneficial skill.

Please note that the model export is environment-sensitive. It is recommended to use the same environment in which you trained the original YOLOv7 model. For this post, I am using the same Google Colab environment that I used for YOLOv7 training as well as for exporting to ONNX. As a first step, we will install the required dependencies as below-
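The exact packages depend on your environment; on Google Colab, an install along these lines typically works (the package names below are my assumption — adjust them to match your CUDA/TensorRT setup):

```shell
# Add NVIDIA's pip index, then install TensorRT's Python bindings and PyCUDA
pip install nvidia-pyindex
pip install nvidia-tensorrt
pip install pycuda
```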


Once the above required libraries are installed successfully, as a second step we will download the original YOLOv7 code repo for exporting our custom-trained YOLOv7 model to ONNX format. Please skip this step if you have already exported it by following my last post.
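The official YOLOv7 repository can be cloned as below (this is the standard repo URL; install its requirements before running the export script):

```shell
# Clone the official YOLOv7 repo and install its dependencies
git clone https://github.com/WongKinYiu/yolov7.git
cd yolov7
pip install -r requirements.txt
```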


Let's export the YOLOv7 weight file to ONNX format using the below command -
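A typical export command looks like the following sketch; `best.pt` is a placeholder for your own weight file, and the extra flags (`--grid`, `--simplify`, image size) follow the YOLOv7 repo's documented usage — your exact flags may differ:

```shell
# Export the trained .pt weights to ONNX (remove the 0 after --device if you have no GPU)
python export.py --weights best.pt --grid --simplify \
    --img-size 640 640 --device 0
```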

In the above command, replace the .pt file with your own YOLOv7 weight file, and remove the zero after the --device argument if you don't have a GPU. Once finished successfully, this command will save a .onnx file in the same location as your .pt file. Now, as a third step, we will clone another code repo for the TensorRT export as below-
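One commonly used repo for this step is Linaom1214's tensorrt-python (later renamed TensorRT-For-YOLO-Series); I am assuming this is the repo in question, so substitute the URL you actually used if it differs:

```shell
# Clone the ONNX-to-TensorRT conversion repo
git clone https://github.com/Linaom1214/tensorrt-python.git
cd tensorrt-python
```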


After successfully cloning the repo, as a fourth step, we will use its export.py script to convert our ONNX file to the TensorRT format as below-
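With that repo, the conversion is roughly the following (the file names are placeholders, and `-p fp16` selects half precision; check the repo's README for the exact arguments of your version):

```shell
# Build a TensorRT engine from the ONNX model, using fp16 precision
python export.py -o best.onnx -e best.trt -p fp16
```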

Here again, replace the .onnx file with your own file, and rename the .trt file as per your wish; the final exported file will be saved with that name. Once the above command runs successfully, you will see the following output at the end of the console-

So our ONNX-to-TensorRT export is done successfully ✌. Next, as the fifth and last step, we need an inference script to load this .trt file and test it on a test image. Let's understand the inference script with code snippets-


First, we import the required libraries for the inference. Next, we write a class with a few necessary functions. Inside this class, our first function loads and initializes the TensorRT engine as below-
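As a sketch of what that first function typically looks like, here is the standard TensorRT + PyCUDA engine-loading pattern (not the post's exact code; the class and attribute names are my own, and it requires a GPU and a built .trt file to run):

```python
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import

class TRTPredictor:
    def __init__(self, engine_path):
        # Deserialize the .trt engine file into a runnable TensorRT engine
        logger = trt.Logger(trt.Logger.WARNING)
        trt.init_libnvinfer_plugins(logger, namespace="")
        with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
            self.engine = runtime.deserialize_cuda_engine(f.read())
        self.context = self.engine.create_execution_context()
        # Allocate host (page-locked) and device buffers for every binding
        self.inputs, self.outputs, self.bindings = [], [], []
        self.stream = cuda.Stream()
        for binding in self.engine:
            size = trt.volume(self.engine.get_binding_shape(binding))
            dtype = trt.nptype(self.engine.get_binding_dtype(binding))
            host_mem = cuda.pagelocked_empty(size, dtype)
            device_mem = cuda.mem_alloc(host_mem.nbytes)
            self.bindings.append(int(device_mem))
            if self.engine.binding_is_input(binding):
                self.inputs.append({"host": host_mem, "device": device_mem})
            else:
                self.outputs.append({"host": host_mem, "device": device_mem})
```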

Our next four functions run the inference, transform the input image into the required format, detect the target objects with confidence scores, print the frames per second, and return the bounding-box coordinates as below-
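The image-transformation step is usually a "letterbox" resize that preserves the aspect ratio and pads the image to the network's input size. Here is a minimal NumPy-only sketch (a nearest-neighbour stand-in for the usual cv2.resize; the function name and defaults are my own):

```python
import numpy as np

def letterbox(img, new_shape=(640, 640), color=114):
    """Resize keeping aspect ratio, then pad to new_shape with a grey border."""
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)        # scale factor
    nh, nw = int(round(h * r)), int(round(w * r))      # resized dimensions
    # Nearest-neighbour resize via index arrays (stand-in for cv2.resize)
    rows = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Paste the resized image onto a padded canvas, centred
    canvas = np.full((new_shape[0], new_shape[1], img.shape[2]), color, dtype=img.dtype)
    top = (new_shape[0] - nh) // 2
    left = (new_shape[1] - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, r, (left, top)
```

The returned ratio and offsets are what you later need to map the detected boxes back onto the original image.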



Next, we will create a list of colors to use for the bounding boxes. A sample list looks like the below-
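For illustration, such a list can be generated with random RGB values, one per class (the class names below are placeholders for your own model's labels):

```python
import random

class_names = ["person", "car", "dog"]  # placeholder -- use your model's class names
random.seed(5)  # fixed seed so the colours stay the same between runs
colors = {name: [random.randint(0, 255) for _ in range(3)] for name in class_names}
```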


And our last function visualizes the bounding boxes around our target class as below-
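The usual implementation calls cv2.rectangle and cv2.putText; as a dependency-free sketch of the same idea, a rectangle outline can be drawn directly with NumPy slicing (the function name and signature are my own):

```python
import numpy as np

def draw_box(img, box, color, thickness=2):
    """Draw a rectangle outline on an HxWx3 image (stand-in for cv2.rectangle)."""
    x1, y1, x2, y2 = [int(v) for v in box]
    img[y1:y1 + thickness, x1:x2] = color  # top edge
    img[y2 - thickness:y2, x1:x2] = color  # bottom edge
    img[y1:y2, x1:x1 + thickness] = color  # left edge
    img[y1:y2, x2 - thickness:x2] = color  # right edge
    return img
```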


Now we are ready to make an inference. Let's test an image with our TensorRT model-
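The driver code is along these lines (TRTPredictor and inference() are illustrative names for the engine class and its inference method described above, and the file names are placeholders):

```python
import matplotlib.pyplot as plt

pred = TRTPredictor(engine_path="best.trt")   # your exported TensorRT engine
result = pred.inference("test_image.jpg")     # runs detection, returns annotated image
plt.figure(figsize=(10, 10))
plt.imshow(result)
plt.axis("off")
plt.show()
```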


Here, in the first line, pass your .trt file as engine_path, and provide your test image to the inference() function. Next, I used the matplotlib library to visualize the processed image, and it looks like the below-

It looks perfect👌. I got the result at 39.9 FPS, which is very fast 💪 considering that my original YOLOv7 custom model was based on its largest pre-trained weight, whose performance on the MS COCO dataset was as below-


So if you train your dataset with its smallest version and then convert it to TensorRT, you may get 100+ FPS. Sounds interesting😮 Then why are you waiting? Start training your own YOLOv7 model, export it to a TensorRT engine, and compare the inference speed on CPU/GPU-based devices. Like always, you can find this post's Colab notebook here. That's it for today, guys. In my next post I will again share something useful; till then 👉 Go chase your dreams, have an awesome day, make every second count, and see you later in my next post😇








