
Export your custom YOLOv7 PyTorch model to ONNX

 

Another post starts with you beautiful people💓. In my previous post, we learned how to train a custom dataset with the official PyTorch-based YOLOv7 object detector. If you have not seen that post, I recommend you check it out; the link is here. Once we have the best model, the next important step is to actually use it. Sometimes you may need to work with multiple ML models trained in different frameworks like PyTorch, TensorFlow, Caffe, etc. In production, a trained model can be deployed as a REST API or integrated with a web application without changing its form, but what if you also need to run it on a mobile device as an Android or iOS app, or on an embedded system like an NVIDIA Jetson 💣? Here comes the problem of interoperability. In this post, we are going to learn how to export our custom YOLOv7 model to ONNX format.

ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format that lets AI developers use models with a variety of frameworks, tools, runtimes, and compilers. For our export, I will use the same Google Colab platform where I trained my custom YOLOv7 model. Let's first set up the required dependencies-
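The extra packages needed for the export and for running the exported model can be installed with pip; the exact list may vary with your setup, but these are the usual ones-

! pip install onnx onnxruntime onnx-simplifier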

Since Google Colab already has PyTorch installed, we don't need to install it separately. During this work, my Colab runtime had the following versions of Python and PyTorch-
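You can confirm the versions in your own runtime with a quick check like this-

import sys
import torch

print(sys.version)        # Python version of the Colab runtime
print(torch.__version__)  # PyTorch version pre-installed in Colab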


Next, we will download the official code repo of YOLOv7 as below-
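The commands for that are along these lines (using the official WongKinYiu/yolov7 repository)-

! git clone https://github.com/WongKinYiu/yolov7.git
%cd yolov7

After cloning, place your best.pt weight file and your test_image.jpg inside the yolov7 folder so the following commands can find them.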

Once the code base is downloaded, let's run a test image through our trained YOLOv7 model so that at the end of this post we can compare the result with the exported ONNX model's inference. The command to do the inference is as below-
! python detect.py --weights best.pt --img-size 1280 --source test_image.jpg

Here, just replace best.pt with your custom YOLOv7 weight file and test_image.jpg with your test image. After the above command runs successfully, the resulting image is saved at the path runs/detect/exp/test_image.jpg. Let's check the outcome by opening the image-
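In Colab, the saved result can be displayed with a couple of lines like these-

from IPython.display import Image, display

# show the detection result saved by detect.py
display(Image(filename='runs/detect/exp/test_image.jpg'))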


And the result looks like this-


So far, everything looks fine. Let's move on to the export. For this purpose, the official code repo that we have already downloaded ships a script named export.py.
We will use it to export the model into ONNX format as below-
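A typical invocation looks like the following; the flags follow the options documented in export.py (--end2end embeds the non-max-suppression step directly in the exported graph) and the values here are illustrative, so tune them for your own model-

! python export.py --weights best.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 1280 1280 --max-wh 1280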

You can check export.py to understand what each argument does. After the above command runs successfully, you will see output like the following in the console-

Now, the next important step is to run inference with this exported model. Let's understand this with the code snippets below, one by one-
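A minimal version of this setup, assuming the export produced a file named best.onnx and that onnxruntime and opencv-python are available, is-

import cv2
import numpy as np
import onnxruntime as ort

# prefer the GPU execution provider when a GPU is visible, otherwise use the CPU
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if ort.get_device() == 'GPU' else ['CPUExecutionProvider']

# load the exported model into an inference session and read the same test image
session = ort.InferenceSession('best.onnx', providers=providers)
img = cv2.imread('test_image.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR, the model expects RGB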

In the above code snippet, we have imported the libraries required for inference, selected CPU or GPU as the execution provider, loaded the ONNX model into the main class of the ONNX runtime as a session, and read our test image. In the next code snippet (a sketch follows the list below), we will create a letterbox() function that does the following things with the given input image-
  1. Resize and pad the image while meeting stride-multiple constraints
  2. Compute the scale ratio
  3. Compute the padding
  4. Divide the padding into 2 sides
  5. Resize and find the top, bottom, left, and right coordinates
  6. Add the border and return the image with the scale ratio and the padding sides
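A sketch of such a letterbox() function, simplified from the usual YOLO pre-processing helper (the default target size and the grey padding colour below are assumptions), is-

def letterbox(im, new_shape=(1280, 1280), color=(114, 114, 114)):
    # 1. resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape as (height, width)
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # 2. scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # 3. compute padding
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]

    # 4. divide padding into 2 sides
    dw /= 2
    dh /= 2

    # 5. resize and find the top, bottom, left and right coordinates
    if shape[::-1] != new_unpad:
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))

    # 6. add border and return the image with the scale ratio and the padding sides
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)
    return im, r, (dw, dh)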

Next, we will create a list of our target object names (in my case it is just wheat) and a colour scheme for the bounding boxes. Then we will apply the above-created letterbox() function to the image, convert the image to a NumPy array, and normalize it for further processing. Finally, we will get the output names of the model and build its input as below-
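A sketch of this step, assuming a single class named wheat as in my dataset, is-

import random

# class names and a random colour per class - replace with your own classes
names = ['wheat']
colors = {name: [random.randint(0, 255) for _ in range(3)] for name in names}

# letterbox the image, move channels first, add a batch dimension and normalise to 0-1
image, ratio, dwdh = letterbox(img.copy())
image = image.transpose((2, 0, 1))
image = np.expand_dims(image, 0)
image = np.ascontiguousarray(image).astype(np.float32)
im = image / 255.0

# input and output names of the exported graph, plus the feed dictionary
outname = [o.name for o in session.get_outputs()]
inname = [i.name for i in session.get_inputs()]
inp = {inname[0]: im}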

Now we are ready to make the inference. First, we will fetch the output of the ONNX inference session, then iterate over it to get the bounding box coordinates, confidence score, and class id of each detection, and finally use this information to draw the bounding boxes with the class name on the image as below-
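A sketch of that loop follows; it assumes the model was exported with --end2end, so every output row has the form [batch_id, x0, y0, x1, y1, class_id, score] (the output file name is just illustrative)-

# run the session on the prepared input
outputs = session.run(outname, inp)[0]

ori_img = img.copy()
for batch_id, x0, y0, x1, y1, cls_id, score in outputs:
    # undo the letterbox scaling and padding so the box fits the original image
    box = np.array([x0, y0, x1, y1])
    box -= np.array(dwdh * 2)
    box /= ratio
    box = box.round().astype(np.int32).tolist()

    name = names[int(cls_id)]
    label = f'{name} {round(float(score), 3)}'
    color = colors[name]

    cv2.rectangle(ori_img, tuple(box[:2]), tuple(box[2:]), color, 2)
    cv2.putText(ori_img, label, (box[0], box[1] - 2),
                cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 255, 255), thickness=2)

# the image is in RGB order, so convert back to BGR before saving with OpenCV
cv2.imwrite('onnx_result.jpg', cv2.cvtColor(ori_img, cv2.COLOR_RGB2BGR))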

That's it! Our inference script for the ONNX file is ready, and once executed it showed me the output image below-

If you compare this image with the previous output image, you will find that although the confidence scores of the detected class have dropped slightly, the detections of the exported model are the same. It means our exported ONNX model is working as expected. One more point to observe in our inference script is that we have not imported the PyTorch library which we used to train the custom dataset with YOLOv7. That is the interoperability of the ONNX format. Now our production deployment team can choose any ML framework they are comfortable with, and using this unified format (ONNX), the exported model can be deployed anywhere. You can find the complete Colab Notebook here.

So why wait☝ Just copy the notebook, export your custom YOLOv7 model to ONNX format, and deploy it to production💥. That's it for today, guys. In my next post, I will share how to export the model into TensorRT format. Till then👉 Go chase your dreams, have an awesome day, make every second count, and see you later in my next post😇

















