
How to use Redis on Windows along with your Keras REST API?


Another post starts with you beautiful people!
It was quite overwhelming to see almost 10,000 views on my last post where we learned to build a simple Keras + Deep Learning REST API. That post is quite important if you want to deploy your model as a REST API in a development environment so that you can easily demonstrate it to your tester team or business team. One drawback with this approach is that it was intended for single-threaded use only, with no concurrent requests, but in a production environment your model will be used by many users at the same time, so it is important to efficiently batch process incoming inference requests. Many thanks to Adrian, who has shared his knowledge of using Redis to solve this problem and make our simple Keras REST API scalable. If you are a macOS/Linux user then you can follow Adrian's post, but if you are a Windows user like me then you must follow this post, because officially Redis does not support Windows.

Here we have chosen Redis along with message queuing/message brokering paradigms to efficiently batch process incoming inference requests. For this exercise Redis will act as our temporary data store on the server. Images may come into the server via a variety of methods such as cURL, a Python script, or even a mobile app. Our first step is to download and start the Redis server on our Windows machine. Follow the below steps for this purpose-
  1. Download the latest Redis release from the Redis for Windows link
  2. The downloaded file will be a zip archive, e.g. Redis-x64-3.2.100.zip
  3. Create a new folder and extract the downloaded zip file there
  4. Navigate to the extracted folder and open it in cmd as an Administrator
  5. Run the following command: redis-server.exe
  6. A pop-up will ask you to allow access; accept the default. This will start the Redis server
  7. Leave this terminal open to keep the Redis data store running
  8. In another terminal, you can validate that Redis is up and running with the following command: redis-cli.exe ping
  9. In response you will get a PONG back from Redis, which means you’re ready to go
Once you follow the above steps, you will see a screen like the following after starting the Redis server-


Now, open PyCharm IDE and create a new project. I have named my project Scalable-Keras_API. The complete project structure will look like below-

Inside this project create a Python file run_keras_server.py like we did in our last post. In this file we are going to use ResNet pretrained on the ImageNet dataset, but you can easily replace this model with your own models. Let's import the required packages first-
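A rough sketch of the imports this file needs is shown below; the exact keras import paths may differ slightly depending on the Keras/TensorFlow version installed in your environment-

```python
# imports for run_keras_server.py (names follow Adrian's original tutorial;
# adjust the keras import paths to your installed Keras/TensorFlow version)
from keras.applications import ResNet50
from keras.applications import imagenet_utils
from keras.preprocessing.image import img_to_array
from threading import Thread
from PIL import Image
import numpy as np
import base64
import flask
import redis
import uuid
import time
import json
import sys
import io
```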

Since in this exercise we are going to classify images using the ResNet model, we will define some input parameters to control the image dimensions and the server queuing, like below-
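A sketch of these constants is shown below; the names (including the queue name IMAGE_QUEUE) follow Adrian's tutorial, and the values are just sensible defaults that you should tune for your own setup-

```python
# image dimensions and type expected by ResNet50
IMAGE_WIDTH = 224
IMAGE_HEIGHT = 224
IMAGE_CHANS = 3
IMAGE_DTYPE = "float32"

# Redis queue name, batch size and polling intervals (tune for your setup)
IMAGE_QUEUE = "image_queue"
BATCH_SIZE = 32
SERVER_SLEEP = 0.25
CLIENT_SLEEP = 0.25
```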

In the above code cell we have specified that we will pass float32 images to the server, with dimensions of 224 x 224 and 3 channels. Since I am running this server on a CPU machine I am using BATCH_SIZE = 32; if you have GPU(s) on your production system, you must tune BATCH_SIZE for optimal performance. The SERVER_SLEEP and CLIENT_SLEEP constants denote the amount of time the server and client will pause before polling Redis again, respectively. Again, in a production environment you should tune the values of these constants.

Next, to deploy our Keras model as a REST API along with Redis, we will initialize them like below-
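A sketch of this initialization could be as below, assuming Redis is running locally on its default port 6379-

```python
# initialize our Flask application, the Redis server connection and the model
app = flask.Flask(__name__)
db = redis.StrictRedis(host="localhost", port=6379, db=0)
model = None
```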

Here, in the first line we have initialized the Flask application and then connected to the Redis server using its StrictRedis() class, passing the host name and the default port number for Redis. You can find details about the various Redis classes in the redis docs. Next, we have initialized our model as a global variable.

Next, we will define some helper functions to serialize and deserialize the input images. This includes logic such as encoding the input NumPy array as a base64/utf-8 string and reshaping it back on the way out. Always remember that in order to store our images in Redis, they need to be serialized. Since images are just NumPy arrays, we can utilize base64 encoding to serialize them. Using base64 encoding also has the added benefit of allowing us to use JSON to store additional attributes with the image. Similarly, we need to deserialize our image prior to passing it through our model -
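A minimal sketch of these two helpers (named base64_encode_image() and base64_decode_image(), following Adrian's tutorial) could look like this-

```python
def base64_encode_image(a):
    # serialize a C-contiguous NumPy array as a base64/utf-8 string
    return base64.b64encode(a).decode("utf-8")

def base64_decode_image(a, dtype, shape):
    # in Python 3 the queued value is a str, so convert it to bytes first
    if sys.version_info.major == 3:
        a = bytes(a, encoding="utf-8")

    # decode the base64 string back into a NumPy array of the
    # given dtype, then restore its original shape
    a = np.frombuffer(base64.decodebytes(a), dtype=dtype)
    return a.reshape(shape)
```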

Next we have defined a prepare_image function which pre-processes our input image for classification using the ResNet50 implementation in Keras. If you are using your own model then you can utilize this function to perform any required pre-processing, scaling, or normalization-
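A sketch of prepare_image() could be as below, using the standard Keras helpers img_to_array() and imagenet_utils.preprocess_input()-

```python
def prepare_image(image, target):
    # convert the image to RGB if needed and resize it to the target size
    if image.mode != "RGB":
        image = image.convert("RGB")
    image = image.resize(target)

    # convert to a NumPy array, add the batch dimension and apply the
    # ImageNet-specific preprocessing used by ResNet50
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = imagenet_utils.preprocess_input(image)

    return image
```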


Now we will create another function, classify_process(), where we will write the classification logic. Later we will start this function from our __main__ block. The main objective of this function is to poll for image batches from the Redis server, classify the images, and return the results to the client. The logic will be like below-
  1. Here we are first using the Redis database's lrange() function to get, at most, BATCH_SIZE images from our queue
  2. From there we initialize our imageIDs and batch, and begin looping over the queue
  3. In the loop, we first decode the object and deserialize it into a NumPy array, image
  4. Next, we add the image to the batch and we also append the id of the image to imageIDs
After this we will check if there are any images in the batch; if there are, we will make predictions on the entire batch by passing it through the model. From there, we loop over the imageIDs and the corresponding prediction results. Then we remove the set of images that we just classified from our queue using ltrim(). And finally, we sleep for the set SERVER_SLEEP time and await the next batch of images to classify-
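Putting this together, a sketch of classify_process() along the lines of Adrian's tutorial could look like this (the queue name IMAGE_QUEUE and the helper base64_decode_image() are the ones defined earlier)-

```python
def classify_process():
    # load the pre-trained ResNet50 model once inside this thread
    print("* Loading model...")
    model = ResNet50(weights="imagenet")
    print("* Model loaded")

    while True:
        # grab at most BATCH_SIZE queued images from Redis
        queue = db.lrange(IMAGE_QUEUE, 0, BATCH_SIZE - 1)
        imageIDs = []
        batch = None

        for q in queue:
            # deserialize each queued entry back into a NumPy array
            q = json.loads(q.decode("utf-8"))
            image = base64_decode_image(q["image"], IMAGE_DTYPE,
                (1, IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANS))

            # add the image to the batch and remember its id
            batch = image if batch is None else np.vstack([batch, image])
            imageIDs.append(q["id"])

        if len(imageIDs) > 0:
            # classify the whole batch in a single forward pass
            preds = model.predict(batch)
            results = imagenet_utils.decode_predictions(preds)

            # store the decoded predictions in Redis, keyed by image id
            for (imageID, resultSet) in zip(imageIDs, results):
                output = []
                for (imagenetID, label, prob) in resultSet:
                    output.append({"label": label,
                                   "probability": float(prob)})
                db.set(imageID, json.dumps(output))

            # remove the images we just classified from the queue
            db.ltrim(IMAGE_QUEUE, len(imageIDs), -1)

        # pause before polling Redis again
        time.sleep(SERVER_SLEEP)
```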

Next, we will handle the /predict endpoint of our REST API. We will use the @app.route decorator just above our predict() function to define our endpoint so that Flask knows what function to call. We could easily have another endpoint which uses AlexNet instead of ResNet, or any other model, and we'd define that endpoint and its associated function in a similar way. For our purpose, we just have one endpoint called /predict. Our predict function will handle the POST requests to the server. The goal of this function is to build the JSON data that we'll send back to the client. If the POST data contains an image, we convert the image to PIL/Pillow format and preprocess it -
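The first part of this predict() function could be sketched like below, using the prepare_image() helper and the constants defined earlier-

```python
@app.route("/predict", methods=["POST"])
def predict():
    # the dictionary that will be returned to the client as JSON
    data = {"success": False}

    if flask.request.method == "POST":
        if flask.request.files.get("image"):
            # read the uploaded file into PIL/Pillow format and preprocess it
            image = flask.request.files["image"].read()
            image = Image.open(io.BytesIO(image))
            image = prepare_image(image, (IMAGE_WIDTH, IMAGE_HEIGHT))

            # ensure the NumPy array is in C-contiguous order
            # before we serialize it
            image = image.copy(order="C")
```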

Here we have converted the array to C-contiguous ordering because, performance-wise, accessing memory addresses which are next to each other is very often faster than accessing addresses which are more "spread out" (fetching a value from RAM can entail a number of neighbouring addresses being fetched and cached for the CPU). This means that operations over contiguous arrays will often be quicker. Next, to prevent hash/key conflicts, we have used a UUID. Then we append the id as well as the base64 encoding of the image to the d dictionary. It's very simple to push this JSON data to the Redis db using rpush()-
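Continuing the same predict() function sketch, the UUID generation and the rpush() call to the Redis queue could look like this-

```python
            # generate a unique id for this request to avoid key conflicts,
            # then push the serialized image onto the Redis queue
            k = str(uuid.uuid4())
            d = {"id": k, "image": base64_encode_image(image)}
            db.rpush(IMAGE_QUEUE, json.dumps(d))
```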

Next, we'll loop continuously until the model server returns the output predictions. We start an infinite loop and attempt to get the predictions. From there, if the output contains predictions, we deserialize the results and add them to data, which will be returned to the client. We also delete the result from the db, since we have pulled the results from the database and no longer need to store them there, and break out of the loop. Otherwise, we don't have any predictions yet and we need to sleep and continue to poll. Then we call flask.jsonify() on data and return it to the client-
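The polling loop and the final response of the predict() function sketch could be as below-

```python
            # keep polling Redis until our result is available
            while True:
                output = db.get(k)

                if output is not None:
                    # deserialize the predictions, add them to the response,
                    # delete the key from Redis and stop polling
                    output = output.decode("utf-8")
                    data["predictions"] = json.loads(output)
                    db.delete(k)
                    break

                # no result yet, so sleep before polling again
                time.sleep(CLIENT_SLEEP)

            # indicate that the request was a success
            data["success"] = True

    # return the JSON response to the client
    return flask.jsonify(data)
```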

Lastly, to demo our Keras REST API, we need a __main__ block to actually start the server. This main block will kick off our classify_process thread-
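A sketch of this __main__ block could be as below, starting classify_process() in a daemon thread before launching the Flask development server-

```python
if __name__ == "__main__":
    # start the model server (the classification loop) as a daemon thread
    print("* Starting model service...")
    t = Thread(target=classify_process, args=())
    t.daemon = True
    t.start()

    # start the Flask web server
    print("* Starting web service...")
    app.run()
```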

That's it, we can now test our API. Go to your project folder path and open it in a new terminal. Create and activate a virtual environment like we did in our last post. Then install the required libraries using pip install and run the following command: python run_keras_server.py. You will see a screen like below-

Once you see the Running line you are ready to classify images. For this we will write a new Python file containing our logic to make a request to our REST API. Create a new Python file, simple_request.py, import the requests library first, then define the endpoint URL and the input image with its complete path. Then we will read the image in binary mode and put it into a payload dictionary. The payload is POST'ed to the server with requests.post(). If we get a success message, we can loop over the predictions and print them to the terminal like below-
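A sketch of simple_request.py could be as below; the image path is just a placeholder that you should replace with an image on your own machine-

```python
# simple_request.py
import requests

# endpoint URL and a test image (placeholder path, replace with your own)
KERAS_REST_API_URL = "http://localhost:5000/predict"
IMAGE_PATH = "space_shuttle.jpg"

# read the image in binary mode and build the payload dictionary
image = open(IMAGE_PATH, "rb").read()
payload = {"image": image}

# POST the payload to the server and decode the JSON response
r = requests.post(KERAS_REST_API_URL, files=payload).json()

# if the request succeeded, loop over the predictions and print them
if r["success"]:
    for (i, result) in enumerate(r["predictions"]):
        print("{}. {}: {:.4f}".format(i + 1, result["label"],
            result["probability"]))
else:
    print("Request failed")
```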

Now open up a new terminal and execute the following command: python simple_request.py
Once you run the above script, it will take some time. Meanwhile you can go back to the run_keras_server.py console and see what's going on there. The backend will look like below-

Once you see the 200 response, return to your simple_request.py console and see the result. Our model has successfully classified the image as space shuttle with 98% confidence-

That's it, we have scaled our previous Keras REST API using Redis. This exercise will surely teach you, as it did me, how to deploy your model as a REST API and how to make a request to this API with Python. There is plenty more to learn to make an API truly scalable, but knowing the starting point is the key to success, so don't hesitate to start with the basics. Try this exercise on your machine, replace the model with your own model, deploy it as a REST API and explore other open source solutions too. Till then, go chase your dreams, have an awesome day, make every second count and see you later in my next post.







