
Generative AI: Retrieval Augmented Generation (RAG)


 
Another blog post starts with you beautiful people👦. I hope you have explored my last blog post about 2x faster fine-tuning of the Mistral 7B model on a custom dataset👈. In this blog post, we are going to learn an essential technique in Generative AI: Retrieval Augmented Generation (RAG).

What is RAG?

Retrieval Augmented Generation (RAG) is an innovative approach that melds generative models, like transformers, with a retrieval mechanism. By tapping into existing knowledge, RAG retrieves pertinent information from expansive external datasets or knowledge bases to enhance the generation process, thereby elevating the model's content relevance and factual accuracy💪. This versatility renders RAG particularly beneficial for tasks demanding the assimilation of external knowledge, such as question answering or content creation.

Upon receiving input, RAG actively searches for relevant documents from specified sources (e.g., Wikipedia, company knowledge base, etc.). It then seamlessly amalgamates this retrieved data with the input, offering a comprehensive output complete with references. This unique structure enables RAG to effortlessly integrate new and evolving information without the need to retrain the entire model from scratch💥.

RAG vs Fine-Tuning?

RAG augments the prompt with external data, while fine-tuning incorporates the additional knowledge into the model itself. RAG requires less labeled data and fewer resources than fine-tuning, making it less costly; much of a RAG system's expense goes into setting up the embedding and retrieval infrastructure. In contrast, fine-tuning requires more labeled data, significant computational resources, and state-of-the-art hardware like high-performance GPUs or TPUs. As a result, the overall cost of fine-tuning is relatively higher than RAG💸.

RAG Architecture?

A standard RAG application comprises two primary elements:

A. Indexing: a data ingestion pipeline that sources and indexes data, typically conducted offline. The indexing sequence is as follows-

1. Load: First we need to load our data. This is done with DocumentLoaders.

2. Split: Text splitters break large Documents into smaller chunks. This is useful both for indexing data and for passing it into a model, since large chunks are harder to search over and won’t fit in a model’s finite context window.

3. Store: We need somewhere to store and index our splits so that they can later be searched. This is often done using a VectorStore and Embeddings model.

B. Retrieval and generation: the operational RAG chain, which receives user queries at runtime, retrieves pertinent data from the index, and passes it to the model. The sequence is as follows-

1. Retrieve: Given a user input, relevant splits are retrieved from storage using a Retriever.

2. Generate: A ChatModel / LLM produces an answer using a prompt that includes the question and the retrieved data.

Let's build something with RAG!

Now we are ready to use RAG with a large language model, Mistral 7B, which we also used in the last blog. As of 18 January 2024, the hot topic in India is the new temple of Lord Ram in Ayodhya. If we ask an LLM any question related to this, it cannot reply with an accurate answer since it was not trained on current news. To prove this point, we can load the pre-trained LLM and ask it a related question like below-

Load the 4-bit Mistral 7B model as we did in my last post-
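Here is a minimal sketch of that loading step with Unsloth; the checkpoint name and max_seq_length below are assumptions, so adjust them to match your own setup-

```python
# Minimal sketch: load Mistral 7B in 4-bit with Unsloth (model name and
# max_seq_length are assumed values, not copied from the original notebook).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # pre-quantized 4-bit checkpoint
    max_seq_length=2048,
    dtype=None,          # auto-detect the best dtype for the GPU
    load_in_4bit=True,   # 4-bit quantization keeps the model within Colab memory
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```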


And I asked a current question to the loaded pre-trained model as below-
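Something along these lines, where the question text is just an example and a GPU runtime is assumed-

```python
# Ask the raw (non-RAG) model a question about a recent event.
question = "What are the dimensions of the newly built Ram Mandir in Ayodhya?"

inputs = tokenizer(question, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```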



And as I expected, the model is unable to give the answer-


I hope you have understood the problem statement.👀 Now we will use RAG to fetch the required information from an external source and teach our LLM about the current topic. As you read above in the RAG architecture section, we will need to store that source somewhere. For this purpose, we are going to use FAISS, but you can use any other vector database like Chroma. Let's install the FAISS library using the pip command in our Colab notebook-
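In a Colab cell, that is simply the following (use faiss-cpu instead if you are not on a GPU runtime)-

```python
!pip install faiss-gpu
```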

Also, install the other required libraries like langchain, langchain-community, unsloth, transformers, and sentence-transformers if you haven't installed them already.
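A single install cell along these lines should cover them; exact versions are up to you, and Unsloth in particular may need the install command recommended in its own repo for your environment-

```python
!pip install langchain langchain-community unsloth transformers sentence-transformers
```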
The next step is to download the data from our source. In my case, it is 'https://srjbtkshetra.org/'. You can replace this with any other site of your choice. To load all text from this webpage into a document format, we will use WebBaseLoader from LangChain. Please note that LangChain supports various types of data loaders; you can refer to this link for all the details-
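A minimal sketch of the loading step; the URL is the one mentioned above-

```python
# Load all text from the webpage into LangChain Document objects.
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://srjbtkshetra.org/")
docs = loader.load()
print(len(docs), docs[0].page_content[:200])  # quick sanity check
```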

The next step is to divide the loaded text into smaller chunks that can fit into our model's context window. For example, GPT-3.5-turbo has a context window of 4,097 tokens and Mistral 7B has about 8,000. If we try to pass in more than the context window allows, the LLM's behavior becomes unpredictable and it suffers severe performance degradation👽. For this purpose, we will use a text splitter as below-
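For instance, using RecursiveCharacterTextSplitter; the chunk size and overlap below are illustrative values-

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = text_splitter.split_documents(docs)  # list of smaller Document chunks
print(f"Created {len(splits)} chunks")
```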

The next step is to encode these chunks of text into embeddings and then index those embeddings in our vector database as below-
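A sketch of that step, assuming the sentence-transformers model named in the next paragraph-

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector_db = FAISS.from_documents(splits, embeddings)  # embed every chunk and build the index
```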

Here, for embeddings, we use the 'sentence-transformers/all-mpnet-base-v2' model from the Hugging Face hub, but you can explore other sentence-transformers models as well from this link. Now we will construct a retriever to fetch the documents from the vector db as below-
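Something like this, where the number of returned chunks is an assumed value-

```python
retriever = vector_db.as_retriever(search_kwargs={"k": 4})  # fetch the top-4 most similar chunks
```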

Next, we will create and load our pipeline for inference as you already read in the last blog-
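A hedged sketch of wrapping the loaded model in a transformers text-generation pipeline and exposing it to LangChain; the generation parameters are assumptions-

```python
from transformers import pipeline
from langchain_community.llms import HuggingFacePipeline

text_gen = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=False,
    return_full_text=False,  # return only the newly generated answer
)
llm = HuggingFacePipeline(pipeline=text_gen)  # LangChain-compatible LLM wrapper
```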

Next, we will create a prompt template to instruct the model, format the output as we need, and chain all of this together into a RAG chain like below-
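A sketch of that chain using LangChain's expression language; the prompt wording is illustrative, not the exact prompt from the notebook-

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

template = """Answer the question based only on the following context:
{context}

Question: {question}
Answer:"""
prompt = PromptTemplate.from_template(template)

def format_docs(docs):
    # Join the retrieved chunks into one context string for the prompt.
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```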

Now our RAG chain is ready to answer questions related to our added source. For example, I asked about the dimensions of the newly built Lord Ram Temple, and the model is now able to provide accurate answers as below-
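Invoking the chain looks like this; the question text is just an example-

```python
answer = rag_chain.invoke("What are the dimensions of the newly built Ram Mandir in Ayodhya?")
print(answer)
```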

Another one-

How cool is this, right guys👏. In this post, we learned about the creation of a PromptTemplate and chain, the usage of RunnablePassthrough, the invocation of the retriever, context integration, and the LLM invocation. The possible use cases of RAG are endless💫. So don't wait. Make a copy of this Colab notebook and start playing with your own data and any source you want to add to the LLM. In the next post, we will learn another useful use case of Gen AI; till then 👉 Go chase your dreams, have an awesome day, make every second count, and see you later in my next post.
















