Building a Conversational RAG with Meta’s Llama 3.3

Dhruv Yadav

In one of my previous stories, I covered a tutorial on building RAG applications. But you know what's better than a simple RAG app? A conversational RAG, that's right!

After writing my previous article, I decided to explore RAG further and, along the way, write a series of articles on what I learned (because why not?). The series will cover many topics; the newest addition is the "Conversational RAG."

If you're not a Medium member, you can read the story through this link: Link

Meta recently released Llama 3.3, and in this article I'll take the new model for a spin to build the application.

Also, the complete code is available here: GitHub

Intuition

I'm no expert teacher, but I'll do my best to give you a clear idea of how a conversational RAG works.

First, the basics: you load the data source, chunk it, generate vectors, and create the vectorstore. On top of that, the app has three main components:

  • History-Aware Retriever
  • Question-Answer Chain
  • The final RAG Chain
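Before we get to those three components, the ingestion step (load, chunk, embed, store) can be sketched in plain Python. Everything below is illustrative and mine, not from the article: the bag-of-words "embedding" is a toy stand-in for a real embedding model, and `VectorStore` is a minimal in-memory version of what a library like FAISS or Chroma would give you.

```python
import math
from collections import Counter

def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into overlapping character chunks."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

def embed(text: str) -> Counter:
    """Toy bag-of-words 'vector'; a real app would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory vectorstore: embed chunks once, rank by similarity."""
    def __init__(self, chunks: list[str]):
        self.chunks = chunks
        self.vectors = [embed(c) for c in chunks]

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(zip(self.chunks, self.vectors),
                        key=lambda cv: cosine(qv, cv[1]), reverse=True)
        return [c for c, _ in ranked[:k]]
```

The shape is the same whatever stack you use: chunk once at ingestion time, embed everything, then answer queries by nearest-neighbour lookup over the stored vectors.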

How does it work?

When you ask the RAG app a question (referred to as “input”), here’s what…
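The three components above can be sketched with stand-in functions. This is my own illustrative plain-Python version, assuming the LangChain-style design the component names suggest (where helpers like `create_history_aware_retriever` and `create_retrieval_chain` wire this up for you); `llm` and `retriever` are injected stubs here, not real APIs.

```python
def rewrite_with_history(llm, chat_history: list[tuple[str, str]], question: str) -> str:
    """History-aware step: if the question leans on earlier turns
    ('what about its context window?'), ask the LLM to rewrite it
    as a standalone question; otherwise pass it through unchanged."""
    if not chat_history:
        return question
    transcript = "\n".join(f"{role}: {msg}" for role, msg in chat_history)
    return llm(f"Given this conversation:\n{transcript}\n"
               f"Rewrite the follow-up as a standalone question: {question}")

def answer_with_docs(llm, docs: list[str], question: str) -> str:
    """Question-answer step: stuff the retrieved chunks into the prompt."""
    context = "\n\n".join(docs)
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

def conversational_rag(llm, retriever, chat_history, question):
    """Final chain: rewrite -> retrieve -> answer, then extend the history."""
    standalone = rewrite_with_history(llm, chat_history, question)
    docs = retriever(standalone)
    answer = answer_with_docs(llm, docs, standalone)
    chat_history.append(("user", question))
    chat_history.append(("assistant", answer))
    return answer
```

The key difference from a plain RAG is that retrieval runs on the *rewritten* question, so a follow-up like "who released it?" still pulls the right chunks even though it mentions no topic by name.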
