LangChain Memory for Beginners: Make Your Chatbot Remember Conversations

By Kritim Yantra
Apr 23, 2025

Have you ever chatted with a bot that forgets everything the moment you hit "send"?

It’s like talking to a goldfish. 🐟

But what if you want your chatbot to remember your name, your last question, or the topic you're discussing?

That’s where LangChain Memory comes in. In this blog, we’ll break down:

  • What LangChain is
  • What “Memory” means in LLMs
  • How LangChain Memory works
  • Step-by-step example to build your own chatbot with memory

Let’s get into it 👇


🧠 What Is LangChain?

LangChain is an open-source framework that makes it easier to build apps on top of LLMs (like OpenAI's GPT models or Anthropic's Claude).

You can think of it as a set of tools to:

✅ Connect your chatbot to custom data (like PDFs, websites)
✅ Give it tools and actions (search, calculators, APIs)
✅ Add memory and logic to conversations

LangChain works with models from OpenAI, Anthropic, Cohere, HuggingFace, and more.


🧠 What Is "Memory" in LLMs?

By default, LLMs don’t remember your past messages unless you send them again in every request.

Without memory, a conversation looks like this:

User: What is Python?
Assistant: Python is a programming language...
User: And who created it?
Assistant: I don’t know what “it” means.

That’s annoying, right?

With Memory, your chatbot can remember previous messages so that future replies make sense in context.
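
To make that concrete, here's a minimal sketch using the openai package directly (v1-style client; gpt-3.5-turbo is just a placeholder model). The only way the model can resolve "it" is if we resend the earlier turns ourselves:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Turn 1: the model sees only this single message
first = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is Python?"}],
)

# Turn 2: to make "it" resolvable, we must resend the whole history by hand
second = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "What is Python?"},
        {"role": "assistant", "content": first.choices[0].message.content},
        {"role": "user", "content": "And who created it?"},
    ],
)
print(second.choices[0].message.content)

LangChain Memory automates exactly this resending step for you.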


🧩 Why Is Memory Important?

  • 🗣️ Better conversations
  • 💬 Personalized responses
  • 🔁 Ongoing context
  • 🧾 Summarized history
  • 🧠 Real "assistant"-like behavior

Think of it like a chat with a real person—they remember what you said before.


🛠️ Types of Memory in LangChain

LangChain offers a few types of memory:

1. ConversationBufferMemory

  • Remembers everything in the conversation
  • The simplest option; great for short chats

2. ConversationSummaryMemory

  • Summarizes conversation as it goes
  • Saves space (useful for long chats)

3. ConversationBufferWindowMemory

  • Remembers only the last few messages
  • Good for temporary context

4. VectorStoreRetrieverMemory

  • Stores knowledge in a vector database
  • Best for FAQs, document search, etc.

In this blog, we’ll use the simplest one: ConversationBufferMemory.
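
For reference, here's a minimal sketch of how the first three are created (VectorStoreRetrieverMemory additionally needs a vector store and retriever, so it's omitted; llm is assumed to be a chat model like the one we set up below, and k=3 is just an example window size):

from langchain.memory import (
    ConversationBufferMemory,
    ConversationSummaryMemory,
    ConversationBufferWindowMemory,
)

buffer_memory = ConversationBufferMemory()           # keeps the full transcript
summary_memory = ConversationSummaryMemory(llm=llm)  # needs an LLM to write summaries
window_memory = ConversationBufferWindowMemory(k=3)  # keeps only the last 3 exchanges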


🧪 Step-by-Step: Build a Chatbot with Memory (Python)

Let’s build a memory-enabled chatbot using LangChain + OpenAI.

🔧 Prerequisites:

Install the packages:

pip install openai langchain python-dotenv

Create a .env file with your OpenAI key:

OPENAI_API_KEY=your_openai_api_key

🧱 Basic Setup

from langchain.chat_models import ChatOpenAI  # on newer LangChain versions: from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from dotenv import load_dotenv

# Load OPENAI_API_KEY from the .env file into the environment
load_dotenv()

# Load the model (temperature controls response randomness)
llm = ChatOpenAI(temperature=0.7)

# Add memory that keeps the full conversation transcript
memory = ConversationBufferMemory()

# Create a conversation chain that feeds the memory into every prompt
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True  # prints the full prompt (including history) on each turn
)

# Chat loop
while True:
    user_input = input("You: ")
    if user_input.lower() in ['exit', 'quit']:
        break
    response = conversation.predict(input=user_input)
    print(f"Bot: {response}")

🎯 Example Output

You: My name is Ajay.
Bot: Nice to meet you, Ajay!

You: What is the capital of France?
Bot: The capital of France is Paris.

You: And what's my name?
Bot: Your name is Ajay.

Amazing! The bot remembers your name from earlier in the chat. 🧠


🔍 How Does It Work Internally?

The memory stores both:

  • Your messages
  • Bot's responses

Every time you send a new message, the bot gets:

History:
User: My name is Ajay.
Bot: Nice to meet you, Ajay!
User: What is the capital of France?

Current Input:
And what's my name?

This injected history is what lets the model stay in context, much like a person following a conversation.
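
You can watch this happen directly: LangChain memory classes expose save_context and load_memory_variables, so a small sketch shows exactly what text gets prepended to your next message:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# Record one exchange by hand (ConversationChain does this after every turn)
memory.save_context(
    {"input": "My name is Ajay."},
    {"output": "Nice to meet you, Ajay!"},
)

# The "history" string below is what gets injected into the next prompt
print(memory.load_memory_variables({}))
# {'history': 'Human: My name is Ajay.\nAI: Nice to meet you, Ajay!'}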


🛡️ Memory Limitations

  • LLMs have a token limit (e.g., 8k–32k tokens)
  • Long chats may get cut off or reset
  • You might need summary-based memory for longer conversations
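
Long before you hit a hard limit, it's worth knowing how big your history is. One rough way to check is counting tokens with the tiktoken package (an extra install: pip install tiktoken; the model name here is an assumption), reusing the memory object from the chatbot above:

import tiktoken

# Tokenizer matching the model (assumed gpt-3.5-turbo here)
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

# The full history text that would be sent with the next message
history_text = memory.load_memory_variables({})["history"]
print(f"History is currently {len(encoding.encode(history_text))} tokens")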

🧰 Upgrade: Use ConversationSummaryMemory

To keep memory without hitting token limits, use:

from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(llm=llm)

It keeps a compressed summary of your chats—great for long-form interactions.
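
To see it in action, plug it into the same ConversationChain from before and peek at memory.buffer, which holds the rolling summary (a sketch; the printed summary is illustrative, not exact output):

conversation = ConversationChain(llm=llm, memory=memory)

conversation.predict(input="My name is Ajay and I love Python.")
conversation.predict(input="I'm learning LangChain this week.")

# The memory holds a rolling summary instead of the raw transcript
print(memory.buffer)
# e.g. "The human introduces themselves as Ajay, who loves Python and is learning LangChain."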


🧠 Real-World Use Cases

Use Case             | Memory Needed?
---------------------|----------------------------
Customer support bot | ✅ Yes
Educational tutor    | ✅ Yes
Q&A from PDF         | ❌ Not required (stateless)
AI diary assistant   | ✅ Definitely

Memory = human-like = more useful assistant.


🚀 What’s Next?

Now that you’ve learned how to add memory, here’s what you can explore next:

  1. 🧾 Use custom PDFs or websites as context (LangChain + document loaders)
  2. 🧮 Add tools to your chatbot (calculator, web search, code execution)
  3. 📦 Store conversation history in a database
  4. 💬 Use UI tools like Streamlit or Gradio for chat interfaces

✅ Conclusion

In this blog, you learned:

  • What memory means in LLMs
  • Why LangChain makes building AI apps easier
  • How to create a chatbot with memory using just a few lines of Python

And now you’re one step closer to building your own personal AI assistant!

Tags: Python, AI, Prompts, LLM
