Local LLM Chatbot

Just Built My Own Local LLM PDF Chatbot

After a lot of late nights and trial and error (and yes, quite a few frustrating errors), I finally created something meaningful: a local chatbot that can read PDFs and answer your questions intelligently, without relying on the cloud.

What It Does:

  • Upload a PDF file
  • Ask your question in plain English
  • It retrieves the relevant passages and generates an answer with a local LLM
  • All offline – runs directly on your machine! (minimal UI sketch below)
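
Here's roughly how that front end looks in Streamlit. This is a minimal sketch, not the repo's exact code; answer_question is a hypothetical placeholder for the retrieval pipeline (sketched under the tech stack below):

```python
import streamlit as st

st.title("Local LLM PDF Chatbot")

pdf = st.file_uploader("Upload a PDF", type="pdf")
question = st.text_input("Ask a question in plain English")

if pdf and question:
    # answer_question is a placeholder for the retrieval pipeline
    # shown in the next sketch below
    answer = answer_question(pdf, question)
    st.write(answer)
```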

Tech Stack I Used:

  • Python for scripting
  • Streamlit for the web interface
  • LangChain for chaining the logic
  • Ollama for running LLMs locally
  • ChromaDB for vector storage
  • sentence-transformers for embeddings (initially tried OpenAIEmbeddings, later switched to stay fully local)
  • tinyllama model (ran best on my system)
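
To show how these pieces fit together, here's a minimal sketch of the pipeline — again, not the repo's exact code. Import paths vary across LangChain versions (this targets langchain-community), and the file name, chunk sizes, and embedding model are illustrative:

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA

# 1. Load the PDF and split it into overlapping chunks
docs = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# 2. Embed the chunks locally and index them in ChromaDB
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
store = Chroma.from_documents(chunks, embeddings)

# 3. Answer questions with a local model served through Ollama
llm = Ollama(model="tinyllama")
qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
print(qa.invoke({"query": "What is this document about?"})["result"])
```

Streamlit just wraps this flow behind the upload-and-ask UI shown earlier.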

My Setup & Challenges:

I’m running this on a Mac mini M1 (8 GB RAM).

  • tinyllama worked smoothly
  • Mistral made the system crawl
  • Didn’t try heavier models (e.g., Mixtral, LLaMA 3) due to hardware limits
  • Also got tripped up choosing between OpenAIEmbeddings and HuggingFaceEmbeddings
    (hint: for a fully local setup, HuggingFace is the better route; see the snippet after this list)
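
For anyone hitting the same confusion, the swap itself is tiny. A hedged sketch (exact import paths depend on your LangChain version):

```python
# OpenAIEmbeddings calls a paid cloud API and needs OPENAI_API_KEY:
# from langchain_openai import OpenAIEmbeddings
# embeddings = OpenAIEmbeddings()

# HuggingFaceEmbeddings runs sentence-transformers on your own machine;
# fully offline once the model has been downloaded:
from langchain_community.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
```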

GitHub Repo:
https://github.com/mohdintsar/local-llm-pdf-chatbot

If you’re into open-source AI or just want to learn how local LLMs work — feel free to explore, clone, fork, or contribute!
And yes, a ⭐ would make my day!

#LocalLLM #PDFChatbot #Ollama #LangChain #ChromaDB #Python #GenAI #OpenSource #Techaiblog #AItools #LinkedInDev