Airgapped Offline Retrieval Augmented Generation (RAG)

In Collaboration With
Chroma
Streamlit

Course Outline

Set up a fully offline Retrieval-Augmented Generation (RAG) solution using open-source models and Docker. Learn to deploy a local model-serving pipeline with ChromaDB for vector storage, all wrapped in a Streamlit app.
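A fully local stack like this is often wired together with Docker Compose. The sketch below is illustrative only — the image names, ports, and volume paths are assumptions, not the course's exact configuration:

```yaml
# Hypothetical docker-compose.yml for an airgapped RAG stack.
# Pre-pull all images and model weights before disconnecting from the network.
services:
  llm:
    image: ollama/ollama                  # local model server (e.g. serving a Llama model)
    ports: ["11434:11434"]
    volumes: ["./models:/root/.ollama"]   # pre-downloaded weights for offline use
  chroma:
    image: chromadb/chroma                # vector store for document embeddings
    ports: ["8000:8000"]
    volumes: ["./chroma-data:/chroma/chroma"]
  app:
    build: ./app                          # Streamlit UI talking to llm and chroma
    ports: ["8501:8501"]
    depends_on: [llm, chroma]
```

Because every service runs from a local image and local volumes, the stack keeps working with no internet connection at all.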

Learning Outcomes

  • Learn to deploy and serve open-source LLMs locally using Docker containers
  • Set up a fully offline RAG pipeline with Meta's open-source Llama models
  • Understand the challenges of building and deploying an offline AI system

Who Is This Course For?

  • Anyone looking to deploy open-source models locally or in closed-network environments
  • Engineers and architects building secure RAG solutions
  • Engineers looking to familiarize themselves with open-source models

Pre-requisites

  • Intermediate working experience with language models or machine learning models
  • Fundamental understanding of RAG
  • Prior experience with Docker or any cloud deployment (we will recap some of the basics)
Level: Advanced
Duration: 2 hours
Showcasing
Chroma
Chapters: 8
Certificate of Completion awarded on finishing the course
Why Enroll?
In this project, you will learn to build a fully offline RAG pipeline using open-source models. By deploying everything locally using Docker, you’ll set up model serving, vector storage with ChromaDB, and a user interface with Streamlit to run RAG workflows seamlessly. This project will help engineers who want to build self-contained AI systems, providing a robust foundation for handling sensitive data or use cases requiring offline capabilities.
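The retrieval step at the heart of such a pipeline can be sketched in plain Python. Everything here is an illustrative stand-in: the document list, the toy word-count "embedding", and the cosine scoring replace what the course builds with ChromaDB and a local embedding model.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" -- a real offline pipeline would use a
    # local embedding model and store vectors in ChromaDB instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in document store; in the course this lives in a ChromaDB collection.
docs = [
    "ChromaDB stores document embeddings for similarity search.",
    "Streamlit builds simple web interfaces in pure Python.",
    "Docker packages the model server for offline deployment.",
]

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    return sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)[:k]

def build_prompt(query):
    # Stuff the retrieved context into the prompt sent to the local LLM.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does ChromaDB store embeddings?"))
```

The same shape — embed, retrieve by similarity, assemble a prompt — carries over directly once the word counts are swapped for real embeddings and the list for a vector store.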