• Developed a FastAPI backend that lets users upload PDF or text files, extracts their content, and
answers questions over it using OpenAI GPT-4 and LangChain workflows.
• Integrated Pinecone for embedding storage and Supabase for metadata, automating the document
ingestion, chunking, and retrieval pipeline and improving contextual response accuracy by 60%.
• Containerized the backend using Docker and deployed it on Render with GitHub Actions–based
CI/CD for seamless version control and scalability.
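The chunk-and-retrieve flow described above can be sketched without the external services; here `embed` is a hypothetical stand-in for the real embedding model and Pinecone lookup, implemented as a toy bag-of-words embedder so the pipeline runs as-is:

```python
import math
import re


def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (the ingestion step)."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy deterministic bag-of-words embedding; a stand-in for a real model."""
    vec = [0.0] * dim
    for token in re.findall(r"\w+", text.lower()):
        vec[sum(token.encode()) % dim] += 1.0  # stable bucket per token
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by cosine similarity to the query (the vector-store lookup)."""
    q = embed(query)
    scored = sorted(
        chunks,
        key=lambda c: -sum(a * b for a, b in zip(q, embed(c))),
    )
    return scored[:k]
```

In the real pipeline, `embed` would call the embedding API and `retrieve` would query the Pinecone index; the chunking and ranking logic stays the same.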
• Built an interactive AI assistant using FastAPI that combines OpenAI Whisper for speech-to-text,
GPT-4 for reasoning, and ElevenLabs TTS for generating natural spoken responses.
• Integrated LangChain for contextual prompt flow management and Chroma DB for semantic
vector storage, enabling low-latency conversational retrieval across voice queries.
• Deployed the complete pipeline on Render using Docker and implemented CI workflows for
continuous delivery and version tracking.
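The speech-to-text → reasoning → text-to-speech flow above is a three-stage pipeline; this sketch wires the stages through plain callables, with the constructor arguments as hypothetical stand-ins for the Whisper, GPT-4, and ElevenLabs calls:

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class VoiceAssistant:
    """Orchestrates one voice turn: audio in -> transcript -> reply -> audio out."""
    stt: Callable[[bytes], str]          # e.g. Whisper transcription
    llm: Callable[[list[dict]], str]     # e.g. GPT-4 chat completion
    tts: Callable[[str], bytes]          # e.g. ElevenLabs synthesis
    history: list[dict] = field(default_factory=list)  # rolling chat context

    def handle_turn(self, audio: bytes) -> tuple[str, bytes]:
        transcript = self.stt(audio)
        self.history.append({"role": "user", "content": transcript})
        reply = self.llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply, self.tts(reply)
```

Keeping the three services behind callables makes the turn logic testable with stubs and lets any stage be swapped without touching the orchestration.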
• Built a simple RAG pipeline using LangChain, the Chroma vector DB, Ollama, and Hugging Face embedding models (GitHub).
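The core of such a pipeline is the "stuffing" step that grounds the model in retrieved text. A minimal sketch, with `retriever` and `generate` as hypothetical stand-ins for the Chroma query and the Ollama call:

```python
def build_rag_prompt(question: str, context_chunks: list[str]) -> str:
    """Stuff retrieved chunks into a single grounded prompt for the LLM."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(context_chunks))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


def answer(question: str, retriever, generate) -> str:
    """One RAG turn: retrieve relevant chunks, build the prompt, generate."""
    chunks = retriever(question)
    return generate(build_rag_prompt(question, chunks))
```

With LangChain this corresponds to a retriever-plus-prompt chain; writing it out by hand makes the data flow explicit.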