# MedMind: AI Backend API (LangServe) 🤖
This repository contains the AI Backend API for the MedMind project. It is a high-performance, asynchronous service built with FastAPI and LangServe, designed to host and expose specialized LLM (Large Language Model) chains for the frontend application to consume.
## 🧠 AI Capabilities (LangServe Endpoints)
The API utilizes LangChain to orchestrate specialized AI chains, primarily using Ollama for local, open-source LLM inference (e.g., MedGemma).
| Path | Description | Chain Type | Input Key |
|---|---|---|---|
| `/chat` | General medical chatbot with conversational memory. | `chat_chain_with_memory` | `query` |
| `/symptom_checker` | Analyzes symptoms to provide preliminary health information. | `medgemma_symptoms_chain` | `query` |
| `/lab_report_analysis` | Interprets and explains medical lab report results. | `medgemma_lab_report_chain` | Custom (`LabReportInput`) |
| `/mental_health_support` | Empathetic and supportive conversational agent with memory. | `mental_health_chain_with_memory` | `query` |
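For each path above, LangServe automatically exposes `/invoke`, `/batch`, and `/stream` sub-routes. Below is a minimal sketch of calling the symptom checker over plain HTTP; the example symptoms are illustrative, and `requests` is used here only for the demo (it is not a project dependency):

```python
import requests

# LangServe expects the chain input nested under "input";
# "query" is the input key listed in the table above.
resp = requests.post(
    "http://localhost:8000/symptom_checker/invoke",
    json={"input": {"query": "Persistent headache and mild fever for two days."}},
)
resp.raise_for_status()

# The chain's result is returned under the "output" key.
print(resp.json()["output"])
```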
## 🛠️ Tech Stack
- Package Manager: uv (Astral)
- API Framework: FastAPI
- LLM Orchestration: LangChain and LangServe
- LLM Inference: Ollama (local server)
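For context on how these pieces fit together: LangServe chains are mounted onto the FastAPI app with `add_routes`. The sketch below is illustrative, not the actual `main.py` — the prompt text and chain body are assumptions; only the route path, input key, and chain name come from the capabilities table above.

```python
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama
from langserve import add_routes

app = FastAPI(title="MedMind AI Backend")

# Local Ollama model (assumes it was pulled as described under Prerequisites).
llm = ChatOllama(model="medgemma")

# Illustrative prompt; the real chain's prompt will differ.
prompt = ChatPromptTemplate.from_template(
    "You are a careful medical assistant. Analyze these symptoms: {query}"
)
medgemma_symptoms_chain = prompt | llm

# Exposes /symptom_checker/invoke, /batch, /stream, and a /playground UI.
add_routes(app, medgemma_symptoms_chain, path="/symptom_checker")
```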
## 🔗 Project Dependency (Frontend Web App)
This API is the intelligence core for the MedMind web application. It is crucial that this service is running before starting the frontend.
- Frontend Repository: https://github.com/smriti2805/MedMind
## 🚀 Setup & Installation
### Prerequisites
- **uv**: This project uses `uv` for fast package management.
- **Ollama**: Must be installed and running on your system to serve the LLMs. A quick way to verify it is running is sketched after this list.
  - [Download Ollama](https://ollama.com)
  - Pull the required model (e.g., `ollama pull medgemma` or `ollama pull gemma:2b`).
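Ollama serves a local HTTP API on port 11434 by default, so you can sanity-check it before launching the backend; a small sketch:

```python
import requests

# /api/tags lists the locally pulled models on Ollama's default port.
try:
    tags = requests.get("http://localhost:11434/api/tags", timeout=2).json()
    print("Ollama is up; available models:", [m["name"] for m in tags["models"]])
except requests.exceptions.ConnectionError:
    print("Ollama is not reachable; start it with `ollama serve`.")
```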
### Installation Steps
- **Clone this repository:**

  ```bash
  git clone https://github.com/kishandev2509/MedMind---Backend
  cd MedMind---Backend
  ```

- **Sync Dependencies:**
  Use `uv` to automatically create the environment and install dependencies:

  ```bash
  uv sync
  ```

- **Run the Server:**
  Start the backend service using `uv`:

  ```bash
  uv run main.py
  ```
  The `lifespan` function will automatically check for and attempt to start the Ollama server before adding the LangServe routes.

  - API URL: `http://localhost:8000`
  - Interactive Docs: `http://localhost:8000/docs`
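A rough sketch of what such a `lifespan` hook can look like (illustrative only; the real `main.py` may differ, and the `httpx` health check and `subprocess` fallback here are assumptions):

```python
import subprocess
from contextlib import asynccontextmanager

import httpx
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Probe Ollama's default port; if it's down, try to launch it.
    try:
        async with httpx.AsyncClient() as client:
            await client.get("http://localhost:11434", timeout=2)
    except httpx.ConnectError:
        subprocess.Popen(["ollama", "serve"])  # start Ollama in the background
    yield  # the app handles requests between startup and shutdown

app = FastAPI(lifespan=lifespan)
```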
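Once the server is up, LangServe's `RemoteRunnable` client lets you call an endpoint as if it were a local LangChain runnable (the example query is illustrative):

```python
from langserve import RemoteRunnable

# Point the client at one of the routes from the capabilities table.
chat = RemoteRunnable("http://localhost:8000/chat/")

# .invoke() posts to /chat/invoke; "query" is the route's input key.
print(chat.invoke({"query": "What are common causes of fatigue?"}))
```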