Start a conversation
Upload documents and ask questions to extract insights.
Upload any document. Ask any question. Get precise answers powered by a local LLM and neural search. No data leaves your machine.
Semantic vector search finds contextually relevant information, not just keyword matches.
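A toy sketch of what "contextually relevant, not just keyword matches" means: passages and the query are compared as embedding vectors, so a passage can rank first without sharing a single keyword with the question. The tiny hand-made vectors below stand in for real neural embeddings and are not the product's implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings; dimensions roughly mean (finance, weather, sports).
passages = {
    "Quarterly revenue grew 12%":      [0.9, 0.1, 0.0],
    "Heavy rain is expected tomorrow": [0.0, 0.9, 0.1],
    "The budget deficit widened":      [0.8, 0.2, 0.0],
}
query_vec = [0.85, 0.1, 0.05]  # embedding of "How did the company's earnings change?"

best = max(passages, key=lambda p: cosine(passages[p], query_vec))
print(best)  # → Quarterly revenue grew 12%
```

The winning passage shares no words with the query; it wins because their embeddings point in nearly the same direction, which is exactly what a keyword search would miss.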
Everything runs locally on your machine. No API calls to external services. Your data stays yours.
Supports PDF, DOCX, DOC, TXT, XLSX, XLS, and CSV files out of the box with intelligent parsing.
Retrieval-Augmented Generation ensures answers are always grounded in your actual documents.
Documents are intelligently split and indexed for optimal retrieval and comprehensive answers.
Built on LangChain, ChromaDB, and Ollama. Fully extensible and customizable to your needs.
Drag and drop your documents. We support PDF, DOCX, TXT, Excel, and CSV formats.
Documents are automatically chunked, embedded, and stored in a local vector database.
Ask questions in natural language. The AI retrieves relevant context and generates precise answers.
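The three steps above can be sketched end to end: split the document into overlapping chunks, index them, retrieve the chunk most similar to the question, and ground the prompt in it. This is a minimal stand-in, not the engine's actual code; the real pipeline uses neural embeddings and ChromaDB, while here a bag-of-words count substitutes for the embedding so the flow is visible.

```python
import re

def chunk(text: str, size: int = 45, overlap: int = 10) -> list[str]:
    """Split text into overlapping character chunks so context isn't cut mid-thought."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> dict[str, int]:
    """Stand-in embedding: word counts (the real engine uses neural vectors)."""
    counts: dict[str, int] = {}
    for w in re.findall(r"[a-z]+", text.lower()):
        counts[w] = counts.get(w, 0) + 1
    return counts

def similarity(a: dict[str, int], b: dict[str, int]) -> int:
    """Dot product of two sparse word-count vectors."""
    return sum(a.get(w, 0) * b.get(w, 0) for w in a)

def retrieve(query: str, index: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    qv = embed(query)
    return sorted(index, key=lambda c: similarity(qv, embed(c)), reverse=True)[:k]

doc = "The warranty covers parts for two years. Shipping takes five business days."
index = chunk(doc)                                   # step 2: chunk and index
context = retrieve("How long is the warranty?", index)  # step 3: retrieve context
prompt = f"Answer using only this context:\n{context[0]}\n\nQ: How long is the warranty?"
```

The resulting `prompt` is what gets handed to the local LLM: because the model only sees retrieved chunks, its answer stays grounded in the document rather than in whatever it memorized during training.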
DocsInsight Engine uses Ollama with the Llama 3 8B model by default. You can configure it to use any Ollama-compatible model.
No. Everything runs locally on your machine. The LLM, vector database, and all document processing happen on-device. Zero data leaves your environment.
You need Python 3.11+, Ollama installed and running, and at least 8GB of RAM (16GB recommended). A GPU is optional but improves response speed significantly.
Yes. You can change the model in the backend configuration. Any model supported by Ollama can be used for both embeddings and text generation.
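As a hypothetical illustration of that configuration step (the actual backend setting names may differ), swapping models can be as simple as overriding a default model tag. `llama3:8b` is the real Ollama tag for the default mentioned above; `nomic-embed-text` and `mistral:7b` are examples of other Ollama model tags you might substitute.

```python
# Illustrative defaults; the real backend's setting names may differ.
DEFAULTS = {
    "llm_model": "llama3:8b",               # default generation model (per the FAQ)
    "embedding_model": "nomic-embed-text",  # example Ollama embedding model
}

def configure(overrides: dict) -> dict:
    """Merge user overrides onto the defaults without mutating them."""
    cfg = dict(DEFAULTS)
    cfg.update(overrides)
    return cfg

cfg = configure({"llm_model": "mistral:7b"})  # swap in any Ollama-compatible model
```

Because Ollama serves both generation and embedding models behind the same interface, the same override mechanism covers both roles.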
PDF, DOCX, DOC, XLSX, XLS, CSV, and TXT files are supported natively with intelligent content extraction.