An agentic LLM-powered assistant that converts natural language into SQL queries, executes them safely, and returns conversational answers from a PostgreSQL database.
This project implements a Text-to-SQL system with tool-augmented reasoning, enabling users to interact with a database using plain English.
- Natural language → SQL query generation
- Safe SQL execution (read-only)
- Conversational response synthesis
- Dynamic schema awareness
- Modular workflow using LangGraph
```mermaid
flowchart LR
    A[User Input] --> B[SQL Generator LLM]
    B --> C[DB Executor]
    C --> D[Result Synthesizer LLM]
    D --> E[Final Answer]
    C --> F[(PostgreSQL DB)]
    B --> F
```
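The flow in the diagram can be sketched as a minimal linear pipeline. The node functions and state keys below are hypothetical stand-ins (the real system calls an LLM and a live database), intended only to show how state passes from generator to executor to synthesizer:

```python
from typing import Callable

def generate_sql(state: dict) -> dict:
    # SQL Generator LLM: turn the user's question into a SELECT query.
    # Stubbed here; the real node would prompt the LLM with schema context.
    state["sql"] = "SELECT owner, balance FROM accounts WHERE balance > 10000"
    return state

def execute_sql(state: dict) -> dict:
    # DB Executor: run the query against PostgreSQL (canned rows here).
    state["rows"] = [("alice", 12500), ("bob", 30000)]
    return state

def synthesize_answer(state: dict) -> dict:
    # Result Synthesizer LLM: phrase the rows conversationally.
    names = ", ".join(name for name, _ in state["rows"])
    state["answer"] = f"Accounts above 10,000: {names}."
    return state

PIPELINE: list[Callable[[dict], dict]] = [generate_sql, execute_sql, synthesize_answer]

def run(question: str) -> dict:
    state = {"question": question}
    for node in PIPELINE:
        state = node(state)
    return state
```

In the actual project, LangGraph manages this flow as a stateful graph rather than a plain list of functions.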
This system follows a structured agentic pipeline:

- **Text-to-SQL Agent**: generates SQL queries from user input
- **Tool Augmentation**: uses
  - Database schema inspection
  - SQL execution tools

**SQL Generator**
- Converts user queries into SQL
- Uses schema context for accuracy

**DB Executor**
- Executes SQL queries
- Restricts to `SELECT` only

**Schema Inspector**
- Retrieves:
  - Tables
  - Columns
  - Relationships

**Result Synthesizer**
- Converts SQL results into conversational responses

**Orchestration**
- Managed via LangGraph
- Controls execution flow and state
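Schema inspection of the kind described above can be done with SQLAlchemy's `inspect()` API. This is a sketch, not the project's actual code; an in-memory SQLite database and a sample `accounts` table stand in for PostgreSQL so the example is self-contained:

```python
from sqlalchemy import create_engine, inspect, text

# In-memory SQLite stands in for the project's PostgreSQL database.
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance REAL)"
    ))

def describe_schema(engine) -> dict[str, list[str]]:
    """Return {table: [column, ...]} to include as context in the LLM prompt."""
    insp = inspect(engine)
    return {
        table: [col["name"] for col in insp.get_columns(table)]
        for table in insp.get_table_names()
    }
```

Feeding this mapping into the SQL generator's prompt is what makes the system "schema-aware".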
- LLM: LLaMA 3.2 (via Ollama)
- Orchestration: LangGraph
- Framework: LangChain
- Database: PostgreSQL
- ORM: SQLAlchemy
- Only allows `SELECT` queries (read-only)
- Blocks `INSERT` / `UPDATE` / `DELETE`
- Schema-aware prompting to reduce hallucinations
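A read-only guard along these lines could look like the following. This is a deliberately conservative sketch (it also rejects a `SELECT` that merely contains a blocked keyword in a string literal), not the project's actual validator:

```python
import re

# Keywords that indicate a write or DDL statement.
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT)\b",
    re.IGNORECASE,
)

def is_safe_query(sql: str) -> bool:
    """Allow a single read-only SELECT statement; reject everything else."""
    statements = [s.strip() for s in sql.strip().rstrip(";").split(";") if s.strip()]
    if len(statements) != 1:
        return False  # block stacked statements like "SELECT 1; DROP TABLE x"
    stmt = statements[0]
    if not stmt.upper().startswith("SELECT"):
        return False
    return not FORBIDDEN.search(stmt)
```

Running the generated SQL under a database role with read-only privileges would add defense in depth beyond string checks like this.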
```bash
# Clone the repo
git clone <your-repo-url>
cd <your-repo>

# Install dependencies
pip install -r requirements.txt
```

Create a `.env` file:

```env
DB_USER=your_user
DB_PW=your_password
DB_HOST=localhost
DB_PORT=5432
DB_NAME=your_db
```

Run the assistant:

```bash
python main.py
```

```
You: Show me all customer accounts with balances above 10,000
AI: Here are the accounts with balances above 10,000...
```
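The `.env` variables above can be assembled into a SQLAlchemy connection URL. This is a sketch; the `psycopg2` driver name is an assumption, and the `database_url` helper is hypothetical rather than part of the project:

```python
import os

def database_url(env=os.environ) -> str:
    """Assemble a SQLAlchemy PostgreSQL URL from the .env variables."""
    # psycopg2 is assumed here; any installed PostgreSQL driver works.
    return (
        f"postgresql+psycopg2://{env['DB_USER']}:{env['DB_PW']}"
        f"@{env['DB_HOST']}:{env['DB_PORT']}/{env['DB_NAME']}"
    )
```

The resulting URL would be passed to `sqlalchemy.create_engine()` when the app starts.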
This project can be described as:
- Agentic LLM Workflow
- Tool-Augmented LLM System
- Text-to-SQL Application
- Add memory (conversation history)
- Add retry & error handling
- Add query validation layer
- Integrate vector DB for hybrid retrieval
- Add evaluation benchmarks