I'm Arjun — an AI/ML Engineer and Governance Architect based in Bharat 🇮🇳, building the trust infrastructure that responsible AI demands.

I started as a full-stack engineer, but the deeper I went into deploying AI in production — for fintech, HR, and healthtech — the clearer one thing became: enforcement was missing. Every system was intelligent. None were accountable. That realization became KavachX — India's first real-time AI governance engine that treats compliance not as a checkbox but as an engineering primitive.

What drives me:
```python
class TheIndicSentinel:
    name = "Arjun"
    role = "AI Governance Architect"
    location = "Bharat 🇮🇳"
    mission = "Digital Armor for AI"
    law_stack = [
        "DPDP Act 2023",
        "IT Act 2000",
        "AI Accountability Norms",
    ]
    tech_stack = [
        "Python", "FastAPI",
        "PyTorch", "React",
        "Docker", "PostgreSQL",
    ]

    def philosophy(self):
        return (
            "Not just intelligent."
            " Accountable."
        )
```
KavachX is not a library. It is not a tool. It is a governance infrastructure — the policy firewall that stands between AI models and the real world.
- Every inference intercepted, analyzed across composite risk dimensions, and scored before a token reaches the user.
- Custom-trained classifiers built for the Indian linguistic context — understanding nuance, intent, and domain-specific harm.
- Immutable audit chains with cryptographic logging. Every decision timestamped and surfaced on a live compliance dashboard.
- From headless API middleware to browser-level interception — governance woven into the fabric, not bolted on.
```
┌─────────────────────────────────────────┐
│        AI MODEL (LLM / ML System)       │
└──────────────────┬──────────────────────┘
                   │ Inference Request
                   ▼
╔══════════════════════════════════════════════════════╗
║              K A V A C H X   E N G I N E             ║
║──────────────────────────────────────────────────────║
║ Gate 1: 🔐 Security   │ Prompt Injection / NAEL      ║
║ Gate 2: 🧬 Safety     │ Hate, Bias, Harm Scoring     ║
║ Gate 3: 📋 Compliance │ DPDP PII / IT Act Check      ║
║ Gate 4: 📊 Audit      │ Immutable Log + Dashboard    ║
╚══════════════════════════════════════════════════════╝
                   │ Governed Response
                   ▼
┌─────────────────────────────────────────┐
│           END USER / CLIENT             │
└─────────────────────────────────────────┘
```
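The four-gate flow above can be sketched as a minimal middleware pipeline. Everything below is illustrative: the gate functions, the `GateResult` structure, the worst-case aggregation, and the `0.7` block threshold are assumptions for the sketch, not KavachX's actual API or scoring model.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GateResult:
    gate: str
    risk: float  # 0.0 (safe) .. 1.0 (block)

# Hypothetical stand-ins; real gates would call trained safety classifiers.
def security_gate(text: str) -> GateResult:
    injected = "ignore previous instructions" in text.lower()
    return GateResult("security", 1.0 if injected else 0.1)

def safety_gate(text: str) -> GateResult:
    return GateResult("safety", 0.05)  # stub for hate/bias/harm scoring

def compliance_gate(text: str) -> GateResult:
    # Crude stand-in for PII detection: long digit-bearing strings look risky.
    has_pii = any(ch.isdigit() for ch in text) and len(text) > 11
    return GateResult("compliance", 0.6 if has_pii else 0.0)

GATES: list[Callable[[str], GateResult]] = [security_gate, safety_gate, compliance_gate]
BLOCK_THRESHOLD = 0.7

def govern(response: str) -> tuple[bool, list[GateResult]]:
    """Run every gate, aggregate a composite risk, and decide allow/block."""
    results = [gate(response) for gate in GATES]
    composite = max(r.risk for r in results)  # worst-case aggregation
    return composite < BLOCK_THRESHOLD, results

allowed, audit = govern("The capital of France is Paris.")
# Benign text passes every gate, so `allowed` is True here.
```

The key design point the sketch captures is that gating happens per inference, on the model's output, before anything is returned to the caller.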
A real-time governance platform acting as the middleware layer for AI interactions, combining composite risk scoring, ML-native safety classifiers, and an immutable compliance audit chain.
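An "immutable audit chain" of the kind described can be approximated with hash chaining: each record commits to the previous record's digest, so tampering with any entry breaks verification for everything after it. A minimal sketch follows; the field names and record structure are assumptions for illustration, not KavachX internals.

```python
import hashlib
import json
import time

class AuditChain:
    """Append-only log where each record is hash-linked to its predecessor."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, decision: str, risk: float) -> dict:
        record = {
            "ts": time.time(),
            "decision": decision,
            "risk": risk,
            "prev": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every digest; any edit or reordering fails the check."""
        prev = "0" * 64
        for rec in self.entries:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

chain = AuditChain()
chain.append("ALLOW", 0.12)
chain.append("BLOCK", 0.91)
assert chain.verify()
chain.entries[0]["risk"] = 0.0   # tampering with history...
assert not chain.verify()        # ...is detected downstream
```

A production chain would typically anchor periodic digests in external storage so the log cannot be silently rewritten wholesale, but the linking principle is the same.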
A zero-dependency Python utility and library that identifies and redacts Aadhaar numbers, PAN, UPI IDs, and other sensitive identifiers critical for DPDP compliance.
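A redactor like this largely boils down to a small pattern table. The sketch below is simplified and illustrative, not the library's actual patterns: real Aadhaar validation also involves a Verhoeff checksum (omitted here), and the UPI pattern will match anything shaped like `handle@provider`.

```python
import re

# Simplified patterns for common Indian identifiers; illustrative, not exhaustive.
PATTERNS = {
    "AADHAAR": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # 12 digits, often 4-4-4
    "PAN":     re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),         # e.g. ABCDE1234F
    "UPI":     re.compile(r"\b[\w.\-]{2,}@[a-zA-Z]{2,}\b"),   # handle@psp
}

def redact(text: str) -> str:
    """Replace each detected identifier with a [TYPE-REDACTED] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}-REDACTED]", text)
    return text

print(redact("Pay rahul99@okaxis, PAN ABCDE1234F, Aadhaar 1234 5678 9012"))
```

Keeping detection to the standard library's `re` module is what makes a zero-dependency design feasible; the trade-off is more false positives than a checksum-aware validator would produce.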
A curated index of laws, frameworks, tools, and research papers focused on building responsible and compliant AI within the Indian ecosystem.
An LLM-powered bilingual assistant enabling Indian traders to query inventory, sales, and business health in natural language — Hindi or English.
```
Q2 2026 ──────────────────────────────────────────────────────────►
│
├── 🧬 Bhasha-Shield
│     High-speed safety filter for Indian languages (Indic-NLP)
│     Detecting prompt injection in Hindi, Tamil, Bengali, Telugu
│
├── 🔐 PrivacyConnect SDK
│     Lightweight middleware bridging LLM agents with India Stack
│     Account Aggregator + ONDC in a DPDP-compliant wrapper
│
├── 📦 DPDPA-Masker (OSS Package)
│     Pip-installable PII redaction for Indian data patterns
│     Targets: Aadhaar, PAN, UPI IDs, Indian phone formats
│
└── 📊 Governance-as-Code
      Terraform provider for AI safety policies
      Automate compliance across cloud environments at scale
```
| Initiative | Status | Description |
|---|---|---|
| 🛡️ KavachX v3.6 — ML Hardening | 🟢 Active | Fine-tuning domain-specific safety classifiers for broader AI safety coverage |
| 📋 DPDP Compliance Engine | 🟢 Active | Tightening PII detection with Bharat-native data patterns |
| 🧬 Bhasha-Shield (Indic NLP) | 🟡 Research | Multilingual safety filter for Indian dialect prompt attacks |
| 📦 DPDPA-Masker OSS Package | 🔵 Planned | Public pip package for DPDP-aligned PII redaction |
| ✍️ Technical Blog | 🔵 Planned | Engineering deep-dives on AI governance architecture |
