A beginner-friendly AI Governance & Risk Toolkit — risk register, governance templates, and audit-ready workflows for early-stage AI teams.
A practical framework for turning data analysis into decision policies you can defend. Covers risk modeling, thresholding, exception handling, policy cards, monitoring, and update triggers, using real patterns like abstention rules, reorder points, and fairness-aware benchmarking. Built for “ship it” data science.
Forkit Core is an open source passport layer for AI models and agents with GitHub CI validation, local verification, and Hugging Face-compatible export.
Automated validation toolkit for tabular ML models in finance and regulated domains.
Audit-ready explainability artifacts (reason codes, model cards, drift checks) for scikit-learn investment & credit-risk models.
Data Trust Engineering (DTE) is a vendor-neutral, engineering-first approach to building trusted data systems ready for data, analytics, and AI workloads. This repo hosts the Manifesto, Patterns, and the Trust Dashboard MVP.
EU AI Act governance prototype that turns real medical AI evidence and ISO/IEC 42001 governance scaffolding into reviewable classification, evidence checks, human oversight, and incident-handling paths.
Four Tests Standard (4TS) - Vendor-neutral specification for verifiable AI governance
Regime-based evaluation framework for financial NLP stability. Implements chronological cross-validation, semantic drift quantification via Jensen-Shannon divergence, and multi-faceted robustness profiling. Replicates Sun et al.'s (2025) methodology with a modular, auditable Python codebase.
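The description above quantifies semantic drift with Jensen-Shannon divergence. A minimal sketch of that measure, assuming drift is compared between term-frequency distributions from two time periods (the function name and example data are illustrative, not the repo's API):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric and bounded by ln(2) when using natural log.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize counts to probabilities
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):
        # KL divergence with a small epsilon to avoid log(0)
        return np.sum(a * np.log((a + eps) / (b + eps)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical term-frequency counts from two market regimes
period_a = [40, 30, 20, 10]
period_b = [10, 20, 30, 40]
print(js_divergence(period_a, period_b))  # bounded above by ln(2) ~ 0.693
```

Because the result is symmetric and bounded, it can be tracked over chronological folds as a stable drift signal.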
Enterprise AI Router and Governance System — the AI that governs all AIs
Supporting materials for “Building Governable ML Models with R,” presented at posit::conf 2025
Model governance for insurance pricing — PRA SS1/23 validation reports, model risk management, risk tier scoring
Customizable AI Acceptable Use Policy and governance framework for US enterprises. MIT licensed. Covers compliance, HR, infosec, and legal.
A platform that makes your domain model executable and shared across humans, systems, and AI agents, so nothing is guessed and work stops being redone. One explicit, documented model becomes the single source of truth that cuts governance overhead, removes ambiguity, and lets AI act with accuracy instead of approximation.
Policy-driven model promotion gate evaluator for MLOps release workflows
Prompt-governed CLI automation architecture for AutoFACS: bounded execution, audit-first workflows, human-reviewed operations, and public-facing documentation for CV/ML control-plane design.
Professional AI Security Assurance portfolio demonstrating model supply-chain security, LLM red teaming, static analysis, SBOM validation, risk classification, and governance-aligned AI safety workflows.
Production-style ML decision intelligence platform for insurance claim severity prediction, explainability, monitoring, and governance.
Drift observability architecture for Databricks Delta Lake — detects data and model drift, builds PSI visualizations, and exports governance telemetry for Responsible AI.
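The PSI (Population Stability Index) mentioned above compares a baseline feature distribution against current data. A minimal sketch, assuming quantile binning on the baseline (function name and thresholds are illustrative, not taken from the repo):

```python
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between baseline and current samples.

    Bins are derived from the baseline's quantiles so each baseline
    bin holds roughly equal mass.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)

    e_pct = e_counts / e_counts.sum() + eps  # epsilon avoids log(0)
    a_pct = a_counts / a_counts.sum() + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 2000)
current = rng.normal(1, 1, 2000)  # simulated one-sigma shift
print(psi(baseline, current))
```

A commonly cited rule of thumb treats PSI below 0.1 as stable, 0.1 to 0.25 as moderate drift, and above 0.25 as significant drift worth investigating.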
MLOps compliance dashboard. Model lineage tracking + audit reports for EU AI Act, NIST AI RMF. One command setup.