This extension adds architecture-inspection, layer-surgery, evaluation, and retraining contracts so the Graphbrain-style symbolic/neural framework can operate on real TensorFlow, Keras, PyTorch, ONNX, and related networks inside TritFabric.
Contents:
- `NETWORK_SURGERY_SPEC.md` — canonical architecture for inspection, surgical modification, retraining, and governed promotion
- `TRITFABRIC_RETRAINING_FLOW.md` — execution flow for distributed training/evaluation in TritFabric
- `schemas/` — machine-readable JSON Schemas for network artifacts, probes, patch plans, retrain jobs, eval suites, and reports
- `examples/` — example patch plan and retrain job
- `code/model_surgery_api.py` — Python skeleton for adapters, patch plans, and evaluators
The governing idea is to treat each model as a first-class governed artifact with:
- a stable architecture representation
- a probe report over graph topology, layers, tensors, and trainability
- a surgical patch plan with auditable intent
- a retrain job with data, budget, identity, policy, and twin scenario bindings
- an evaluation report that combines ML metrics with governance, twin, and graph-quality checks
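The five artifacts above can be sketched as plain data records. This is a minimal, illustrative sketch: the class names follow the contracts named later in this README, but every field name here is an assumption, not the canonical layout defined by the JSON Schemas under `schemas/`.

```python
from dataclasses import dataclass

@dataclass
class NetworkArtifact:
    """Stable, governed handle on a concrete model (fields are illustrative)."""
    model_id: str
    framework: str   # e.g. "keras", "pytorch", "onnx"
    version: str
    checksum: str    # content hash so the artifact is tamper-evident

@dataclass
class ArchitectureProbe:
    """Probe report over topology, layers, tensors, and trainability."""
    model_id: str
    layer_names: list
    trainable_layers: list
    tensor_shapes: dict

@dataclass
class LayerSurgeryPlan:
    """Surgical patch plan with an auditable intent."""
    plan_id: str
    model_id: str
    intent: str      # human-readable reason the surgery is being done
    operations: list # e.g. [{"op": "freeze", "layer": "conv1"}]

@dataclass
class RetrainJob:
    """Retrain job binding a plan to data, budget, and identity."""
    plan_id: str
    dataset_ref: str
    budget_steps: int
    identity: str

@dataclass
class ModelEvaluationReport:
    """ML metrics combined with governance/twin/graph-quality checks."""
    model_id: str
    ml_metrics: dict
    governance_checks: dict
    promoted: bool = False
```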
Recommended order of implementation:
1. Implement framework adapters that can emit `NetworkArtifact` and `ArchitectureProbe`.
2. Gate all modifications through a `LayerSurgeryPlan`.
3. Execute retraining only through a `RetrainJob` on TritFabric.
4. Promote or reject a candidate only from a `ModelEvaluationReport`.
5. Keep symbolic Graphbrain/claim/provenance checks in the loop for high-risk or graph-mutating actions.
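One way to picture that ordering is a single gate function that refuses to advance unless each prior stage's artifact is present and consistent. The sketch below uses plain dicts and a hypothetical `evaluate` callback; the real contracts are the schemas in `schemas/`, and the field names here are assumptions.

```python
def run_surgery_pipeline(artifact, probe, plan, job, evaluate):
    """Illustrative enforcement of the recommended order: adapter output
    (artifact + probe) gates the plan, the plan gates the retrain job, and
    promotion is decided only from the evaluation report."""
    # A patch plan must target the artifact that was actually probed.
    if plan.get("model_id") != artifact.get("model_id"):
        raise ValueError("patch plan does not target this artifact")
    # Surgery without an auditable intent is rejected outright.
    if not plan.get("intent"):
        raise ValueError("surgery requires an auditable intent")
    # Every operation must reference a layer the probe actually saw.
    for op in plan.get("operations", []):
        if op.get("layer") not in probe.get("layer_names", []):
            raise ValueError(f"unknown layer in plan: {op.get('layer')!r}")
    # Retraining is only executed through a job bound to the approved plan.
    if job.get("plan_id") != plan.get("plan_id"):
        raise ValueError("retrain job must bind the approved plan")
    # Promotion is decided solely from the evaluation report.
    report = evaluate(job)
    promoted = all(report.get("governance_checks", {}).values())
    return {"promoted": promoted, "report": report}
```

The point of the sketch is that no stage can be skipped: a caller cannot reach `evaluate` without having produced a probe-consistent plan and a plan-bound job first.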
Additional code stub:
- `code/tensorflow_keras_adapter_stub.py` — minimal adapter example for inspecting Keras models and conservative freeze/unfreeze patching
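The core of conservative freeze/unfreeze patching can be shown without TensorFlow installed: Keras layers expose a `.name` attribute and a writable `.trainable` flag, so a duck-typed helper works on them and on test doubles alike. This is a hedged sketch in the spirit of the stub, not the stub's actual code; `apply_freeze_plan` is a name invented here.

```python
def apply_freeze_plan(layers, freeze_names, unfreeze_names=()):
    """Conservatively toggle trainability on layer-like objects.

    Accepts any objects exposing `.name` and a writable `.trainable`
    (Keras layers qualify, e.g. `apply_freeze_plan(model.layers, ...)`).
    Unknown layer names raise instead of being silently ignored, so a
    typo in a patch plan cannot slip through unnoticed.
    """
    known = {layer.name: layer for layer in layers}
    missing = [n for n in (*freeze_names, *unfreeze_names) if n not in known]
    if missing:
        raise KeyError(f"plan references unknown layers: {missing}")
    for name in freeze_names:
        known[name].trainable = False
    for name in unfreeze_names:
        known[name].trainable = True
    # Return the resulting trainability map for the audit trail.
    return {name: layer.trainable for name, layer in known.items()}
```

Failing loudly on unknown names is the "conservative" part: the patch either applies exactly as planned or not at all.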