mhingston/sloop

Copilot + Mulch self-learning hook pack

This folder is a minimal hook pack you can copy into an existing repository to add a Mulch-backed GitHub Copilot self-learning loop without modifying Mulch itself.

What it does

  • primes Mulch context at sessionStart
  • refreshes published context when the user prompt changes
  • publishes session-scoped context artifacts and resolves them through stable metadata
  • captures prompt and tool breadcrumbs during the session
  • records a tactical session summary into the copilot Mulch domain at sessionEnd
  • promotes strong copilot inbox records into real domains with a dedicated helper

Files

  • .github/hooks/mulch-copilot.json - Copilot hook configuration
  • .github/hooks/mulch-self-learning.mjs - hook runner and manual prime refresher
  • .github/hooks/mulch-eval.mjs - asynchronous evaluator for copilot inbox records
  • .github/hooks/mulch-promote.mjs - promotion helper for copilot inbox records
  • .github/hooks/mulch-close-loop.mjs - background runner that evaluates then promotes in sequence
  • AGENTS.md - tells Copilot to resolve the latest session-scoped Mulch context artifact

Bootstrap requirements

Before copying these files into a target repository, you need to:

  1. Install Mulch in the target repo, for example with npm install -D @os-eco/mulch-cli.
  2. Initialize Mulch with ./node_modules/.bin/ml init.
  3. Add a dedicated tactical inbox domain with ./node_modules/.bin/ml add copilot.
  4. Add any real project domains you want to learn into, such as api, frontend, or testing.
  5. Ensure Node is available because the hook runner is executed with node.
  6. Commit .github/hooks/mulch-copilot.json to the default branch so Copilot loads it.

After bootstrap, copy the five files in .github/hooks/ plus AGENTS.md into the target repository.
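A minimal sketch of a preflight check for the bootstrap steps above. The paths mirror the layout this pack describes; the helper itself (`checkBootstrap`) is illustrative, not part of the pack.

```javascript
// Sketch: verify bootstrap prerequisites before copying the hook pack in.
// Checks the locally installed Mulch CLI (bootstrap step 1) and the
// committed hook configuration (step 6).
import { existsSync } from "node:fs";
import { join } from "node:path";

export function checkBootstrap(repoRoot) {
  const missing = [];
  if (!existsSync(join(repoRoot, "node_modules", ".bin", "ml"))) {
    missing.push("local Mulch CLI (npm install -D @os-eco/mulch-cli)");
  }
  if (!existsSync(join(repoRoot, ".github", "hooks", "mulch-copilot.json"))) {
    missing.push(".github/hooks/mulch-copilot.json");
  }
  return missing; // empty array means the pack is ready to copy in
}
```

Running this against the target repo root before copying the files catches the two most common setup gaps early.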

Notes

  • The hook runner looks for Mulch at ./node_modules/.bin/ml first, then falls back to ml or mulch on PATH.
  • The copilot domain is used as an inbox for hook-generated session summaries.
  • The stable handoff point is .github/hooks/.runtime/prime.meta.json, which points to the latest session-scoped published artifact via artifactPath.
  • The assembled published artifact includes a per-session Mulch export plus ml ready, ml status, and prompt-matched ml search output.
  • The hook keeps raw ml prime --export output separate from the published artifact so same-worktree sessions do not compete for one shared prime.txt path.
  • mulch-promote.mjs can preview or apply promotions for copilot records that mulch-eval scored as promote.
  • mulch-close-loop.mjs is launched automatically in the background from sessionEnd, so evaluation and promotion stay out of the synchronous hook path while still running in order.
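The stable handoff point described above can be sketched as follows. Only the artifactPath field is documented; any other metadata fields, and the helper name, are assumptions.

```javascript
// Sketch: resolve the latest session-scoped context artifact through
// prime.meta.json rather than a hardcoded prime-<session>.txt name.
// Assumes only the documented artifactPath field.
import { readFileSync } from "node:fs";

export function resolveArtifact(metaPath = ".github/hooks/.runtime/prime.meta.json") {
  const meta = JSON.parse(readFileSync(metaPath, "utf8"));
  if (!meta.artifactPath) {
    throw new Error(`no artifactPath in ${metaPath}`);
  }
  // Readers (AGENTS.md-driven Copilot, scripts) always go through the
  // metadata file, so same-worktree sessions never race on one shared path.
  return readFileSync(meta.artifactPath, "utf8");
}
```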

How the loop works

  1. sessionStart runs prompt-aware ml prime --exclude-domain copilot --format plain --export .github/hooks/.runtime/prime-raw-<session>.txt, plus ml ready --limit 5 and ml status, then publishes an assembled context artifact to .github/hooks/.runtime/prime-<session>.txt
  2. sessionStart writes .github/hooks/.runtime/prime.meta.json, which points to the latest published artifact
  3. userPromptSubmitted logs the prompt, refreshes metadata and the session-scoped published artifact, and adds prompt-matched ml search output when available
  4. postToolUse and errorOccurred log tool results and failures
  5. sessionEnd runs ml learn --json and writes a session summary with ml record copilot --stdin
  6. sessionEnd background-spawns mulch-close-loop.mjs, which runs mulch-eval.mjs --record-outcomes and then mulch-promote.mjs --apply in sequence
  7. mulch-promote.mjs can also be run manually to preview or apply promotions for promote-worthy inbox records

Eval loop

By default, the hook pack launches the evaluator in the background at sessionEnd. You can also run it manually or from CI:

```shell
node .github/hooks/mulch-eval.mjs
node .github/hooks/mulch-eval.mjs --json
node .github/hooks/mulch-eval.mjs --record-outcomes
```

The evaluator uses a deterministic rubric:

  • groundedness: changed files and evidence
  • reusability: domain mapping and breadth of touched files
  • specificity: prompt/context richness
  • validation signal: tool execution and existing session outcome data

Recommendations:

  • promote - strong candidate to mine into real Mulch domains
  • review - useful, but still tactical or incomplete
  • discard - low-signal session summary
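The shape of the rubric can be sketched as below. The four components and three recommendations come from the lists above; the 0-1 scale, equal weighting, and thresholds are assumptions, not the actual numbers mulch-eval.mjs uses.

```javascript
// Sketch of a deterministic rubric: four component scores in [0, 1],
// averaged and thresholded into one of the three recommendations.
// Weights and cutoffs here are illustrative assumptions.
export function recommend({ groundedness, reusability, specificity, validation }) {
  const score = (groundedness + reusability + specificity + validation) / 4;
  if (score >= 0.75) return "promote"; // strong candidate for real domains
  if (score >= 0.4) return "review";   // useful but tactical or incomplete
  return "discard";                    // low-signal session summary
}
```

Because the rubric is deterministic, re-running the evaluator over the same records always yields the same recommendations.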

When --record-outcomes is used, the script appends a mulch-eval outcome to each unevaluated record using the stock ml outcome command.

Promotion loop

Use the promotion helper to close the loop on strong session summaries:

```shell
node .github/hooks/mulch-promote.mjs --json
node .github/hooks/mulch-promote.mjs --apply
node .github/hooks/mulch-promote.mjs --record <copilot-record-id> --apply
```

By default, the script filters to copilot records where the latest mulch-eval outcome recommends promote and where at least one suggested domain is available. With --apply, it writes reference records into each pending target domain and appends mulch-promote outcomes back to the source copilot record.
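The default filter described above can be sketched as a pure predicate. The record shape here (outcomes array, suggestedDomains field) is illustrative, not the real Mulch export schema:

```javascript
// Sketch of the default promotion filter: keep copilot records whose
// latest mulch-eval outcome recommends "promote" and that map to at
// least one suggested domain. Field names are assumptions.
export function promotable(records) {
  return records.filter((r) => {
    const evals = (r.outcomes ?? []).filter((o) => o.source === "mulch-eval");
    const latest = evals[evals.length - 1];
    return latest?.recommendation === "promote" && (r.suggestedDomains ?? []).length > 0;
  });
}
```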

The copilot domain remains a tactical inbox for session summaries. It is excluded from priming so operational breadcrumbs do not get fed back into model context.

Closing the loop

Hooks can prepare context, but they cannot directly inject it into Copilot's live prompt.

This pack closes the loop by combining:

  • sessionStart and userPromptSubmitted hooks: refresh .github/hooks/.runtime/prime.meta.json and the latest session-scoped context artifact
  • AGENTS.md: instructs Copilot to resolve that metadata file and read the current artifact
  • sessionEnd hook: records new tactical learnings and launches ordered async evaluation and promotion
  • mulch-promote.mjs: lets humans or automation promote strong copilot inbox records into real domains

That gives you a practical cycle of:

Mulch domains -> session-scoped context artifact -> prime.meta.json -> Copilot reads via AGENTS.md -> work happens -> hooks capture/evaluate/promote -> durable learnings land in real Mulch domains
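The capture/evaluate/promote leg of this cycle stays off the synchronous hook path via a background launch at sessionEnd. A minimal sketch of that handoff, using a standard detached Node spawn (the helper name is hypothetical):

```javascript
// Sketch: hand off to mulch-close-loop.mjs without blocking the
// synchronous sessionEnd hook.
import { spawn } from "node:child_process";

export function launchCloseLoop(script = ".github/hooks/mulch-close-loop.mjs") {
  const child = spawn(process.execPath, [script], {
    detached: true,  // let the child outlive the hook process
    stdio: "ignore", // no pipes keeping the parent alive
  });
  child.unref();     // the hook can return immediately
  return child.pid;
}
```

Because the child runs evaluation and promotion in sequence, ordering is preserved even though the hook itself has already returned.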

About

Self-learning loop template
