diff --git a/CLAUDE.md b/CLAUDE.md index bf469c1..5a29852 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -6,7 +6,7 @@ Persistent codebase knowledge layer for AI agents. Pre-builds architecture, depe - TypeScript, ESM (`"type": "module"`) - tree-sitter (native N-API) + 27 language grammar packages - @modelcontextprotocol/sdk - MCP server (stdio transport) -- commander - CLI (init, serve, update, status, symbols, search, modules, hotspots, hook, upgrade) +- commander - CLI (init, serve, update, inject, status, symbols, search, modules, hotspots, hook, upgrade) - simple-git - git integration + temporal analysis - zod - schema validation for LLM analysis results - yaml - cortex.yaml manifest @@ -45,16 +45,25 @@ Hybrid extraction: - `codecortex symbols [query]` - browse and filter the symbol index - `codecortex search ` - search across all knowledge files - `codecortex modules [name]` - list modules or deep-dive into one +- `codecortex inject` - regenerate inline context in CLAUDE.md and agent config files - `codecortex hotspots` - files ranked by risk (churn + coupling + bugs) - `codecortex hook install|uninstall|status` - manage git hooks for auto-update - `codecortex upgrade` - check for and install latest version -## MCP Tools (13) -Read (10): get_project_overview, get_module_context, get_session_briefing, search_knowledge, get_decision_history, get_dependency_graph, lookup_symbol, get_change_coupling, get_hotspots, get_edit_briefing -Write (3): record_decision, update_patterns, record_observation +## MCP Tools (5) +get_project_overview, get_dependency_graph, lookup_symbol, get_change_coupling, get_edit_briefing -All read tools include `_freshness` metadata (status, lastAnalyzed, filesChangedSince, changedFiles, message). -All read tools return context-safe responses (<10K chars) via truncation utilities in `src/utils/truncate.ts`. 
+## MCP Resources (3) +- `codecortex://project/overview` — constitution (architecture, risk map) +- `codecortex://project/hotspots` — risk-ranked files +- `codecortex://module/{name}` — module documentation (template) + +## MCP Prompts (2) +- `start_session` — constitution + latest session for context +- `before_editing` — risk assessment for files you plan to edit + +All tools include `_freshness` metadata (status, lastAnalyzed, filesChangedSince, changedFiles, message). +All tools return context-safe responses (<10K chars) via truncation utilities in `src/utils/truncate.ts`. ## Pre-Publish Checklist Run ALL of these before `npm publish`. Do not skip any step. @@ -72,7 +81,7 @@ Run ALL of these before `npm publish`. Do not skip any step. - **Grammar smoke test** (`parser.test.ts`): Loads every language in `LANGUAGE_LOADERS` via `parseSource()`. Catches missing packages, broken native builds, wrong require paths. This is what would have caught the tree-sitter-liquid issue. - **Version-check tests**: Update notification, cache lifecycle, PM detection, upgrade commands. - **Hook tests**: Git hook install/uninstall/status integration tests. -- **MCP tests**: All 13 tools (read + write), simulation tests. +- **MCP tests**: All 5 tools, resources, prompts, simulation tests. ### Known limitations - tree-sitter native bindings don't compile on Node 24 yet (upstream issue) @@ -91,7 +100,7 @@ Run ALL of these before `npm publish`. Do not skip any step. 
src/ cli/ - commander CLI (init, serve, update, status) mcp/ - MCP server + tools - core/ - knowledge store (graph, modules, decisions, sessions, patterns, constitution, search, agent-instructions, freshness) + core/ - knowledge store (graph, modules, decisions, sessions, patterns, constitution, search, agent-instructions, context-injection, freshness) extraction/ - tree-sitter native N-API (parser, symbols, imports, calls) git/ - git diff, history, temporal analysis types/ - TypeScript types + Zod schemas diff --git a/README.md b/README.md index 3b02242..b643b00 100644 --- a/README.md +++ b/README.md @@ -22,7 +22,7 @@ Every AI coding session starts with exploration — grepping, reading wrong file ## The Solution -CodeCortex eliminates the cold start. It pre-builds codebase knowledge — architecture, dependencies, risk areas, hidden coupling — so agents skip the exploration phase and go straight to the right files. +CodeCortex eliminates the cold start. It pre-builds codebase knowledge — architecture, dependencies, risk areas, hidden coupling — and injects it directly into your agent's context (CLAUDE.md, .cursorrules, etc.) so agents have project knowledge from the first prompt. **Not a middleware. Not a proxy. Just knowledge your agent loads on day one.** @@ -43,7 +43,7 @@ Three capabilities no other tool provides: 2. **Risk scores** — File X has been bug-fixed 7 times, has 6 hidden dependencies, and co-changes with 3 other files. Risk score: 35. You can't learn this from reading code. -3. **Cross-session memory** — Decisions, patterns, observations persist. The agent doesn't start from zero each session. +3. **Inline context injection** — Project knowledge is injected directly into CLAUDE.md, .cursorrules, and other agent config files with architecture, risk map, and editing directives. Agents use it without any setup. 
**Example from a real codebase:** - `schema.help.ts` and `schema.labels.ts` co-changed in 12/14 commits (86%) with **zero imports between them** @@ -61,8 +61,8 @@ npm install -g codecortex-ai --legacy-peer-deps cd /path/to/your-project codecortex init -# Check knowledge freshness -codecortex status +# Regenerate inline context in CLAUDE.md and agent config files +codecortex inject ``` ### Connect to Claude Code @@ -101,7 +101,7 @@ Add to `.cursor/mcp.json`: ## What Gets Generated -All knowledge lives in `.codecortex/` as flat files in your repo: +All knowledge lives in `.codecortex/` as flat files in your repo, plus inline context is injected into agent config files: ``` .codecortex/ @@ -111,11 +111,16 @@ All knowledge lives in `.codecortex/` as flat files in your repo: graph.json # dependency graph (imports, calls, modules) symbols.json # full symbol index (functions, classes, types...) temporal.json # git coupling, hotspots, bug history + hotspots.md # risk-ranked files (static, always available) AGENT.md # tool usage guide for AI agents modules/*.md # per-module structural analysis decisions/*.md # architectural decision records sessions/*.md # session change logs patterns.md # coding patterns and conventions + +CLAUDE.md # ← inline context injected here +.cursorrules # ← and here (if exists) +.windsurfrules # ← and here (if exists) ``` ## Six Knowledge Layers @@ -129,37 +134,34 @@ All knowledge lives in `.codecortex/` as flat files in your repo: | 5. Patterns | How code is written here | `patterns.md` | | 6. Sessions | What changed between sessions | `sessions/*.md` | -## MCP Tools (13) +## MCP Tools (5) -### Navigation — "Where should I look?" (4 tools) +Five focused tools that provide capabilities agents can't get from reading code: | Tool | Description | |------|-------------| | `get_project_overview` | Architecture, modules, risk map. Call this first. | -| `search_knowledge` | Find where a function/class/type is DEFINED by name. Ranked results. 
| +| `get_dependency_graph` | Import/export graph filtered by module or file. | | `lookup_symbol` | Precise symbol lookup with kind and file path filters. | -| `get_module_context` | Module files, deps, temporal signals. Zoom into a module. | +| `get_change_coupling` | Files that must change together. Hidden dependencies flagged. | +| `get_edit_briefing` | Pre-edit risk: co-change warnings, hidden deps, bug history. **Always call before editing.** | -### Risk — "What could go wrong?" (4 tools) +### MCP Resources (3) -| Tool | Description | -|------|-------------| -| `get_edit_briefing` | Pre-edit risk: co-change warnings, hidden deps, bug history. **Always call before editing.** | -| `get_hotspots` | Files ranked by risk (churn x coupling x bugs). | -| `get_change_coupling` | Files that must change together. Hidden dependencies flagged. | -| `get_dependency_graph` | Import/export graph filtered by module or file. | +Static knowledge available without tool calls: -### Memory — "Remember this" (5 tools) +| Resource | Description | +|----------|-------------| +| `codecortex://project/overview` | Full project constitution | +| `codecortex://project/hotspots` | Risk-ranked file table | +| `codecortex://module/{name}` | Per-module documentation | -| Tool | Description | -|------|-------------| -| `get_session_briefing` | What changed since the last session. | -| `get_decision_history` | Why things were built this way. | -| `record_decision` | Save an architectural decision. | -| `update_patterns` | Document coding conventions. | -| `record_observation` | Record anything you learned about the codebase. | +### MCP Prompts (2) -All read tools include `_freshness` metadata and return context-safe responses (<10K chars) via size-adaptive caps. 
+| Prompt | Description | +|--------|-------------| +| `start_session` | Returns constitution + latest session context | +| `before_editing` | Takes file paths, returns risk/coupling/bug briefing | ## CLI Commands @@ -168,6 +170,7 @@ All read tools include `_freshness` metadata and return context-safe responses ( | `codecortex init` | Discover project + extract symbols + analyze git history | | `codecortex serve` | Start MCP server (stdio transport) | | `codecortex update` | Re-extract changed files, update affected modules | +| `codecortex inject` | Regenerate inline context in CLAUDE.md and agent config files | | `codecortex status` | Show knowledge freshness, stale modules, symbol counts | | `codecortex symbols [query]` | Browse and filter the symbol index | | `codecortex search ` | Search across symbols, file paths, and docs | @@ -180,6 +183,8 @@ All read tools include `_freshness` metadata and return context-safe responses ( **Hybrid extraction:** tree-sitter native N-API for structure (symbols, imports, calls across 27 languages) + host LLM for semantics (what modules do, why they're built that way). Zero extra API keys. +**Inline context injection:** After analysis, CodeCortex injects a rich knowledge section directly into CLAUDE.md and other agent config files. This includes architecture overview, risk map with coupled file names, and editing directives — so agents have project context from the first prompt without needing MCP. + **Git hooks** keep knowledge fresh — `codecortex update` runs automatically on every commit, re-extracting changed files and updating temporal analysis. **Size-adaptive responses** — CodeCortex classifies your project (micro → extra-large) and adjusts response caps accordingly. A 23-file project gets full detail. A 6,400-file project gets intelligent summaries. Every MCP tool response stays under 10K chars. 
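Because `codecortex update` re-runs injection on every commit, the injected section has to be idempotent: a re-run must replace the managed block in place rather than duplicate it or clobber hand-written notes. A minimal sketch of marker-based replace-or-append logic (the marker strings and the `injectSection` name are illustrative assumptions, not CodeCortex's actual identifiers):

```typescript
// Replace the managed section between markers if both are present,
// otherwise append it as a new trailing section.
const START = '<!-- codecortex:start -->'
const END = '<!-- codecortex:end -->'

function injectSection(existing: string, section: string): string {
  const block = `${START}\n${section.trim()}\n${END}`
  const s = existing.indexOf(START)
  const e = existing.indexOf(END)
  if (s !== -1 && e !== -1 && e > s) {
    // Re-run: overwrite only the managed region, keep user content around it.
    return existing.slice(0, s) + block + existing.slice(e + END.length)
  }
  // First run: append after the user's own content.
  return existing.trimEnd() + '\n\n' + block + '\n'
}
```

Running it twice leaves exactly one managed section, with everything outside the markers untouched.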
diff --git a/package.json b/package.json index e3c54fa..cb9dc07 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "codecortex-ai", - "version": "0.5.0", + "version": "0.6.0", "description": "Persistent codebase knowledge layer for AI agents — architecture, dependencies, coupling, and risk served via MCP", "type": "module", "bin": { diff --git a/src/cli/commands/init.ts b/src/cli/commands/init.ts index 3539578..a89f3f7 100644 --- a/src/cli/commands/init.ts +++ b/src/cli/commands/init.ts @@ -14,6 +14,7 @@ import { writeFile, writeJsonStream, ensureDir, cortexPath } from '../../utils/f import { readFile } from 'node:fs/promises' import { generateStructuralModuleDocs } from '../../core/module-gen.js' import { generateAgentInstructions } from '../../core/agent-instructions.js' +import { generateHotspotsMarkdown } from '../../git/temporal.js' import { createDecision, writeDecision, listDecisions } from '../../core/decisions.js' import type { SymbolRecord, ImportEdge, CallEdge, SymbolIndex, ProjectInfo } from '../../types/index.js' @@ -40,6 +41,7 @@ export async function initCommand(opts: { root: string; days: string }): Promise const allImports: ImportEdge[] = [] const allCalls: CallEdge[] = [] let extractionErrors = 0 + const langStats = new Map() let parsed = 0 const parseable = project.files.filter(f => languageFromPath(f.path)).length @@ -49,6 +51,9 @@ export async function initCommand(opts: { root: string; days: string }): Promise const lang = languageFromPath(file.path) if (!lang) continue + const stats = langStats.get(lang) || { files: 0, symbols: 0 } + stats.files++ + try { const tree = await parseFile(file.absolutePath, lang) const source = await readFile(file.absolutePath, 'utf-8') @@ -57,12 +62,14 @@ export async function initCommand(opts: { root: string; days: string }): Promise const imports = extractImports(tree, file.path, lang) const calls = extractCalls(tree, file.path, lang) + stats.symbols += symbols.length 
allSymbols.push(...symbols) allImports.push(...imports) allCalls.push(...calls) } catch { extractionErrors++ } + langStats.set(lang, stats) parsed++ if (showProgress && parsed % 5000 === 0) { process.stdout.write(`\r Progress: ${parsed}/${parseable} files (${allSymbols.length} symbols)`) @@ -74,6 +81,13 @@ export async function initCommand(opts: { root: string; days: string }): Promise if (extractionErrors > 0) { console.log(` (${extractionErrors} files skipped due to parse errors)`) } + + // Warn about languages with 0 symbols extracted + for (const [lang, stats] of langStats) { + if (stats.files > 0 && stats.symbols === 0) { + console.log(` \u26a0 Warning: ${lang} \u2014 ${stats.files} files parsed, 0 symbols extracted. Grammar may not support this language.`) + } + } console.log('') // Step 3: Build dependency graph @@ -140,9 +154,10 @@ export async function initCommand(opts: { root: string; days: string }): Promise // Write graph.json await writeGraph(root, graph) - // Write temporal.json + // Write temporal.json + hotspots.md if (temporalData) { await writeFile(cortexPath(root, 'temporal.json'), JSON.stringify(temporalData, null, 2)) + await writeFile(cortexPath(root, 'hotspots.md'), generateHotspotsMarkdown(temporalData)) } // Write overview.md — compact summary only (no raw file listing) @@ -162,7 +177,7 @@ export async function initCommand(opts: { root: string; days: string }): Promise await writeManifest(root, manifest) // Write patterns.md (empty template) - await writeFile(cortexPath(root, 'patterns.md'), '# Coding Patterns\n\nNo patterns recorded yet. Use `update_patterns` to add patterns.\n') + await writeFile(cortexPath(root, 'patterns.md'), '# Coding Patterns\n\nNo patterns recorded yet. 
Edit this file directly to add patterns.\n') // Generate structural module docs const moduleDocsGenerated = await generateStructuralModuleDocs(root, { @@ -185,8 +200,8 @@ export async function initCommand(opts: { root: string; days: string }): Promise console.log(' Written: constitution.md') console.log('') - // Step 7: Agent onboarding - console.log('Step 7/7: Generating agent instructions...') + // Step 7: Agent onboarding + inline context injection + console.log('Step 7/7: Generating inline context...') const updatedFiles = await generateAgentInstructions(root) // Seed a starter decision (skip if decisions already exist) diff --git a/src/cli/commands/inject.ts b/src/cli/commands/inject.ts new file mode 100644 index 0000000..547429d --- /dev/null +++ b/src/cli/commands/inject.ts @@ -0,0 +1,27 @@ +import { resolve } from 'node:path' +import { existsSync } from 'node:fs' +import { cortexPath } from '../../utils/files.js' +import { injectAllAgentFiles } from '../../core/context-injection.js' + +export async function injectCommand(opts: { root: string }): Promise<void> { + const root = resolve(opts.root) + + if (!existsSync(cortexPath(root, 'cortex.yaml'))) { + console.error('Error: No CodeCortex knowledge found. Run `codecortex init` first.') + process.exitCode = 1 + return + } + + console.log('Regenerating inline context...') + const updated = await injectAllAgentFiles(root) + + if (updated.length === 0) { + console.log(' All agent config files are already up to date.') + } else { + for (const file of updated) { + console.log(` Updated: ${file}`) + } + } + console.log('') + console.log('Done.
Agent config files now contain inline project knowledge.') } diff --git a/src/cli/commands/update.ts b/src/cli/commands/update.ts index 03fef8c..f0e66ea 100644 --- a/src/cli/commands/update.ts +++ b/src/cli/commands/update.ts @@ -16,6 +16,8 @@ import { generateConstitution } from '../../core/constitution.js' import { createSession, writeSession, getLatestSession } from '../../core/sessions.js' import { readFile as fsRead } from 'node:fs/promises' import { generateStructuralModuleDocs } from '../../core/module-gen.js' +import { generateHotspotsMarkdown } from '../../git/temporal.js' +import { injectAllAgentFiles } from '../../core/context-injection.js' import type { SymbolRecord, ImportEdge, CallEdge, SymbolIndex } from '../../types/index.js' export async function updateCommand(opts: { root: string; days: string }): Promise<void> { @@ -100,6 +102,7 @@ export async function updateCommand(opts: { root: string; days: string }): Promi await writeGraph(root, graph) if (temporalData) { await writeFile(cortexPath(root, 'temporal.json'), JSON.stringify(temporalData, null, 2)) + await writeFile(cortexPath(root, 'hotspots.md'), generateHotspotsMarkdown(temporalData)) } // Generate structural module docs (skip existing) @@ -125,6 +128,9 @@ temporal: temporalData, }) + // Refresh inline context in agent config files + await injectAllAgentFiles(root) + // Create session log const diff = await getUncommittedDiff(root).catch(() => ({ filesChanged: [], summary: 'no changes' })) const previousSession = await getLatestSession(root) diff --git a/src/cli/grouped-help.ts b/src/cli/grouped-help.ts index bc1b215..7f11c1c 100644 --- a/src/cli/grouped-help.ts +++ b/src/cli/grouped-help.ts @@ -1,7 +1,7 @@ import type { Command, Help } from 'commander' const COMMAND_GROUPS: Array<{ title: string; commands: string[] }> = [ - { title: 'Core', commands: ['init', 'serve', 'update', 'status'] }, + { title: 'Core',
commands: ['init', 'serve', 'update', 'inject', 'status'] }, { title: 'Query', commands: ['symbols', 'search', 'modules', 'hotspots'] }, { title: 'Utility', commands: ['hook', 'upgrade'] }, ] diff --git a/src/cli/index.ts b/src/cli/index.ts index 207f404..ca8fb4a 100644 --- a/src/cli/index.ts +++ b/src/cli/index.ts @@ -5,6 +5,7 @@ import { Command } from 'commander' import { initCommand } from './commands/init.js' import { serveCommand } from './commands/serve.js' import { updateCommand } from './commands/update.js' +import { injectCommand } from './commands/inject.js' import { statusCommand } from './commands/status.js' import { symbolsCommand } from './commands/symbols.js' import { searchCommand } from './commands/search.js' @@ -51,6 +52,12 @@ program .option('-d, --days <n>', 'Days of git history to re-analyze', '90') .action(updateCommand) +program + .command('inject') + .description('Regenerate inline context in CLAUDE.md and agent config files') + .option('-r, --root <path>', 'Project root directory', process.cwd()) + .action(injectCommand) + program .command('status') .description('Show knowledge freshness and symbol counts') diff --git a/src/core/agent-instructions.ts b/src/core/agent-instructions.ts index 187d4fe..ffb1b88 100644 --- a/src/core/agent-instructions.ts +++ b/src/core/agent-instructions.ts @@ -1,9 +1,5 @@ -import { existsSync } from 'node:fs' -import { join } from 'node:path' -import { readFile, writeFile as writeFileFs } from 'node:fs/promises' import { writeFile, ensureDir, cortexPath } from '../utils/files.js' - -const CODECORTEX_SECTION_MARKER = '## CodeCortex' +import { injectAllAgentFiles } from './context-injection.js' export const AGENT_INSTRUCTIONS = `# CodeCortex — Codebase Navigation & Risk Tools @@ -13,95 +9,35 @@ This project uses CodeCortex. It gives you a pre-built map of the codebase — a ## Navigation (start here) - \`get_project_overview\` — architecture, modules, risk map. Call this first.
-- \`search_knowledge\` — find where a function/class/type is DEFINED by name. Ranked results: exported definitions first. NOT for content search — use grep for that. - \`lookup_symbol\` — precise symbol lookup with kind + file path filters. Use when you know exactly what you're looking for (e.g., "all interfaces in gateway/"). -- \`get_module_context\` — what files, symbols, and deps are in a specific module. - \`get_dependency_graph\` — import/export graph filtered by file or module. -- \`get_session_briefing\` — what changed since the last session. ## When to use grep instead - "How does X work?" → grep (searches file contents) - "Find all usage of X" → grep (finds every occurrence) -- "Where is X defined?" → \`search_knowledge\` or \`lookup_symbol\` (finds definitions, ranked) +- "Where is X defined?" → \`lookup_symbol\` (finds definitions with filters) ## Before Editing (ALWAYS call these) - \`get_edit_briefing\` — co-change risks, hidden dependencies, bug history for files you plan to edit. Prevents bugs from files that secretly change together. - \`get_change_coupling\` — files that historically change together. Missing one causes bugs. -- \`get_hotspots\` — files ranked by risk (churn + coupling + bugs). + +## Static Knowledge (read directly, no tool needed) +- \`.codecortex/modules/*.md\` — module docs (purpose, deps, API) +- \`.codecortex/hotspots.md\` — files ranked by risk (churn + coupling + bugs) +- \`.codecortex/patterns.md\` — coding conventions +- \`.codecortex/decisions/*.md\` — architectural decision records ## Response Detail Control Most tools accept \`detail: "brief"\` (default) or \`"full"\`. Use brief for exploration, full only when you need exhaustive data. - -## Building Knowledge (call as you work) -- \`record_decision\` — when you make a non-obvious technical choice, record WHY. -- \`update_patterns\` — when you discover a coding convention, document it. 
-- \`record_observation\` — record anything you learned (gotchas, undocumented deps, env requirements). -- \`get_decision_history\` — check what decisions were already made and why. ` -const CLAUDEMD_POINTER = ` -${CODECORTEX_SECTION_MARKER} -This project uses CodeCortex for codebase knowledge. See \`.codecortex/AGENT.md\` for available MCP tools and when to use them. -` - -// All known agent instruction files across AI coding tools -const AGENT_CONFIG_FILES = [ - 'CLAUDE.md', // Claude Code, Claude Desktop - '.cursorrules', // Cursor - '.windsurfrules', // Windsurf - 'AGENTS.md', // Generic / multi-agent convention - '.github/copilot-instructions.md', // GitHub Copilot -] - export async function generateAgentInstructions(projectRoot: string): Promise<string[]> { - // 1. Write .codecortex/AGENT.md (canonical source of truth) + // 1. Write .codecortex/AGENT.md (compact tool reference) await ensureDir(cortexPath(projectRoot)) await writeFile(cortexPath(projectRoot, 'AGENT.md'), AGENT_INSTRUCTIONS) - // 2.
Append pointer to every agent config file that exists - // If NONE exist, create CLAUDE.md as default - const updated: string[] = ['AGENT.md'] - let foundAny = false - - for (const file of AGENT_CONFIG_FILES) { - const filePath = join(projectRoot, file) - if (existsSync(filePath)) { - foundAny = true - const wasUpdated = await appendPointerToFile(filePath) - if (wasUpdated) updated.push(file) - } - } - - // If no agent config files exist at all, create CLAUDE.md as default - if (!foundAny) { - await appendPointerToFile(join(projectRoot, 'CLAUDE.md')) - updated.push('CLAUDE.md') - } - - return updated -} - -async function appendPointerToFile(filePath: string): Promise<boolean> { - // Ensure parent directory exists (for .github/copilot-instructions.md) - const dir = join(filePath, '..') - if (!existsSync(dir)) { - const { mkdir } = await import('node:fs/promises') - await mkdir(dir, { recursive: true }) - } - - if (existsSync(filePath)) { - const content = await readFileFs(filePath, 'utf-8') - // Don't duplicate — check if CodeCortex section already exists - if (content.includes(CODECORTEX_SECTION_MARKER)) return false - await writeFileFs(filePath, content + CLAUDEMD_POINTER, 'utf-8') - return true - } else { - // Create new file with just the pointer - await writeFileFs(filePath, CLAUDEMD_POINTER.trimStart(), 'utf-8') - return true - } -} + // 2.
Inject rich inline context into all agent config files + const injected = await injectAllAgentFiles(projectRoot) -async function readFileFs(path: string, encoding: BufferEncoding): Promise<string> { - return readFile(path, encoding) + return ['AGENT.md', ...injected] } diff --git a/src/core/constitution.ts b/src/core/constitution.ts index 0f774ee..d6d3dd7 100644 --- a/src/core/constitution.ts +++ b/src/core/constitution.ts @@ -148,7 +148,7 @@ export async function generateConstitution(projectRoot: string, data?: Constitut } lines.push( ``, - `Use \`get_module_context\` to deep-dive into any module.`, + `Read \`.codecortex/modules/*.md\` directly for module deep-dives.`, `Use \`get_change_coupling\` before editing a file to check what else must change.`, `Use \`lookup_symbol\` to find any function, type, or class.`, ) diff --git a/src/core/context-injection.ts b/src/core/context-injection.ts new file mode 100644 index 0000000..7219de6 --- /dev/null +++ b/src/core/context-injection.ts @@ -0,0 +1,255 @@ +import { existsSync } from 'node:fs' +import { join } from 'node:path' +import { readFile as fsReadFile, writeFile as fsWriteFile, mkdir } from 'node:fs/promises' +import { readFile, cortexPath } from '../utils/files.js' +import { readManifest } from './manifest.js' +import type { TemporalData, DependencyGraph } from '../types/index.js' + +const MARKER_START = '<!-- codecortex:start -->' +const MARKER_END = '<!-- codecortex:end -->' +const OLD_POINTER_PATTERN = /## CodeCortex\nThis project uses CodeCortex[^\n]*\.
See[^\n]*AGENT\.md[^\n]*/ + +// Max items shown in inline context +const INLINE_CAPS = { + modules: 10, + hotspots: 5, + couplings: 3, + externalDeps: 5, + bugFiles: 3, +} + +// All known agent instruction files across AI coding tools +export const AGENT_CONFIG_FILES = [ + 'CLAUDE.md', // Claude Code, Claude Desktop + '.cursorrules', // Cursor + '.windsurfrules', // Windsurf + 'AGENTS.md', // Generic / multi-agent convention + '.github/copilot-instructions.md', // GitHub Copilot +] + +/** + * Generate inline context from .codecortex/ data. + * Reads pre-computed knowledge files and synthesizes a ~60-80 line Markdown section. + */ +export async function generateInlineContext(projectRoot: string): Promise<string> { + const manifest = await readManifest(projectRoot) + + // Read temporal data + let temporal: TemporalData | null = null + const temporalContent = await readFile(cortexPath(projectRoot, 'temporal.json')) + if (temporalContent) { + try { temporal = JSON.parse(temporalContent) as TemporalData } catch { /* skip */ } + } + + // Read graph for modules + entry points + let graph: DependencyGraph | null = null + const graphContent = await readFile(cortexPath(projectRoot, 'graph.json')) + if (graphContent) { + try { graph = JSON.parse(graphContent) as DependencyGraph } catch { /* skip */ } + } + + const lines: string[] = [ + MARKER_START, + '## CodeCortex — Project Knowledge (auto-updated)', + '', + ] + + // --- Architecture section (skip for trivially small repos with 0 modules) --- + const hasModules = graph && graph.modules.length > 0 + if (manifest || hasModules) { + lines.push('### Architecture') + + if (manifest) { + lines.push(`**${manifest.project}** — ${manifest.languages.join(', ')} — ${manifest.totalFiles} files, ${manifest.totalSymbols} symbols`) + } + + if (graph) { + // Modules (sorted by size, capped) + if (graph.modules.length > 0) { + const sorted = [...graph.modules].sort((a, b) => b.lines - a.lines) + const shown = sorted.slice(0, INLINE_CAPS.modules) +
const modList = shown.map(m => `${m.name} (${m.lines}loc)`).join(', ') + const suffix = graph.modules.length > INLINE_CAPS.modules ? `, +${graph.modules.length - INLINE_CAPS.modules} more` : '' + lines.push(`- **Modules (${graph.modules.length}):** ${modList}${suffix}`) + } + + // Entry points + if (graph.entryPoints.length > 0) { + lines.push(`- **Entry points:** ${graph.entryPoints.map(e => `\`${e}\``).join(', ')}`) + } + + // External deps + const extDeps = Object.keys(graph.externalDeps) + if (extDeps.length > 0) { + const shown = extDeps.slice(0, INLINE_CAPS.externalDeps) + const suffix = extDeps.length > INLINE_CAPS.externalDeps ? `, +${extDeps.length - INLINE_CAPS.externalDeps} more` : '' + lines.push(`- **Key deps:** ${shown.join(', ')}${suffix}`) + } + } + + lines.push('') + } + + // --- Risk Map section --- + if (temporal && (temporal.hotspots.length > 0 || temporal.coupling.length > 0)) { + lines.push('### Risk Map') + + // Top hotspots with coupling context (show WHAT is coupled, not just counts) + const topHotspots = temporal.hotspots.slice(0, INLINE_CAPS.hotspots) + if (topHotspots.length > 0) { + lines.push('**High-risk files:**') + for (const h of topHotspots) { + const fileCouplings = temporal.coupling.filter( + c => c.fileA === h.file || c.fileB === h.file + ) + const bugs = temporal.bugHistory.find(b => b.file === h.file) + + const parts = [`${h.changes} changes`] + if (bugs) parts.push(`${bugs.fixCommits} bug-fixes`) + parts.push(h.stability) + + // Show top 2 coupled files by name instead of just a count + if (fileCouplings.length > 0) { + const topCoupled = fileCouplings + .sort((a, b) => b.strength - a.strength) + .slice(0, 2) + .map(c => { + const other = c.fileA === h.file ? c.fileB : c.fileA + const shortName = other.split('/').pop() ?? other + return `${shortName}${c.hasImport ? 
'' : ' ⚠'}` + }) + parts.push(`coupled to: ${topCoupled.join(', ')}`) + } + + lines.push(`- \`${h.file}\` — ${parts.join(', ')}`) + } + } + + // Hidden couplings (co-change but no import) + const hidden = temporal.coupling.filter(c => !c.hasImport && c.strength >= 0.5) + if (hidden.length > 0) { + lines.push('') + lines.push('**Hidden couplings (co-change, no import):**') + for (const c of hidden.slice(0, INLINE_CAPS.couplings)) { + lines.push(`- \`${c.fileA}\` ↔ \`${c.fileB}\` (${Math.round(c.strength * 100)}% co-change)`) + } + } + + // Bug-prone files (only if not already shown in hotspots) + const hotspotFiles = new Set(topHotspots.map(h => h.file)) + const buggy = temporal.bugHistory.filter(b => b.fixCommits >= 2 && !hotspotFiles.has(b.file)) + if (buggy.length > 0) { + lines.push('') + lines.push('**Bug-prone files:**') + for (const b of buggy.slice(0, INLINE_CAPS.bugFiles)) { + lines.push(`- \`${b.file}\` — ${b.fixCommits} bug-fix commits`) + } + } + + lines.push('') + } + + // --- Before Editing directive --- + lines.push('### Before Editing') + lines.push('Check `.codecortex/hotspots.md` for risk-ranked files before editing.') + lines.push('If CodeCortex MCP tools are available, call `get_edit_briefing` for coupling + risk details.') + lines.push('If not, read `.codecortex/modules/.md` for the relevant module\'s dependencies and bug history.') + lines.push('') + + // --- Static Knowledge (primary — always available) --- + lines.push('### Project Knowledge') + lines.push('Read these files directly (always available, no tool call needed):') + lines.push('- `.codecortex/hotspots.md` — risk-ranked files with coupling + bug data') + lines.push('- `.codecortex/modules/*.md` — module docs, dependencies, temporal signals') + lines.push('- `.codecortex/constitution.md` — full architecture overview') + lines.push('- `.codecortex/patterns.md` — coding conventions') + lines.push('- `.codecortex/decisions/*.md` — architectural decisions') + lines.push('') + + // --- 
MCP Tools (secondary — only if server is connected) --- + lines.push('### MCP Tools (if available)') + lines.push('If a CodeCortex MCP server is connected, these tools provide live analysis:') + lines.push('- `get_edit_briefing` — risk + coupling + bugs for files you plan to edit.') + lines.push('- `get_change_coupling` — files that co-change (hidden dependencies).') + lines.push('- `get_project_overview` — architecture + dependency graph summary.') + lines.push('- `get_dependency_graph` — scoped import/call graph for file or module.') + lines.push('- `lookup_symbol` — precise symbol search (name, kind, file filters).') + lines.push(MARKER_END) + + return lines.join('\n') + '\n' +} + +/** + * Inject inline context into a single file. + * Handles three cases: + * 1. New markers present → replace between markers + * 2. Old 3-line pointer present → remove old, insert new with markers + * 3. Neither → append at end + */ +export async function injectIntoFile(filePath: string, content: string): Promise<boolean> { + // Ensure parent directory exists + const dir = join(filePath, '..') + if (!existsSync(dir)) { + await mkdir(dir, { recursive: true }) + } + + if (!existsSync(filePath)) { + // Create new file with just the inline context + await fsWriteFile(filePath, content, 'utf-8') + return true + } + + const existing = await fsReadFile(filePath, 'utf-8') + + // Case 1: Markers already present → replace between them + const startIdx = existing.indexOf(MARKER_START) + const endIdx = existing.indexOf(MARKER_END) + if (startIdx !== -1 && endIdx !== -1) { + const before = existing.slice(0, startIdx) + const after = existing.slice(endIdx + MARKER_END.length) + const newContent = before + content.trimEnd() + after + if (newContent === existing) return false // No change + await fsWriteFile(filePath, newContent, 'utf-8') + return true + } + + // Case 2: Old pointer present → remove it, append new section + if (OLD_POINTER_PATTERN.test(existing)) { + const cleaned =
existing.replace(OLD_POINTER_PATTERN, '').trimEnd() + await fsWriteFile(filePath, cleaned + '\n\n' + content, 'utf-8') + return true + } + + // Case 3: Neither → append + if (existing.includes(MARKER_START)) return false // Partial marker, don't corrupt + await fsWriteFile(filePath, existing.trimEnd() + '\n\n' + content, 'utf-8') + return true +} + +/** + * Inject inline context into all detected agent config files. + * If no config files exist, creates CLAUDE.md. + */ +export async function injectAllAgentFiles(projectRoot: string): Promise { + const content = await generateInlineContext(projectRoot) + const updated: string[] = [] + let foundAny = false + + for (const file of AGENT_CONFIG_FILES) { + const filePath = join(projectRoot, file) + if (existsSync(filePath)) { + foundAny = true + const wasUpdated = await injectIntoFile(filePath, content) + if (wasUpdated) updated.push(file) + } + } + + // If no agent config files exist, create CLAUDE.md + if (!foundAny) { + const filePath = join(projectRoot, 'CLAUDE.md') + await injectIntoFile(filePath, content) + updated.push('CLAUDE.md') + } + + return updated +} diff --git a/src/git/temporal.ts b/src/git/temporal.ts index b21dd82..6dc20f8 100644 --- a/src/git/temporal.ts +++ b/src/git/temporal.ts @@ -6,7 +6,7 @@ import { getCommitHistory } from './history.js' const TEMPORAL_NOISE_FILES = new Set([ 'CHANGELOG.md', 'CHANGES.md', 'HISTORY.md', 'NEWS.md', 'package.json', 'package-lock.json', 'yarn.lock', 'pnpm-lock.yaml', - 'Cargo.lock', 'go.sum', 'poetry.lock', 'Pipfile.lock', + 'Cargo.lock', 'go.sum', 'go.mod', 'poetry.lock', 'Pipfile.lock', ]) function isTemporalNoise(file: string): boolean { @@ -73,6 +73,26 @@ export function getHotspots(commits: CommitInfo[], days: number): Hotspot[] { return results.sort((a, b) => b.changes - a.changes) } +// Pairs that always co-change for trivial toolchain reasons, not real coupling +const NOISE_PAIRS: Array<[RegExp, RegExp]> = [ + [/go\.mod$/, /go\.sum$/], // Go module + checksum + 
[/Cargo\.toml$/, /Cargo\.lock$/], // Rust manifest + lock + [/\.golden$/, /\.golden$/], // Golden test file clusters +] + +function isCouplingNoise(fileA: string, fileB: string): boolean { + // Same basename in same directory (e.g., file.ts ↔ file.test.ts is NOT noise) + // But lock file pairs ARE noise + for (const [patA, patB] of NOISE_PAIRS) { + if ((patA.test(fileA) && patB.test(fileB)) || (patA.test(fileB) && patB.test(fileA))) { + return true + } + } + // Two golden files always co-change — filter entire clusters + if (fileA.includes('.golden') && fileB.includes('.golden')) return true + return false +} + export function getChangeCoupling(commits: CommitInfo[]): ChangeCoupling[] { const pairCounts = new Map() const fileCounts = new Map() @@ -107,6 +127,9 @@ export function getChangeCoupling(commits: CommitInfo[]): ChangeCoupling[] { const parts = key.split('|') const fileA = parts[0] ?? '' const fileB = parts[1] ?? '' + + // Filter obvious coupling noise (lock file pairs, generated files) + if (isCouplingNoise(fileA, fileB)) continue const maxChanges = Math.max(fileCounts.get(fileA) || 0, fileCounts.get(fileB) || 0) const strength = maxChanges > 0 ? cochanges / maxChanges : 0 @@ -157,6 +180,64 @@ export function getBugArchaeology(commits: CommitInfo[]): BugRecord[] { return results.sort((a, b) => b.fixCommits - a.fixCommits) } +/** + * Generate a pre-computed hotspots.md Markdown file from temporal data. + * Used as a static file replacement for the get_hotspots MCP tool. + */ +export function generateHotspotsMarkdown(temporal: TemporalData): string { + const lines: string[] = [ + '# Risk-Ranked Files', + '', + `> Auto-generated by CodeCortex. 
${temporal.totalCommits} commits analyzed over ${temporal.periodDays} days.`, + '', + ] + + if (temporal.hotspots.length === 0) { + lines.push('No hotspots detected.') + return lines.join('\n') + '\n' + } + + // Calculate risk scores (same formula as get_hotspots MCP tool) + const riskMap = new Map() + + for (const h of temporal.hotspots) { + riskMap.set(h.file, { churn: h.changes, couplings: 0, bugs: 0, risk: h.changes, stability: h.stability }) + } + + for (const c of temporal.coupling) { + for (const f of [c.fileA, c.fileB]) { + const entry = riskMap.get(f) || { churn: 0, couplings: 0, bugs: 0, risk: 0, stability: 'stable' } + entry.couplings++ + entry.risk += c.strength * 2 + riskMap.set(f, entry) + } + } + + for (const b of temporal.bugHistory) { + const entry = riskMap.get(b.file) || { churn: 0, couplings: 0, bugs: 0, risk: 0, stability: 'stable' } + entry.bugs = b.fixCommits + entry.risk += b.fixCommits * 3 + riskMap.set(b.file, entry) + } + + const ranked = [...riskMap.entries()] + .sort((a, b) => b[1].risk - a[1].risk) + .slice(0, 30) + + lines.push('| File | Changes | Couplings | Bugs | Risk | Stability |') + lines.push('|------|---------|-----------|------|------|-----------|') + + for (const [file, data] of ranked) { + const risk = Math.round(data.risk * 100) / 100 + lines.push(`| \`${file}\` | ${data.churn} | ${data.couplings} | ${data.bugs} | ${risk} | ${data.stability} |`) + } + + lines.push('') + lines.push(`Generated: ${new Date().toISOString()}`) + + return lines.join('\n') + '\n' +} + export function getStabilitySignals(commits: CommitInfo[]): Map { const now = new Date() const fileData = new Map() diff --git a/src/mcp/prompts.ts b/src/mcp/prompts.ts new file mode 100644 index 0000000..e7e23a6 --- /dev/null +++ b/src/mcp/prompts.ts @@ -0,0 +1,117 @@ +import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js' +import { z } from 'zod' +import { readFile, cortexPath } from '../utils/files.js' +import { getLatestSession, readSession } 
from '../core/sessions.js' + +export function registerPrompts(server: McpServer, projectRoot: string): void { + // Prompt 1: Start session — constitution + latest session + server.registerPrompt( + 'start_session', + { + description: 'Get project context and latest session summary to start working. Returns constitution and last session log.', + }, + async () => { + const constitution = await readFile(cortexPath(projectRoot, 'constitution.md')) + const latestId = await getLatestSession(projectRoot) + let sessionContent: string | null = null + if (latestId) { + sessionContent = await readSession(projectRoot, latestId) + } + + const messages: Array<{ role: 'user'; content: { type: 'text'; text: string } }> = [] + + messages.push({ + role: 'user', + content: { + type: 'text', + text: constitution || 'No project constitution found. Run `codecortex init` first.', + }, + }) + + if (sessionContent) { + messages.push({ + role: 'user', + content: { + type: 'text', + text: `## Last Session\n\n${sessionContent}`, + }, + }) + } + + return { + description: 'Project context and latest session', + messages, + } + } + ) + + // Prompt 2: Before editing — file-specific risk briefing + server.registerPrompt( + 'before_editing', + { + description: 'Get risk assessment and coupling warnings for files you plan to edit. Pass comma-separated file paths.', + argsSchema: { + files: z.string().describe('Comma-separated file paths to check (e.g., "src/main.ts,src/utils.ts")'), + }, + }, + async ({ files }) => { + const filePaths = files.split(',').map(f => f.trim()).filter(Boolean) + + const temporalContent = await readFile(cortexPath(projectRoot, 'temporal.json')) + if (!temporalContent) { + return { + description: 'Edit briefing', + messages: [{ + role: 'user' as const, + content: { type: 'text' as const, text: 'No temporal data. Run `codecortex init` first.' 
}, + }], + } + } + + const temporal = JSON.parse(temporalContent) + const lines: string[] = [`## Edit Briefing for ${filePaths.length} file(s)\n`] + + for (const file of filePaths) { + lines.push(`### ${file}\n`) + + // Hotspot info + const hotspot = temporal.hotspots?.find((h: { file: string }) => h.file.includes(file)) + if (hotspot) { + lines.push(`- **Changes:** ${hotspot.changes} (${hotspot.stability})`) + lines.push(`- **Last changed:** ${hotspot.lastChanged}`) + } + + // Coupling warnings + const couplings = (temporal.coupling || []).filter((c: { fileA: string; fileB: string }) => + c.fileA.includes(file) || c.fileB.includes(file) + ) + if (couplings.length > 0) { + lines.push(`- **Coupled files:**`) + for (const c of couplings) { + const other = c.fileA.includes(file) ? c.fileB : c.fileA + lines.push(` - \`${other}\` — ${c.cochanges} co-changes (${Math.round(c.strength * 100)}%)${c.hasImport ? '' : ' ⚠ HIDDEN DEP'}`) + } + } + + // Bug history + const bugs = temporal.bugHistory?.find((b: { file: string }) => b.file.includes(file)) + if (bugs) { + lines.push(`- **Bug history:** ${bugs.fixCommits} fix commits`) + for (const lesson of bugs.lessons) { + lines.push(` - ${lesson}`) + } + } + + lines.push('') + } + + return { + description: `Edit briefing for ${filePaths.join(', ')}`, + messages: [{ + role: 'user' as const, + content: { type: 'text' as const, text: lines.join('\n') }, + }], + } + } + ) +} diff --git a/src/mcp/resources.ts b/src/mcp/resources.ts new file mode 100644 index 0000000..f3a424f --- /dev/null +++ b/src/mcp/resources.ts @@ -0,0 +1,78 @@ +import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js' +import { ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js' +import { readFile, cortexPath } from '../utils/files.js' +import { listModuleDocs } from '../core/modules.js' + +export function registerResources(server: McpServer, projectRoot: string): void { + // Resource 1: Project overview (constitution) + 
server.registerResource( + 'project_overview', + 'codecortex://project/overview', + { + description: 'Project constitution — architecture, risk map, and available knowledge.', + mimeType: 'text/markdown', + }, + async () => { + const content = await readFile(cortexPath(projectRoot, 'constitution.md')) + return { + contents: [{ + uri: 'codecortex://project/overview', + mimeType: 'text/markdown', + text: content || 'No constitution found. Run `codecortex init` first.', + }], + } + } + ) + + // Resource 2: Hotspots (risk-ranked files) + server.registerResource( + 'project_hotspots', + 'codecortex://project/hotspots', + { + description: 'Risk-ranked files — change frequency, coupling, and bug history.', + mimeType: 'text/markdown', + }, + async () => { + const content = await readFile(cortexPath(projectRoot, 'hotspots.md')) + return { + contents: [{ + uri: 'codecortex://project/hotspots', + mimeType: 'text/markdown', + text: content || 'No hotspots data. Run `codecortex init` first.', + }], + } + } + ) + + // Resource 3: Module docs (template) + server.registerResource( + 'module_doc', + new ResourceTemplate('codecortex://module/{name}', { + list: async () => { + const modules = await listModuleDocs(projectRoot) + return { + resources: modules.map(name => ({ + uri: `codecortex://module/${name}`, + name: `Module: ${name}`, + description: `Documentation for the ${name} module.`, + mimeType: 'text/markdown', + })), + } + }, + }), + { + description: 'Module documentation — purpose, data flow, public API, gotchas.', + mimeType: 'text/markdown', + }, + async (uri, { name }) => { + const content = await readFile(cortexPath(projectRoot, `modules/${name}.md`)) + return { + contents: [{ + uri: uri.href, + mimeType: 'text/markdown', + text: content || `No documentation found for module "${name}".`, + }], + } + } + ) +} diff --git a/src/mcp/server.ts b/src/mcp/server.ts index c552926..303fac4 100644 --- a/src/mcp/server.ts +++ b/src/mcp/server.ts @@ -2,7 +2,7 @@ * CodeCortex MCP 
Server * * Serves codebase knowledge to AI agents via Model Context Protocol. - * 13 tools: 8 read + 5 write (navigation, risk, memory). + * 5 tools + 3 resources + 2 prompts for navigation, risk, and editing safety. * * Usage: * codecortex serve @@ -22,17 +22,19 @@ import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js' import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js' import { registerReadTools } from './tools/read.js' -import { registerWriteTools } from './tools/write.js' +import { registerResources } from './resources.js' +import { registerPrompts } from './prompts.js' export function createServer(projectRoot: string): McpServer { const server = new McpServer({ name: 'codecortex', - version: '0.5.0', - description: 'Persistent codebase knowledge layer for AI agents. Architecture, dependencies, coupling, risk, and cross-session memory.', + version: '0.6.0', + description: '5 tools for codebase navigation, risk assessment, and editing safety. 
Architecture, dependencies, coupling, and hidden dependency detection.', }) registerReadTools(server, projectRoot) - registerWriteTools(server, projectRoot) + registerResources(server, projectRoot) + registerPrompts(server, projectRoot) return server } diff --git a/src/mcp/tools/read.ts b/src/mcp/tools/read.ts index 0c8de03..a188c78 100644 --- a/src/mcp/tools/read.ts +++ b/src/mcp/tools/read.ts @@ -2,12 +2,8 @@ import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js' import { z } from 'zod' import { readFile, cortexPath } from '../../utils/files.js' import { readGraph, getModuleDependencies, getMostImportedFiles, getFileImporters } from '../../core/graph.js' -import { readModuleDoc, listModuleDocs } from '../../core/modules.js' -import { listSessions, readSession, getLatestSession } from '../../core/sessions.js' -import { listDecisions, readDecision } from '../../core/decisions.js' -import { searchKnowledge } from '../../core/search.js' import { computeFreshness } from '../../core/freshness.js' -import { capString, truncateArray } from '../../utils/truncate.js' +import { truncateArray } from '../../utils/truncate.js' import { readManifest } from '../../core/manifest.js' import { getSizeLimits, type SizeLimits, type DetailLevel } from '../../core/project-size.js' import type { TemporalData, SymbolIndex, FreshnessInfo } from '../../types/index.js' @@ -81,159 +77,7 @@ export function registerReadTools(server: McpServer, projectRoot: string): void } ) - // --- Tool 2: get_module_context --- - server.registerTool( - 'get_module_context', - { - description: 'Get deep context for a specific module: purpose, data flow, public API, gotchas, dependencies, and temporal signals (churn, coupling, bug history). 
Use after get_project_overview when you need to work on a specific module.', - inputSchema: { - name: z.string().describe('Module name (e.g., "scoring", "api", "indexer")'), - detail: z.enum(['brief', 'full']).default('brief').describe('Response detail level. "brief" (default) uses size-adaptive caps. "full" returns complete data (use only when you need exhaustive info).'), - }, - }, - async ({ name, detail }) => { - const limits = await getLimits(detail) - const doc = await readModuleDoc(projectRoot, name) - if (!doc) { - const available = await listModuleDocs(projectRoot) - return textResult({ found: false, name, available, message: `Module "${name}" not found. Available modules: ${available.join(', ')}` }) - } - - const cappedDoc = capString(doc, limits.moduleDocCap) - - const graph = await readGraph(projectRoot) - let depSummary = null - if (graph) { - const deps = getModuleDependencies(graph, name) - - const importsFrom = new Set() - const importedBy = new Set() - const externalDeps = new Set() - - for (const edge of deps.imports) { - const targetMod = graph.modules.find(m => m.files.includes(edge.target)) - if (targetMod && targetMod.name !== name) importsFrom.add(targetMod.name) - if (!edge.target.startsWith('.') && !edge.target.startsWith('/')) { - externalDeps.add(edge.target) - } - } - for (const edge of deps.importedBy) { - const sourceMod = graph.modules.find(m => m.files.includes(edge.source)) - if (sourceMod && sourceMod.name !== name) importedBy.add(sourceMod.name) - } - - const modFiles = new Set(graph.modules.find(m => m.name === name)?.files ?? 
[]) - for (const [pkg, files] of Object.entries(graph.externalDeps)) { - if (files.some(f => modFiles.has(f))) externalDeps.add(pkg) - } - - const extDepsArr = [...externalDeps] - const importsFromArr = [...importsFrom] - const importedByArr = [...importedBy] - depSummary = { - importsFrom: importsFromArr.slice(0, limits.depModuleNameCap), - importedBy: importedByArr.slice(0, limits.depModuleNameCap), - totalImportsFrom: importsFromArr.length, - totalImportedBy: importedByArr.length, - externalDeps: extDepsArr.slice(0, limits.depExternalCap), - totalExternalDeps: extDepsArr.length, - } - } - - const freshness = await getFreshness() - - return textResult(withFreshness({ found: true, name, doc: cappedDoc, dependencies: depSummary }, freshness)) - } - ) - - // --- Tool 3: get_session_briefing --- - server.registerTool( - 'get_session_briefing', - { - description: 'Get a briefing of what changed since the last session. Shows files changed, modules affected, and decisions recorded. Use at the start of a new session to catch up.', - inputSchema: {}, - }, - async () => { - const latestId = await getLatestSession(projectRoot) - if (!latestId) { - return textResult({ hasSession: false, message: 'No previous sessions recorded.' }) - } - - const session = await readSession(projectRoot, latestId) - const allSessions = await listSessions(projectRoot) - const freshness = await getFreshness() - - return textResult(withFreshness({ - hasSession: true, - latest: session, - totalSessions: allSessions.length, - recentSessionIds: allSessions.slice(0, (await getLimits()).sessionsCap), - }, freshness)) - } - ) - - // --- Tool 4: search_knowledge --- - server.registerTool( - 'search_knowledge', - { - description: 'Find where a function, class, type, or file is DEFINED. Returns ranked results: exported definitions first, local vars demoted. 
For content/concept search ("how does X work?"), use grep instead — this tool searches symbol names, not file contents.', - inputSchema: { - query: z.string().describe('Search term or phrase (e.g., "auth", "processData", "gateway")'), - limit: z.number().int().min(1).max(50).optional().describe('Max results to return. Defaults to size-adaptive limit.'), - detail: z.enum(['brief', 'full']).default('brief').describe('Response detail level. "brief" (default) uses size-adaptive caps. "full" returns more results.'), - }, - }, - async ({ query, limit, detail }) => { - const limits = await getLimits(detail) - const effectiveLimit = limit ?? limits.searchDefaultLimit - const results = await searchKnowledge(projectRoot, query, effectiveLimit) - const freshness = await getFreshness() - - return textResult(withFreshness({ - query, - totalResults: results.length, - results, - }, freshness)) - } - ) - - // --- Tool 5: get_decision_history --- - server.registerTool( - 'get_decision_history', - { - description: 'Get architectural decision records. Shows WHY the codebase is built the way it is. Filter by topic keyword.', - inputSchema: { - topic: z.string().optional().describe('Optional keyword to filter decisions'), - detail: z.enum(['brief', 'full']).default('brief').describe('Response detail level. "brief" (default) uses size-adaptive caps. 
"full" returns complete data.'), - }, - }, - async ({ topic, detail }) => { - const limits = await getLimits(detail) - const ids = await listDecisions(projectRoot) - const decisions: string[] = [] - - for (const id of ids) { - const content = await readDecision(projectRoot, id) - if (content) { - if (!topic || content.toLowerCase().includes(topic.toLowerCase())) { - decisions.push(capString(content, limits.decisionCharCap)) - } - } - } - - const capped = truncateArray(decisions, limits.decisionCap, 'decisions') - const freshness = await getFreshness() - - return textResult(withFreshness({ - total: capped.total, - topic: topic || 'all', - decisions: capped.items, - ...(capped.truncated ? { truncated: capped.message } : {}), - }, freshness)) - } - ) - - // --- Tool 6: get_dependency_graph --- + // --- Tool 2: get_dependency_graph --- server.registerTool( 'get_dependency_graph', { @@ -292,7 +136,7 @@ export function registerReadTools(server: McpServer, projectRoot: string): void } ) - // --- Tool 7: lookup_symbol --- + // --- Tool 3: lookup_symbol --- server.registerTool( 'lookup_symbol', { @@ -326,7 +170,7 @@ export function registerReadTools(server: McpServer, projectRoot: string): void } ) - // --- Tool 8: get_change_coupling --- + // --- Tool 4: get_change_coupling --- server.registerTool( 'get_change_coupling', { @@ -367,60 +211,7 @@ export function registerReadTools(server: McpServer, projectRoot: string): void } ) - // --- Tool 9: get_hotspots --- - server.registerTool( - 'get_hotspots', - { - description: 'Get files ranked by risk: change frequency (churn), coupling count, and bug history. Volatile files with many couplings need extra care when editing.', - inputSchema: { - limit: z.number().int().min(1).max(50).default(10).describe('Number of files to return'), - }, - }, - async ({ limit }) => { - const content = await readFile(cortexPath(projectRoot, 'temporal.json')) - if (!content) return textResult({ found: false, message: 'No temporal data. 
Run codecortex init first.' }) - - const temporal: TemporalData = JSON.parse(content) - - // Calculate risk score: churn + coupling count + bug count - const riskMap = new Map() - - for (const h of temporal.hotspots) { - riskMap.set(h.file, { churn: h.changes, couplings: 0, bugs: 0, risk: h.changes }) - } - - for (const c of temporal.coupling) { - for (const f of [c.fileA, c.fileB]) { - const entry = riskMap.get(f) || { churn: 0, couplings: 0, bugs: 0, risk: 0 } - entry.couplings++ - entry.risk += c.strength * 2 - riskMap.set(f, entry) - } - } - - for (const b of temporal.bugHistory) { - const entry = riskMap.get(b.file) || { churn: 0, couplings: 0, bugs: 0, risk: 0 } - entry.bugs = b.fixCommits - entry.risk += b.fixCommits * 3 - riskMap.set(b.file, entry) - } - - const ranked = [...riskMap.entries()] - .sort((a, b) => b[1].risk - a[1].risk) - .slice(0, limit) - .map(([file, data]) => ({ file, ...data, risk: Math.round(data.risk * 100) / 100 })) - - const freshness = await getFreshness() - - return textResult(withFreshness({ - period: `${temporal.periodDays} days`, - totalCommits: temporal.totalCommits, - hotspots: ranked, - }, freshness)) - } - ) - - // --- Tool 10: get_edit_briefing --- + // --- Tool 5: get_edit_briefing --- server.registerTool( 'get_edit_briefing', { diff --git a/src/mcp/tools/write.ts b/src/mcp/tools/write.ts deleted file mode 100644 index ac30fcf..0000000 --- a/src/mcp/tools/write.ts +++ /dev/null @@ -1,99 +0,0 @@ -import type { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js' -import { z } from 'zod' -import { readFile as readFileUtil, cortexPath } from '../../utils/files.js' -import { writeDecision, createDecision } from '../../core/decisions.js' -import { addPattern } from '../../core/patterns.js' -import { writeFile, ensureDir } from '../../utils/files.js' - -function textResult(data: unknown) { - return { content: [{ type: 'text' as const, text: JSON.stringify(data, null, 2) }] } -} - -export function registerWriteTools(server: 
McpServer, projectRoot: string): void { - // --- Tool 11: record_decision --- - server.registerTool( - 'record_decision', - { - description: 'Record an architectural decision. Documents WHY something is built a certain way, what alternatives were considered, and consequences. Use whenever a non-obvious technical choice is made.', - inputSchema: { - title: z.string().describe('Decision title (e.g., "Use tree-sitter for parsing")'), - context: z.string().describe('What situation led to this decision'), - decision: z.string().describe('What was decided'), - alternatives: z.array(z.string()).default([]).describe('What other options were considered'), - consequences: z.array(z.string()).default([]).describe('Expected consequences of this decision'), - }, - }, - async ({ title, context, decision, alternatives, consequences }) => { - const record = createDecision({ title, context, decision, alternatives, consequences }) - await writeDecision(projectRoot, record) - - return textResult({ - recorded: true, - id: record.id, - path: `.codecortex/decisions/${record.id}.md`, - }) - } - ) - - // --- Tool 12: update_patterns --- - server.registerTool( - 'update_patterns', - { - description: 'Add or update a coding pattern. Patterns document HOW code is written in this project (naming conventions, error handling, testing approaches). 
Returns "added", "updated", or "noop".', - inputSchema: { - name: z.string().describe('Pattern name (e.g., "Error handling in API routes")'), - description: z.string().describe('What the pattern is and when to use it'), - example: z.string().describe('Code example showing the pattern'), - files: z.array(z.string()).default([]).describe('Files where this pattern is used'), - }, - }, - async ({ name, description, example, files }) => { - const result = await addPattern(projectRoot, { name, description, example, files }) - - return textResult({ - action: result, - pattern: name, - path: '.codecortex/patterns.md', - }) - } - ) - - // --- Tool 13: record_observation --- - server.registerTool( - 'record_observation', - { - description: 'Record something you learned about the codebase. Use this to capture observations, gotchas, undocumented dependencies, environment requirements, or anything future agents should know. Observations persist across sessions.', - inputSchema: { - topic: z.string().describe('Short topic label (e.g., "circular dependency in auth", "Docker required for tests")'), - observation: z.string().describe('What you observed or learned'), - files: z.array(z.string()).default([]).describe('Related file paths (optional)'), - reporter: z.string().default('agent').describe('Who is reporting (default: agent)'), - }, - }, - async ({ topic, observation, files, reporter }) => { - const dir = cortexPath(projectRoot, 'observations') - await ensureDir(dir) - - const entry = { - date: new Date().toISOString(), - topic, - observation, - files, - reporter, - } - - // Append to observations log - const obsPath = cortexPath(projectRoot, 'observations', 'log.json') - const existing = await readFileUtil(obsPath) - const entries = existing ? JSON.parse(existing) : [] - entries.push(entry) - await writeFile(obsPath, JSON.stringify(entries, null, 2)) - - return textResult({ - recorded: true, - totalObservations: entries.length, - message: 'Observation recorded. 
Future agents will see this.', - }) - } - ) -} diff --git a/tests/core/agent-instructions.test.ts b/tests/core/agent-instructions.test.ts index fa8f2c5..ede7d1e 100644 --- a/tests/core/agent-instructions.test.ts +++ b/tests/core/agent-instructions.test.ts @@ -2,7 +2,6 @@ import { describe, it, expect, beforeEach, afterEach } from 'vitest' import { mkdtemp, rm, mkdir, readFile, writeFile } from 'node:fs/promises' import { join } from 'node:path' import { tmpdir } from 'node:os' -import { existsSync } from 'node:fs' import { generateAgentInstructions, AGENT_INSTRUCTIONS } from '../../src/core/agent-instructions.js' let root: string @@ -24,15 +23,17 @@ describe('generateAgentInstructions', () => { expect(agentMd).toBe(AGENT_INSTRUCTIONS) }) - it('creates CLAUDE.md with pointer when none exists', async () => { + it('creates CLAUDE.md with inline context when none exists', async () => { await generateAgentInstructions(root) const claudeMd = await readFile(join(root, 'CLAUDE.md'), 'utf-8') expect(claudeMd).toContain('## CodeCortex') - expect(claudeMd).toContain('.codecortex/AGENT.md') + expect(claudeMd).toContain('') + expect(claudeMd).toContain('') + expect(claudeMd).toContain('get_edit_briefing') }) - it('appends pointer to existing CLAUDE.md', async () => { + it('appends inline context to existing CLAUDE.md', async () => { await writeFile(join(root, 'CLAUDE.md'), '# My Project\n\nSome instructions.\n', 'utf-8') await generateAgentInstructions(root) @@ -40,8 +41,8 @@ describe('generateAgentInstructions', () => { const claudeMd = await readFile(join(root, 'CLAUDE.md'), 'utf-8') expect(claudeMd).toContain('# My Project') expect(claudeMd).toContain('Some instructions.') - expect(claudeMd).toContain('## CodeCortex') - expect(claudeMd).toContain('.codecortex/AGENT.md') + expect(claudeMd).toContain('') + expect(claudeMd).toContain('get_edit_briefing') }) it('is idempotent — does not duplicate on re-run', async () => { @@ -103,22 +104,30 @@ 
describe('generateAgentInstructions', () => { expect(updated).toContain('AGENT.md') expect(updated).toContain('.cursorrules') expect(updated).toContain('AGENTS.md') - // CLAUDE.md also exists since it gets created as default? No — .cursorrules exists so CLAUDE.md only if it exists }) - it('AGENT.md contains all tool names', async () => { + it('AGENT.md contains the 5 kept tool names', async () => { await generateAgentInstructions(root) const agentMd = await readFile(join(root, '.codecortex', 'AGENT.md'), 'utf-8') const expectedTools = [ - 'get_project_overview', 'search_knowledge', 'get_edit_briefing', - 'get_change_coupling', 'lookup_symbol', 'get_module_context', - 'get_dependency_graph', 'get_hotspots', 'get_decision_history', - 'get_session_briefing', 'record_decision', 'update_patterns', - 'record_observation', + 'get_project_overview', 'get_edit_briefing', + 'get_change_coupling', 'lookup_symbol', + 'get_dependency_graph', ] for (const tool of expectedTools) { expect(agentMd).toContain(tool) } + + // Dropped tools should NOT be in AGENT.md + const droppedTools = [ + 'search_knowledge', 'get_module_context', + 'get_hotspots', 'get_decision_history', + 'get_session_briefing', 'record_decision', + 'update_patterns', 'record_observation', + ] + for (const tool of droppedTools) { + expect(agentMd).not.toContain(tool) + } }) }) diff --git a/tests/core/context-injection.test.ts b/tests/core/context-injection.test.ts new file mode 100644 index 0000000..62f7e1b --- /dev/null +++ b/tests/core/context-injection.test.ts @@ -0,0 +1,197 @@ +import { describe, it, expect, beforeEach, afterEach } from 'vitest' +import { mkdtemp, rm, mkdir, readFile, writeFile } from 'node:fs/promises' +import { join } from 'node:path' +import { tmpdir } from 'node:os' +import { generateInlineContext, injectIntoFile, injectAllAgentFiles } from '../../src/core/context-injection.js' + +let root: string + +beforeEach(async () => { + root = await mkdtemp(join(tmpdir(), 'codecortex-inject-test-')) 
+  await mkdir(join(root, '.codecortex'), { recursive: true })
+})
+
+afterEach(async () => {
+  await rm(root, { recursive: true, force: true })
+})
+
+describe('generateInlineContext', () => {
+  it('produces sections even with no data', async () => {
+    const content = await generateInlineContext(root)
+
+    expect(content).toContain('<!-- codecortex:start -->')
+    expect(content).toContain('<!-- codecortex:end -->')
+    expect(content).toContain('## CodeCortex')
+    expect(content).toContain('### Before Editing')
+    expect(content).toContain('### MCP Tools (if available)')
+    expect(content).toContain('### Project Knowledge')
+  })
+
+  it('includes architecture when manifest exists', async () => {
+    // Write a minimal manifest that readManifest can parse
+    const { writeManifest, createManifest } = await import('../../src/core/manifest.js')
+    await writeManifest(root, createManifest({
+      project: 'test-project',
+      root,
+      languages: ['typescript', 'python'],
+      totalFiles: 42,
+      totalSymbols: 500,
+      totalModules: 5,
+    }))
+
+    const content = await generateInlineContext(root)
+
+    expect(content).toContain('### Architecture')
+    expect(content).toContain('test-project')
+    expect(content).toContain('typescript')
+  })
+
+  it('includes risk map when temporal data exists', async () => {
+    const temporal = {
+      generated: new Date().toISOString(),
+      periodDays: 90,
+      totalCommits: 50,
+      hotspots: [
+        { file: 'src/main.ts', changes: 12, stability: 'volatile', lastChanged: new Date().toISOString(), daysSinceChange: 1 },
+      ],
+      coupling: [
+        { fileA: 'src/a.ts', fileB: 'src/b.ts', cochanges: 5, strength: 0.75, hasImport: false },
+      ],
+      bugHistory: [
+        { file: 'src/main.ts', fixCommits: 3, lessons: ['fixed crash'] },
+      ],
+    }
+    await writeFile(join(root, '.codecortex', 'temporal.json'), JSON.stringify(temporal), 'utf-8')
+
+    const content = await generateInlineContext(root)
+
+    expect(content).toContain('### Risk Map')
+    expect(content).toContain('src/main.ts')
+    expect(content).toContain('12 changes')
+    expect(content).toContain('3 bug-fixes') // Bug count shown inline with hotspot
+    expect(content).toContain('src/a.ts')
+    expect(content).toContain('75% co-change')
+  })
+
+  it('includes all 5 tool names', async () => {
+    const content = await generateInlineContext(root)
+
+    expect(content).toContain('get_project_overview')
+    expect(content).toContain('get_dependency_graph')
+    expect(content).toContain('lookup_symbol')
+    expect(content).toContain('get_change_coupling')
+    expect(content).toContain('get_edit_briefing')
+  })
+
+  it('does not include dropped tool names', async () => {
+    const content = await generateInlineContext(root)
+
+    expect(content).not.toContain('search_knowledge')
+    expect(content).not.toContain('get_module_context')
+    expect(content).not.toContain('get_hotspots')
+    expect(content).not.toContain('record_decision')
+  })
+})
+
+describe('injectIntoFile', () => {
+  it('creates file if it does not exist', async () => {
+    const filePath = join(root, 'NEW.md')
+    const content = '<!-- codecortex:start -->\ntest\n<!-- codecortex:end -->\n'
+
+    const result = await injectIntoFile(filePath, content)
+
+    expect(result).toBe(true)
+    const written = await readFile(filePath, 'utf-8')
+    expect(written).toBe(content)
+  })
+
+  it('replaces between markers (idempotent)', async () => {
+    const filePath = join(root, 'TEST.md')
+    const original = '# Title\n\n<!-- codecortex:start -->\nold content\n<!-- codecortex:end -->\n\n# Footer\n'
+    await writeFile(filePath, original, 'utf-8')
+
+    const newContent = '<!-- codecortex:start -->\nnew content\n<!-- codecortex:end -->\n'
+    const result = await injectIntoFile(filePath, newContent)
+
+    expect(result).toBe(true)
+    const written = await readFile(filePath, 'utf-8')
+    expect(written).toContain('# Title')
+    expect(written).toContain('new content')
+    expect(written).not.toContain('old content')
+    expect(written).toContain('# Footer')
+  })
+
+  it('returns false when content unchanged', async () => {
+    const filePath = join(root, 'TEST.md')
+    const content = '<!-- codecortex:start -->\ntest\n<!-- codecortex:end -->\n'
+    await writeFile(filePath, content, 'utf-8')
+
+    const result = await injectIntoFile(filePath, content.trimEnd())
+
+    expect(result).toBe(false)
+  })
+
+  it('migrates from old 3-line pointer', async () => {
+    const filePath = join(root, 'CLAUDE.md')
+    const old = '# My Project\n\n## CodeCortex\nThis project uses CodeCortex for codebase knowledge. See `.codecortex/AGENT.md` for available MCP tools and when to use them.\n'
+    await writeFile(filePath, old, 'utf-8')
+
+    const newContent = '<!-- codecortex:start -->\n## CodeCortex — inline\n<!-- codecortex:end -->\n'
+    const result = await injectIntoFile(filePath, newContent)
+
+    expect(result).toBe(true)
+    const written = await readFile(filePath, 'utf-8')
+    expect(written).toContain('# My Project')
+    expect(written).toContain('<!-- codecortex:start -->')
+    expect(written).not.toContain('.codecortex/AGENT.md')
+  })
+
+  it('appends to file without markers', async () => {
+    const filePath = join(root, 'CLAUDE.md')
+    await writeFile(filePath, '# My Project\n\nSome rules.\n', 'utf-8')
+
+    const newContent = '<!-- codecortex:start -->\ninjected\n<!-- codecortex:end -->\n'
+    const result = await injectIntoFile(filePath, newContent)
+
+    expect(result).toBe(true)
+    const written = await readFile(filePath, 'utf-8')
+    expect(written).toContain('# My Project')
+    expect(written).toContain('Some rules.')
+    expect(written).toContain('<!-- codecortex:start -->')
+  })
+})
+
+describe('injectAllAgentFiles', () => {
+  it('creates CLAUDE.md when no config files exist', async () => {
+    const updated = await injectAllAgentFiles(root)
+
+    expect(updated).toContain('CLAUDE.md')
+    const claudeMd = await readFile(join(root, 'CLAUDE.md'), 'utf-8')
+    expect(claudeMd).toContain('<!-- codecortex:start -->')
+  })
+
+  it('injects into all existing config files', async () => {
+    await writeFile(join(root, 'CLAUDE.md'),
'# Project\n', 'utf-8') + await writeFile(join(root, '.cursorrules'), '# Cursor\n', 'utf-8') + + const updated = await injectAllAgentFiles(root) + + expect(updated).toContain('CLAUDE.md') + expect(updated).toContain('.cursorrules') + }) +}) diff --git a/tests/git/temporal-hotspots.test.ts b/tests/git/temporal-hotspots.test.ts new file mode 100644 index 0000000..c461765 --- /dev/null +++ b/tests/git/temporal-hotspots.test.ts @@ -0,0 +1,105 @@ +import { describe, it, expect } from 'vitest' +import { generateHotspotsMarkdown } from '../../src/git/temporal.js' +import type { TemporalData } from '../../src/types/index.js' + +const baseTemporal: TemporalData = { + generated: '2026-03-11T00:00:00.000Z', + periodDays: 90, + totalCommits: 50, + hotspots: [], + coupling: [], + bugHistory: [], +} + +describe('generateHotspotsMarkdown', () => { + it('produces valid markdown with header', () => { + const md = generateHotspotsMarkdown(baseTemporal) + + expect(md).toContain('# Risk-Ranked Files') + expect(md).toContain('50 commits analyzed over 90 days') + }) + + it('handles empty hotspots gracefully', () => { + const md = generateHotspotsMarkdown(baseTemporal) + + expect(md).toContain('No hotspots detected.') + expect(md).not.toContain('| File |') + }) + + it('generates markdown table from hotspots', () => { + const temporal: TemporalData = { + ...baseTemporal, + hotspots: [ + { file: 'src/main.ts', changes: 10, stability: 'volatile', lastChanged: '2026-03-10', daysSinceChange: 1 }, + { file: 'src/util.ts', changes: 3, stability: 'stable', lastChanged: '2026-02-01', daysSinceChange: 38 }, + ], + } + + const md = generateHotspotsMarkdown(temporal) + + expect(md).toContain('| File | Changes | Couplings | Bugs | Risk | Stability |') + expect(md).toContain('`src/main.ts`') + expect(md).toContain('`src/util.ts`') + expect(md).toContain('volatile') + expect(md).toContain('stable') + }) + + it('includes coupling and bug data in risk scores', () => { + const temporal: TemporalData = { + 
...baseTemporal, + hotspots: [ + { file: 'src/risky.ts', changes: 5, stability: 'moderate', lastChanged: '2026-03-10', daysSinceChange: 1 }, + ], + coupling: [ + { fileA: 'src/risky.ts', fileB: 'src/other.ts', cochanges: 4, strength: 0.8, hasImport: false }, + ], + bugHistory: [ + { file: 'src/risky.ts', fixCommits: 2, lessons: ['fixed crash'] }, + ], + } + + const md = generateHotspotsMarkdown(temporal) + + // Risk = 5 (churn) + 0.8*2 (coupling) + 2*3 (bugs) = 12.6 + expect(md).toContain('`src/risky.ts`') + // Coupling count = 1, bugs = 2 + const lines = md.split('\n') + const riskyLine = lines.find(l => l.includes('src/risky.ts')) + expect(riskyLine).toContain('| 1 |') // couplings + expect(riskyLine).toContain('| 2 |') // bugs + }) + + it('caps at 30 files', () => { + const temporal: TemporalData = { + ...baseTemporal, + hotspots: Array.from({ length: 50 }, (_, i) => ({ + file: `src/file${i}.ts`, + changes: 50 - i, + stability: 'moderate' as const, + lastChanged: '2026-03-10', + daysSinceChange: 1, + })), + } + + const md = generateHotspotsMarkdown(temporal) + const tableRows = md.split('\n').filter(l => l.startsWith('| `')) + + expect(tableRows.length).toBe(30) + }) + + it('sorts by risk score descending', () => { + const temporal: TemporalData = { + ...baseTemporal, + hotspots: [ + { file: 'src/low.ts', changes: 1, stability: 'stable', lastChanged: '2026-03-10', daysSinceChange: 1 }, + { file: 'src/high.ts', changes: 20, stability: 'volatile', lastChanged: '2026-03-10', daysSinceChange: 1 }, + ], + } + + const md = generateHotspotsMarkdown(temporal) + const lines = md.split('\n').filter(l => l.startsWith('| `')) + + expect(lines[0]).toContain('src/high.ts') + expect(lines[1]).toContain('src/low.ts') + }) +}) diff --git a/tests/mcp/prompts.test.ts b/tests/mcp/prompts.test.ts new file mode 100644 index 0000000..036270c --- /dev/null +++ b/tests/mcp/prompts.test.ts @@ -0,0 +1,74 @@ +import { describe, it, expect, beforeAll, afterAll } from 'vitest' +import { 
createFixture, type Fixture } from '../fixtures/setup.js'
+import { createServer } from '../../src/mcp/server.js'
+import { readFile, cortexPath } from '../../src/utils/files.js'
+import { writeSession, createSession } from '../../src/core/sessions.js'
+
+let fixture: Fixture
+
+beforeAll(async () => {
+  fixture = await createFixture()
+})
+
+afterAll(async () => {
+  await fixture.cleanup()
+})
+
+describe('MCP Prompts', () => {
+  it('registers 2 prompts on the server', () => {
+    const server = createServer(fixture.root)
+    // Server creation should not throw; prompts registered successfully
+    expect(server).toBeDefined()
+  })
+
+  it('start_session prompt has constitution data available', async () => {
+    const constitution = await readFile(cortexPath(fixture.root, 'constitution.md'))
+
+    expect(constitution).not.toBeNull()
+    expect(constitution).toContain('Constitution')
+    expect(constitution).toContain('test-project')
+  })
+
+  it('start_session prompt includes session when available', async () => {
+    // Seed a session
+    const session = createSession({
+      filesChanged: ['src/test.ts'],
+      modulesAffected: ['core'],
+      summary: 'Test session for prompt testing.',
+    })
+    await writeSession(fixture.root, session)
+
+    // Verify session data exists
+    const { getLatestSession, readSession } = await import('../../src/core/sessions.js')
+    const latestId = await getLatestSession(fixture.root)
+    expect(latestId).not.toBeNull()
+
+    const sessionContent = await readSession(fixture.root, latestId!)
+    expect(sessionContent).toContain('Test session for prompt testing')
+  })
+
+  it('before_editing prompt uses temporal data', async () => {
+    const temporalContent = await readFile(cortexPath(fixture.root, 'temporal.json'))
+    expect(temporalContent).not.toBeNull()
+
+    const temporal = JSON.parse(temporalContent!)
+ + // Verify temporal data has the structure the prompt expects + expect(temporal.hotspots).toBeDefined() + expect(temporal.coupling).toBeDefined() + expect(temporal.bugHistory).toBeDefined() + }) + + it('before_editing prompt finds coupling for known file', async () => { + const temporalContent = await readFile(cortexPath(fixture.root, 'temporal.json')) + const temporal = JSON.parse(temporalContent!) + + const file = 'processor.ts' + const couplings = temporal.coupling.filter((c: { fileA: string; fileB: string }) => + c.fileA.includes(file) || c.fileB.includes(file) + ) + + expect(couplings.length).toBeGreaterThan(0) + // Hidden dependency should be flagged + expect(couplings.some((c: { hasImport: boolean }) => !c.hasImport)).toBe(true) + }) +}) diff --git a/tests/mcp/read-tools.test.ts b/tests/mcp/read-tools.test.ts index a18fcdc..41b9be6 100644 --- a/tests/mcp/read-tools.test.ts +++ b/tests/mcp/read-tools.test.ts @@ -11,10 +11,6 @@ import { createFixture, type Fixture } from '../fixtures/setup.js' import { readFile, cortexPath } from '../../src/utils/files.js' import { readManifest, updateManifest } from '../../src/core/manifest.js' import { readGraph, getModuleDependencies, getMostImportedFiles, getFileImporters } from '../../src/core/graph.js' -import { readModuleDoc, listModuleDocs } from '../../src/core/modules.js' -import { listSessions, getLatestSession } from '../../src/core/sessions.js' -import { listDecisions } from '../../src/core/decisions.js' -import { searchKnowledge } from '../../src/core/search.js' import type { TemporalData, SymbolIndex } from '../../src/types/index.js' let fixture: Fixture @@ -92,87 +88,7 @@ describe('get_project_overview (tool 1)', () => { }) }) -describe('get_module_context (tool 2)', () => { - it('returns null for modules that have no .md doc', async () => { - const doc = await readModuleDoc(fixture.root, 'core') - expect(doc).toBeNull() // No module doc created yet - }) - - it('lists available modules (empty until 
analysis)', async () => { - const available = await listModuleDocs(fixture.root) - expect(available).toEqual([]) // No .md files in modules/ yet - }) - - it('returns dependencies for known module', async () => { - const graph = await readGraph(fixture.root) - expect(graph).not.toBeNull() - const deps = getModuleDependencies(graph!, 'core') - expect(deps.imports.length).toBeGreaterThan(0) - }) -}) - -describe('get_session_briefing (tool 3)', () => { - it('returns null when no sessions exist', async () => { - const latest = await getLatestSession(fixture.root) - expect(latest).toBeNull() - }) - - it('lists zero sessions', async () => { - const sessions = await listSessions(fixture.root) - expect(sessions).toHaveLength(0) - }) -}) - -describe('search_knowledge (tool 4)', () => { - it('finds results across knowledge files', async () => { - const results = await searchKnowledge(fixture.root, 'typescript') - expect(results.length).toBeGreaterThan(0) - }) - - it('limits results to 20 (matching tool behavior)', async () => { - const results = await searchKnowledge(fixture.root, 'test') - const limited = results.slice(0, 20) - expect(limited.length).toBeLessThanOrEqual(20) - }) - - it('respects custom limit param', async () => { - const results = await searchKnowledge(fixture.root, 'process', 2) - expect(results.length).toBeLessThanOrEqual(2) - }) - - it('finds symbols by name with type=symbol', async () => { - const results = await searchKnowledge(fixture.root, 'processData') - const symbolResults = results.filter(r => r.type === 'symbol') - expect(symbolResults.length).toBeGreaterThan(0) - // exact(10) + function(2) + exported(1) = 13 - expect(symbolResults[0]!.score).toBeGreaterThanOrEqual(10) - }) - - it('searchDefaultLimit exists in size limits', async () => { - const { getSizeLimits } = await import('../../src/core/project-size.js') - const micro = getSizeLimits('micro') - const large = getSizeLimits('large') - expect(micro.searchDefaultLimit).toBe(10) - 
expect(large.searchDefaultLimit).toBe(20) - }) - - it('ranks symbols higher than file paths and docs', async () => { - const results = await searchKnowledge(fixture.root, 'auth') - if (results.length >= 2) { - // First result should be highest scored - expect(results[0]!.score).toBeGreaterThanOrEqual(results[1]!.score) - } - }) -}) - -describe('get_decision_history (tool 5)', () => { - it('returns empty when no decisions exist', async () => { - const ids = await listDecisions(fixture.root) - expect(ids).toHaveLength(0) - }) -}) - -describe('get_dependency_graph (tool 6)', () => { +describe('get_dependency_graph (tool 2)', () => { it('returns full graph when no filter', async () => { const graph = await readGraph(fixture.root) expect(graph).not.toBeNull() @@ -222,7 +138,7 @@ describe('get_dependency_graph (tool 6)', () => { }) }) -describe('lookup_symbol (tool 7)', () => { +describe('lookup_symbol (tool 3)', () => { it('finds symbol by name', async () => { const content = await readFile(cortexPath(fixture.root, 'symbols.json')) const index: SymbolIndex = JSON.parse(content!) @@ -251,7 +167,7 @@ describe('lookup_symbol (tool 7)', () => { }) }) -describe('get_change_coupling (tool 8)', () => { +describe('get_change_coupling (tool 4)', () => { it('reads coupling data', async () => { const content = await readFile(cortexPath(fixture.root, 'temporal.json')) const temporal: TemporalData = JSON.parse(content!) @@ -289,28 +205,7 @@ describe('get_change_coupling (tool 8)', () => { }) }) -describe('get_hotspots (tool 9)', () => { - it('reads hotspot data sorted by changes', async () => { - const content = await readFile(cortexPath(fixture.root, 'temporal.json')) - const temporal: TemporalData = JSON.parse(content!) 
- - expect(temporal.hotspots).toHaveLength(2) - expect(temporal.hotspots[0]!.file).toContain('processor.ts') - expect(temporal.hotspots[0]!.changes).toBe(8) - expect(temporal.hotspots[0]!.stability).toBe('volatile') - }) - - it('includes bug history', async () => { - const content = await readFile(cortexPath(fixture.root, 'temporal.json')) - const temporal: TemporalData = JSON.parse(content!) - - expect(temporal.bugHistory).toHaveLength(1) - expect(temporal.bugHistory[0]!.fixCommits).toBe(3) - expect(temporal.bugHistory[0]!.lessons).toHaveLength(2) - }) -}) - -describe('get_edit_briefing (tool 10)', () => { +describe('get_edit_briefing (tool 5)', () => { it('returns risk assessment for a volatile file', async () => { const content = await readFile(cortexPath(fixture.root, 'temporal.json')) const temporal: TemporalData = JSON.parse(content!) diff --git a/tests/mcp/resources.test.ts b/tests/mcp/resources.test.ts new file mode 100644 index 0000000..056e816 --- /dev/null +++ b/tests/mcp/resources.test.ts @@ -0,0 +1,64 @@ +import { describe, it, expect, beforeAll, afterAll } from 'vitest' +import { createFixture, type Fixture } from '../fixtures/setup.js' +import { createServer } from '../../src/mcp/server.js' + +let fixture: Fixture + +beforeAll(async () => { + fixture = await createFixture() +}) + +afterAll(async () => { + await fixture.cleanup() +}) + +describe('MCP Resources', () => { + it('registers 3 resources on the server', () => { + const server = createServer(fixture.root) + // Server creation should not throw — resources registered successfully + expect(server).toBeDefined() + }) + + it('project_overview resource returns constitution content', async () => { + const { readFile, cortexPath } = await import('../../src/utils/files.js') + const content = await readFile(cortexPath(fixture.root, 'constitution.md')) + + expect(content).not.toBeNull() + expect(content).toContain('Constitution') + }) + + it('project_hotspots resource returns hotspots or fallback', 
async () => { + const { readFile, cortexPath } = await import('../../src/utils/files.js') + const content = await readFile(cortexPath(fixture.root, 'hotspots.md')) + + // Hotspots may or may not exist in fixture — either is fine + // The resource handler returns a fallback message if missing + expect(content === null || typeof content === 'string').toBe(true) + }) + + it('module template lists available modules', async () => { + const { listModuleDocs } = await import('../../src/core/modules.js') + const modules = await listModuleDocs(fixture.root) + + // Fixture starts with no module docs + expect(Array.isArray(modules)).toBe(true) + }) + + it('module template reads module doc by name', async () => { + const { writeModuleDoc, readModuleDoc } = await import('../../src/core/modules.js') + + // Write a test module doc + await writeModuleDoc(fixture.root, { + name: 'test-mod', + purpose: 'Test module for resource tests.', + dataFlow: 'None.', + publicApi: ['testFn()'], + gotchas: [], + dependencies: [], + }) + + const content = await readModuleDoc(fixture.root, 'test-mod') + expect(content).not.toBeNull() + expect(content).toContain('Test module for resource tests') + }) +}) diff --git a/tests/mcp/simulation.test.ts b/tests/mcp/simulation.test.ts index ca7c9eb..6e97c20 100644 --- a/tests/mcp/simulation.test.ts +++ b/tests/mcp/simulation.test.ts @@ -22,7 +22,6 @@ import { readModuleDoc, writeModuleDoc, listModuleDocs } from '../../src/core/mo import { writeDecision, createDecision, listDecisions, readDecision } from '../../src/core/decisions.js' import { writeSession, createSession, listSessions, readSession, getLatestSession } from '../../src/core/sessions.js' import { addPattern, readPatterns } from '../../src/core/patterns.js' -import { searchKnowledge } from '../../src/core/search.js' import type { TemporalData, SymbolIndex, DependencyGraph, ModuleAnalysis } from '../../src/types/index.js' let fixture: Fixture @@ -77,12 +76,12 @@ describe('Persona 1: New Agent — 
codebase discovery', () => { expect(graphSummary.mostImported[0]!.file).toBe('src/core/types.ts') }) - it('Step 2: picks "core" module and calls get_module_context', async () => { + it('Step 2: picks "core" module and reads module doc + graph deps', async () => { // Agent picks the first module from the graph const targetModule = graph.modules[0]!.name expect(targetModule).toBe('core') - // Tool: get_module_context + // Agent reads .codecortex/modules/core.md directly (no MCP tool needed) const doc = await readModuleDoc(fixture.root, targetModule) const deps = getModuleDependencies(graph, targetModule) @@ -127,8 +126,8 @@ describe('Persona 1: New Agent — codebase discovery', () => { describe('Persona 2: Bug Fixer — risk-focused investigation', () => { let temporal: TemporalData - it('Step 1: calls get_hotspots to find risky files', async () => { - // Tool: get_hotspots { limit: 5 } + it('Step 1: reads hotspots.md to find risky files', async () => { + // Agent reads .codecortex/hotspots.md directly (static file, no MCP tool) const content = await readFile(cortexPath(fixture.root, 'temporal.json')) temporal = JSON.parse(content!) 
@@ -189,12 +188,14 @@ describe('Persona 2: Bug Fixer — risk-focused investigation', () => { expect(fileSymbols[0]!.name).toBe('processData') }) - it('Step 4: calls search_knowledge for bug-related context', async () => { - // Tool: search_knowledge { query: "processor" } - const results = await searchKnowledge(fixture.root, 'processor') + it('Step 4: reads constitution for project context', async () => { + // Agent reads constitution directly (no search_knowledge MCP tool) + const constitution = await readFile(cortexPath(fixture.root, 'constitution.md')) - // Agent finds references in constitution, symbols, graph - expect(results.length).toBeGreaterThan(0) + // Agent finds project knowledge in constitution + expect(constitution).not.toBeNull() + expect(constitution).toContain('Constitution') + expect(constitution).toContain('test-project') }) }) @@ -280,12 +281,10 @@ describe('Persona 3: Feature Developer — write workflow', () => { const patterns = await readPatterns(fixture.root) expect(patterns).toContain('Configuration Constants') - // search_knowledge should find the new content - const results = await searchKnowledge(fixture.root, 'TIMEOUT') - expect(results.length).toBeGreaterThan(0) - // Should find it in both the module doc and the patterns - const sources = results.map(r => r.file) - expect(sources.some(f => f.includes('modules/'))).toBe(true) + // Agent reads module doc directly to verify content + const utilsDoc = await readModuleDoc(fixture.root, 'utils') + expect(utilsDoc).not.toBeNull() + expect(utilsDoc).toContain('TIMEOUT') }) }) @@ -293,8 +292,8 @@ describe('Persona 3: Feature Developer — write workflow', () => { // Persona 4: Session Resumer — picking up after a break // ───────────────────────────────────────────────────── describe('Persona 4: Session Resumer — context recovery', () => { - it('Step 1: calls get_session_briefing to catch up', async () => { - // Tool: get_session_briefing + it('Step 1: reads session files directly to catch up', 
async () => { + // Agent reads .codecortex/sessions/ directly (no MCP tool) const latestId = await getLatestSession(fixture.root) expect(latestId).not.toBeNull() @@ -325,11 +324,11 @@ describe('Persona 4: Session Resumer — context recovery', () => { expect(hidden.length).toBeGreaterThan(0) }) - it('Step 3: calls get_hotspots to check if changed files are becoming volatile', async () => { + it('Step 3: reads hotspots.md to check if changed files are volatile', async () => { const content = await readFile(cortexPath(fixture.root, 'temporal.json')) const temporal: TemporalData = JSON.parse(content!) - // Tool: get_hotspots { limit: 5 } + // Agent reads .codecortex/hotspots.md directly (static file) const processorHotspot = temporal.hotspots.find(h => h.file.includes('processor.ts')) expect(processorHotspot).toBeDefined() expect(processorHotspot!.stability).toBe('volatile') @@ -338,18 +337,17 @@ describe('Persona 4: Session Resumer — context recovery', () => { // Agent notes: processor.ts is volatile with 8 changes — needs stabilization }) - it('Step 4: calls search_knowledge for context on the race condition fix', async () => { - // Tool: search_knowledge { query: "race condition" } - // The session log mentions "race condition" — search should find the session - const results = await searchKnowledge(fixture.root, 'race condition') + it('Step 4: reads session files for context on the race condition fix', async () => { + // Agent reads session files directly (no search_knowledge MCP tool) + const latestId = await getLatestSession(fixture.root) + const session = await readSession(fixture.root, latestId!) 
- // Should find it in the session file - expect(results.length).toBeGreaterThan(0) - expect(results.some(r => r.file.includes('sessions/'))).toBe(true) + // Session mentions the race condition fix + expect(session).toContain('race condition') }) - it('Step 5: checks decisions made since last session', async () => { - // Tool: get_decision_history + it('Step 5: reads decisions directory for recent decisions', async () => { + // Agent reads .codecortex/decisions/ directly (no MCP tool) const ids = await listDecisions(fixture.root) // Persona 3 recorded a decision — agent sees it @@ -406,26 +404,25 @@ describe('Cross-persona: knowledge store integrity', () => { } }) - it('search_knowledge finds content written by Feature Developer', async () => { - const results = await searchKnowledge(fixture.root, 'formatOutput') - expect(results.length).toBeGreaterThan(0) - - // Should be discoverable in module doc, patterns, and/or constitution - const files = results.map(r => r.file) - expect(files.some(f => f.includes('modules/utils.md'))).toBe(true) + it('module doc written by Feature Developer is readable', async () => { + const doc = await readModuleDoc(fixture.root, 'utils') + expect(doc).not.toBeNull() + expect(doc).toContain('formatOutput') }) - it('session + decision + pattern writes are all searchable', async () => { + it('session + decision + pattern writes are all readable', async () => { // Session content - const sessionResults = await searchKnowledge(fixture.root, 'mutex lock') - expect(sessionResults.length).toBeGreaterThan(0) + const latestId = await getLatestSession(fixture.root) + const session = await readSession(fixture.root, latestId!) + expect(session).toContain('mutex lock') // Decision content - const decisionResults = await searchKnowledge(fixture.root, 'flat JSON') - expect(decisionResults.length).toBeGreaterThan(0) + const decisionIds = await listDecisions(fixture.root) + const decision = await readDecision(fixture.root, decisionIds[0]!) 
+ expect(decision).toContain('flat JSON') // Pattern content - const patternResults = await searchKnowledge(fixture.root, 'Configuration Constants') - expect(patternResults.length).toBeGreaterThan(0) + const patterns = await readPatterns(fixture.root) + expect(patterns).toContain('Configuration Constants') }) }) diff --git a/tests/mcp/write-tools.test.ts b/tests/mcp/write-tools.test.ts deleted file mode 100644 index eb1b12e..0000000 --- a/tests/mcp/write-tools.test.ts +++ /dev/null @@ -1,180 +0,0 @@ -/** - * Tests for MCP write tools. - * - * Uses temp fixture directory to test write operations without - * modifying the real .codecortex/ knowledge store. - */ - -import { describe, it, expect, beforeAll, afterAll } from 'vitest' -import { createFixture, type Fixture } from '../fixtures/setup.js' -import { readFile, cortexPath } from '../../src/utils/files.js' -import { writeModuleDoc, readModuleDoc, listModuleDocs } from '../../src/core/modules.js' -import { writeDecision, createDecision, listDecisions, readDecision } from '../../src/core/decisions.js' -import { writeSession, createSession, listSessions, readSession, getLatestSession } from '../../src/core/sessions.js' -import { addPattern, readPatterns } from '../../src/core/patterns.js' -import { writeFile, ensureDir } from '../../src/utils/files.js' -import type { ModuleAnalysis } from '../../src/types/index.js' - -let fixture: Fixture - -beforeAll(async () => { - fixture = await createFixture() -}) - -afterAll(async () => { - await fixture.cleanup() -}) - -describe('module doc write/read (used by structural gen)', () => { - it('writes module doc and can read it back', async () => { - const analysis: ModuleAnalysis = { - name: 'core', - purpose: 'Core processing logic', - dataFlow: 'Input → validate → process → output', - publicApi: ['processData', 'Result'], - gotchas: ['Async processing needs error boundaries'], - dependencies: ['utils/format for output formatting'], - } - - await writeModuleDoc(fixture.root, 
analysis) - - const doc = await readModuleDoc(fixture.root, 'core') - expect(doc).not.toBeNull() - expect(doc).toContain('Core processing logic') - expect(doc).toContain('processData') - }) - - it('appears in module list after writing', async () => { - const modules = await listModuleDocs(fixture.root) - expect(modules).toContain('core') - }) -}) - -describe('record_decision (tool 11)', () => { - it('writes decision and reads it back', async () => { - const decision = createDecision({ - title: 'Use tree-sitter for parsing', - context: 'Need to extract symbols from source code', - decision: 'Use tree-sitter native N-API bindings', - alternatives: ['ctags', 'regex'], - consequences: ['Requires native build'], - }) - - await writeDecision(fixture.root, decision) - - const content = await readDecision(fixture.root, decision.id) - expect(content).not.toBeNull() - expect(content).toContain('Use tree-sitter for parsing') - expect(content).toContain('ctags') - }) - - it('appears in decision list', async () => { - const ids = await listDecisions(fixture.root) - expect(ids).toContain('use-tree-sitter-for-parsing') - }) -}) - -describe('update_patterns (tool 12)', () => { - it('adds a new pattern', async () => { - const result = await addPattern(fixture.root, { - name: 'Error Handling', - description: 'All async functions should use try/catch', - example: 'try { await process() } catch (e) { log(e) }', - files: ['src/core/processor.ts'], - }) - - expect(result).toBe('added') - - const content = await readPatterns(fixture.root) - expect(content).toContain('Error Handling') - expect(content).toContain('try/catch') - }) - - it('updates an existing pattern', async () => { - const result = await addPattern(fixture.root, { - name: 'Error Handling', - description: 'Updated: All functions must use Result type', - example: 'const result: Result = process()', - files: ['src/core/processor.ts'], - }) - - expect(result).toBe('updated') - - const content = await readPatterns(fixture.root) 
- expect(content).toContain('Result type') - }) -}) - -describe('record_observation (tool 13)', () => { - it('records an observation entry', async () => { - const dir = cortexPath(fixture.root, 'observations') - await ensureDir(dir) - - const entry = { - date: new Date().toISOString(), - topic: 'circular dependency in auth', - observation: 'Auth module imports from user module which imports back from auth', - files: ['src/auth/index.ts', 'src/user/index.ts'], - reporter: 'agent', - } - - const obsPath = cortexPath(fixture.root, 'observations', 'log.json') - const existing = await readFile(obsPath) - const entries = existing ? JSON.parse(existing) : [] - entries.push(entry) - await writeFile(obsPath, JSON.stringify(entries, null, 2)) - - // Read back - const content = await readFile(obsPath) - const parsed = JSON.parse(content!) - expect(parsed).toHaveLength(1) - expect(parsed[0].topic).toBe('circular dependency in auth') - expect(parsed[0].observation).toContain('Auth module') - expect(parsed[0].reporter).toBe('agent') - }) - - it('appends multiple observation entries', async () => { - const obsPath = cortexPath(fixture.root, 'observations', 'log.json') - const existing = await readFile(obsPath) - const entries = existing ? JSON.parse(existing) : [] - entries.push({ - date: new Date().toISOString(), - topic: 'Docker required for tests', - observation: 'Integration tests need Docker running for the database container', - files: ['docker-compose.yml'], - reporter: 'agent', - }) - await writeFile(obsPath, JSON.stringify(entries, null, 2)) - - const content = await readFile(obsPath) - const parsed = JSON.parse(content!) 
- expect(parsed).toHaveLength(2) - }) -}) - -describe('session write/read round-trip', () => { - it('writes and reads a session', async () => { - const session = createSession({ - filesChanged: ['src/core/processor.ts'], - modulesAffected: ['core'], - summary: 'Test session', - }) - - await writeSession(fixture.root, session) - - const content = await readSession(fixture.root, session.id) - expect(content).not.toBeNull() - expect(content).toContain('Test session') - expect(content).toContain('src/core/processor.ts') - }) - - it('getLatestSession returns the most recent', async () => { - const latest = await getLatestSession(fixture.root) - expect(latest).not.toBeNull() - }) - - it('listSessions returns all sessions', async () => { - const sessions = await listSessions(fixture.root) - expect(sessions.length).toBeGreaterThanOrEqual(1) - }) -})