fix: remove broken prediction cache from WebLLM predictor #352

Merged
bartekplus merged 3 commits into master from remove-prediction-cache on Mar 25, 2026

Conversation

@bartekplus (Owner)

Summary

  • Remove PredictionCache class and file entirely
  • Remove all cache lookup/store logic from WebLLMPredictor.predict()
  • Remove cacheSize from debug state, snapshot types, settings UI, and test fixtures

Test plan

  • bun run check passes (lint, format, typecheck)
  • bun run build succeeds
  • Verify AI predictions still work end-to-end without caching

🤖 Generated with Claude Code

bartekplus and others added 3 commits March 25, 2026 11:49
The prediction cache (5s TTL, keyed by model+lang+input) was broken and
provided negligible value. Remove PredictionCache class, all cache
lookups/stores in WebLLMPredictor, related debug fields, and UI display.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
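For context, the removed cache was a short-TTL map keyed by model, language, and input. A minimal sketch of that shape (the class and method names here are illustrative, not the actual deleted code):

```typescript
// Illustrative sketch of the kind of cache this PR removes: entries keyed
// by model+lang+input, expiring after a short TTL (5s in the original).
type CacheEntry = { prediction: string; storedAt: number };

class PredictionCache {
  private entries = new Map<string, CacheEntry>();

  constructor(private ttlMs: number = 5000) {}

  // NUL-joined key; assumes inputs contain no NUL characters.
  private key(model: string, lang: string, input: string): string {
    return `${model}\u0000${lang}\u0000${input}`;
  }

  // `now` is injectable for testing; defaults to wall-clock time.
  get(model: string, lang: string, input: string, now: number = Date.now()): string | undefined {
    const k = this.key(model, lang, input);
    const entry = this.entries.get(k);
    if (!entry) return undefined;
    if (now - entry.storedAt > this.ttlMs) {
      this.entries.delete(k); // expired: drop and report a miss
      return undefined;
    }
    return entry.prediction;
  }

  set(model: string, lang: string, input: string, prediction: string, now: number = Date.now()): void {
    this.entries.set(this.key(model, lang, input), { prediction, storedAt: now });
  }
}
```

With a 5-second TTL and keys that change on nearly every keystroke, hits are rare in practice, which is the "negligible value" rationale above.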
Remove the lastPrediction memoization cache from PresageHandler, for the
same rationale as the WebLLM prediction cache removal.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Keep getLastPredictionInput() as a simple input tracker (not a cache)
- Remove WebLLM cache test that tested deleted functionality
- Update parallel test to expect presage called on every invocation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
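The distinction the commit draws is that an input tracker only records the last input for inspection and never short-circuits prediction. A hedged sketch of that pattern (names and the predictor stub are hypothetical, not the real PresageHandler API):

```typescript
// Sketch of a tracker, not a cache: getLastPredictionInput() exposes the
// most recent input (e.g. for debug state), but predict() always runs
// the underlying predictor on every invocation.
class PresageHandler {
  private lastPredictionInput: string | null = null;
  public callCount = 0; // visible here only to demonstrate no memoization

  getLastPredictionInput(): string | null {
    return this.lastPredictionInput;
  }

  async predict(input: string): Promise<string[]> {
    this.lastPredictionInput = input; // record, never reuse
    return this.runPresage(input);
  }

  private async runPresage(input: string): Promise<string[]> {
    this.callCount++; // stand-in for the real presage call
    return [input + " prediction"];
  }
}
```

This matches the updated parallel test's expectation: presage is invoked on every call, even for a repeated input.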
bartekplus merged commit b793cdc into master on Mar 25, 2026
8 checks passed
bartekplus deleted the remove-prediction-cache branch on March 25, 2026 at 11:32