feat: replace litellm with nanollm - 108x less code, 1 dependency #1871
Open
hafezparast wants to merge 9 commits into unclecode:main from
Conversation
Complete migration from unclecode-litellm to nanollm. Zero litellm references remain in the crawl4ai source code.

Version: 0.8.7a1 (pre-release for nanollm integration testing)

Changes:
- pyproject.toml, requirements.txt: swap unclecode-litellm for nanollm
- crawl4ai/utils.py: all litellm imports → nanollm (completion, acompletion, batch_completion, aembedding, RateLimitError, drop_params)
- crawl4ai/cli.py: litellm import → nanollm, provider docs URLs updated
- crawl4ai/legacy/llmtxt.py: litellm imports → nanollm, set_verbose
- crawl4ai/__version__.py: bump to 0.8.7a1

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
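The `drop_params` flag mentioned above is the compatibility switch that lets one call signature work across providers: when a model (e.g. the O-series) rejects parameters such as `temperature`, unsupported keys are dropped from the request instead of raising. The sketch below illustrates that pattern only; `SUPPORTED_PARAMS` and `prepare_request` are hypothetical names for illustration, not nanollm's actual internals.

```python
# Hedged sketch of the drop_params behavior the migration relies on.
# The parameter tables here are illustrative, not authoritative.
SUPPORTED_PARAMS = {
    "o1": {"model", "messages", "max_completion_tokens"},
    "gpt-4o": {"model", "messages", "max_tokens", "temperature", "top_p"},
}

def prepare_request(model: str, drop_params: bool = False, **params):
    """Build a request payload, optionally dropping params the model rejects."""
    allowed = SUPPORTED_PARAMS.get(model)
    if allowed is None or not drop_params:
        # Unknown model or flag off: pass everything through unchanged.
        return {"model": model, **params}
    # Flag on: silently filter out unsupported keys instead of erroring.
    return {"model": model, **{k: v for k, v in params.items() if k in allowed}}
```

With `drop_params=True`, a `temperature` argument sent to an O-series model is filtered out; the same call against gpt-4o keeps it.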
Use hafezparast/nanollm for easier access and independent control during testing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reflects nanollm dependency now pointing to hafezparast fork. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pin nanollm dependency to tagged release v0.1.0 instead of branch name for reproducible installs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pin to commit 35a0c4b (v0.1.0) which includes the completion_tokens_details and prompt_tokens_details fix. Using commit hash instead of tag ensures pip won't serve a stale cached version. Bump to 0.8.7a4. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
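The tag-vs-commit pinning trade-off described above looks roughly like this in pyproject.toml (the repository URL is assumed from the PR discussion; the tag v0.1.0 and commit 35a0c4b are the ones named in the commits):

```toml
[project]
dependencies = [
    # Tag pin: readable, but pip may serve a stale cached build if the tag moved
    # "nanollm @ git+https://github.com/hafezparast/nanollm@v0.1.0",

    # Commit pin: immutable, guarantees a reproducible install
    "nanollm @ git+https://github.com/hafezparast/nanollm@35a0c4b",
]
```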
nanollm v0.1.1 wraps completion_tokens_details and prompt_tokens_details in _AttrDict so crawl4ai's .__dict__ access pattern works correctly. Bump to 0.8.7a5. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
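The `_AttrDict` wrapper can be sketched as follows. This is a hypothetical reconstruction of the idea described in the commit (a dict whose items also appear in the instance `__dict__`), not nanollm's actual implementation:

```python
class _AttrDict(dict):
    """Dict whose items are mirrored as instance attributes, so consumers
    that read usage.completion_tokens_details.__dict__ (as crawl4ai does)
    keep working alongside plain dict-style access."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Mirror every key into __dict__ so dict lookup, attribute lookup,
        # and raw .__dict__ reads all succeed.
        self.__dict__.update(self)

# A token-details payload wrapped this way supports all three access patterns:
details = _AttrDict({"reasoning_tokens": 128, "audio_tokens": 0})
```

Here `details["reasoning_tokens"]`, `details.reasoning_tokens`, and `details.__dict__["reasoning_tokens"]` all return the same value, which is the access pattern the v0.1.1 fix targets.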
Point to hafezparast/nanollm-core (Approach 1: general-purpose litellm replacement) instead of hafezparast/nanollm (Approach 2: fork). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Point dependency to hafezparast/nanollm-approach1 v0.2.0, built by AST-analyzing litellm's 544K-line codebase, extracting the core completion subgraph, and rewriting it with multimodal/vision support. 3,731 lines, 1 dep (httpx), 605 tests, 7 adapters, 25+ providers. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Combines Approach 1 (httpx, module functions, 600+ tests) with Approach 2 (class-based providers, NanoLLM client, built-in retry, structured output, thinking/reasoning). 5,026 lines, 1 dep (httpx), 609 tests, 25 providers. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
Replaces the litellm dependency (544K lines, 61 packages, 151 MB installed) with nanollm (5K lines, 1 dep, 5.5 MB installed). Every `from litellm import ...` changed to `from nanollm import ...`.

What changed
- pyproject.toml: `litellm>=1.53.1` → `nanollm @ git+...`
- `litellm` → `nanollm` imports (utils.py, cli.py, extraction_strategy.py, legacy/llmtxt.py)
- `nanollm.drop_params = True` added where crawl4ai calls completion (handles O-series/GPT-5 compat)

Why
litellm is 544K lines with 61 transitive dependencies. crawl4ai uses exactly 4 functions from it:
`completion`, `acompletion`, `batch_completion`, `aembedding`. NanoLLM provides those same functions (plus more) in 5K lines with 1 dependency (httpx, which crawl4ai already uses).

Test plan
- Zero `litellm` references in crawl4ai source

🤖 Generated with Claude Code