Implement support for Ollama AI provider #77
Status: Open
Labels: ai (AI functionalities, prompts, and integration with LLM models), enhancement (Improvement or extension of an existing functionality), feature (New features), good first issue (Good for newcomers)
Problem Statement
I am currently limited to cloud-based AI providers, which raises data-privacy concerns and incurs usage costs. I want to be able to use local LLMs to generate commit messages without sending code snippets to external servers.
Proposed Solution
I propose implementing support for Ollama as a new AI provider. This integration should allow users to configure MateCommit to communicate with a local Ollama instance (usually running on localhost:11434).
The implementation needs to add a new provider that implements the existing AIProvider interface.
Alternatives Considered
I considered using generic HTTP bindings for local servers, but a dedicated Ollama provider ensures better default settings and easier configuration for end users.
Additional Context
This is a great entry point for contributors looking to work with AI integrations. The AIProvider interface is already established, so this task primarily involves implementing the specific API calls for Ollama.