
feat: add MiniMax LLM provider instrumentation#140

Open
octo-patch wants to merge 1 commit into future-agi:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch octo-patch commented Mar 12, 2026

Summary

Add OpenTelemetry instrumentation for MiniMax, an AI platform providing an OpenAI-compatible API with powerful language models.

What is included:

  • New package: traceai-minimax - Full instrumentation for MiniMax LLM API calls
  • Models supported: MiniMax-M2.5 (204K context) and MiniMax-M2.5-highspeed
  • Features:
    • Sync and async chat completions tracing
    • Streaming response support
    • Function/tool calling instrumentation
    • Token usage tracking (input, output, total)
    • MiniMax-specific attributes (response_id, finish_reason, tool_calls)
  • Provider registry: Added MINIMAX to FiLLMProviderValues enum
  • README: Updated main README with MiniMax in supported frameworks table and compatibility matrix
  • Tests: 19 comprehensive unit tests covering all extractors and utilities
  • Examples: Basic chat, streaming, function calling, and highspeed model usage

Implementation approach:

MiniMax uses the OpenAI-compatible API (https://api.minimax.io/v1), so this integration follows the same pattern as the DeepSeek instrumentor - wrapping OpenAI SDK calls and detecting MiniMax clients by their base_url.
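The detection-by-`base_url` pattern described above can be sketched roughly as follows. This is an illustrative sketch, not the package's actual API: the function name, constant, and the exact host list are assumptions based on the PR description and test plan.

```python
# Hypothetical sketch of MiniMax client detection (names are illustrative).
# Known MiniMax endpoint hosts, per the PR's test plan.
MINIMAX_BASE_URLS = ("api.minimax.io", "api.minimax.chat")

def is_minimax_client(client) -> bool:
    """Return True if an OpenAI SDK client points at a MiniMax endpoint."""
    base_url = getattr(client, "base_url", None)
    if base_url is None:
        return False
    base_url_str = str(base_url).lower()
    return any(host in base_url_str for host in MINIMAX_BASE_URLS)
```

With this in place, the instrumentor can wrap OpenAI SDK calls globally and only emit MiniMax-attributed spans when the client's `base_url` matches.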

Files changed:

| File | Description |
| --- | --- |
| python/frameworks/minimax/ | New MiniMax instrumentation package |
| python/fi_instrumentation/fi_types.py | Added MINIMAX to provider enum |
| README.md | Added MiniMax to supported frameworks |

Test plan

  • All 19 unit tests pass (pytest tests/ -v)
  • Request attributes extraction (model, messages, tools, invocation params)
  • Response attributes extraction (tokens, content, tool calls, finish reason)
  • Client detection via base_url (api.minimax.io, api.minimax.chat)
  • Streaming response chunk accumulation
  • Multi-turn conversation support
  • Function calling flow

@NVJKKartik NVJKKartik self-requested a review March 13, 2026 10:17
- Both international (api.minimax.io) and domestic (api.minimaxi.com) endpoints
"""

__slots__ = (
`__slots__` declares only 3 attributes, but lines 105 and 112 assign `self._original_protect`. This will raise `AttributeError` at runtime when evaluations (Guardrailing) are used. Either add `"_original_protect"` to `__slots__`, or drop the protect assignment entirely, since it is already traced by other instrumentors.
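A minimal repro of the failure mode this comment describes. Class and attribute names here are illustrative, not the actual instrumentor's:

```python
# A class with __slots__ has no instance __dict__, so assigning an
# attribute that is not declared in __slots__ raises AttributeError.
class SlottedInstrumentor:
    # Only three attributes declared, as in the flagged code.
    __slots__ = ("_original_chat", "_original_async_chat", "_tracer")

    def _instrument(self):
        # Raises AttributeError: "_original_protect" is not in __slots__.
        self._original_protect = None

try:
    SlottedInstrumentor()._instrument()
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```

Adding `"_original_protect"` to the `__slots__` tuple makes the assignment valid again.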

else:
self._original_protect = None

def _uninstrument(self, **kwargs: Any) -> None:
Same issue here: `_uninstrument` doesn't restore the protect wrapper. I would suggest removing the protect wrapper itself.
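If the protect wrapper is kept, instrument/uninstrument need to be symmetric: whatever `_instrument` saves, `_uninstrument` must restore. A generic sketch of that save/restore pattern (all names illustrative, not the project's code):

```python
# Generic monkey-patch save/restore sketch: _instrument stashes the original
# callable before wrapping it; _uninstrument puts it back, otherwise the
# wrapper leaks after uninstrumentation.
class Instrumentor:
    def __init__(self, target):
        self._target = target        # object whose method we wrap
        self._original_call = None
        self.spans = 0               # stand-in for emitted spans

    def _instrument(self):
        self._original_call = self._target.call
        def wrapper(*args, **kwargs):
            self.spans += 1          # stand-in for span creation
            return self._original_call(*args, **kwargs)
        self._target.call = wrapper

    def _uninstrument(self):
        # Restore the saved original so the wrapper does not persist.
        if self._original_call is not None:
            self._target.call = self._original_call
            self._original_call = None
```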

content = message.get("content", "")
if content:
yield SpanAttributes.OUTPUT_VALUE, content

Line 101 yields OUTPUT_VALUE with the readable content, then line 104 unconditionally yields OUTPUT_VALUE again with the raw JSON dump. Since `set_attribute` overwrites, the human-readable content is silently lost.
Either remove line 104 or use a different attribute key for the raw output (I would suggest checking the semantic conventions).

"tool_calls": list(self._tool_calls.values()),
"finish_reason": self._finish_reason
}
yield SpanAttributes.OUTPUT_VALUE, safe_json_dumps(output_summary)
Same issue here: the second OUTPUT_VALUE yield overwrites the first, losing the very content the function is trying to extract.
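The overwrite behavior is easy to demonstrate with a stand-in for `span.set_attribute` (the key names below are illustrative, not the project's actual semantic conventions):

```python
# Span attributes behave like last-write-wins: setting the same key twice
# keeps only the second value.
attributes = {}

def set_attribute(key, value):  # stand-in for span.set_attribute
    attributes[key] = value

set_attribute("output.value", "Hello!")              # human-readable content
set_attribute("output.value", '{"finish_reason": "stop"}')  # raw dump overwrites it
print(attributes["output.value"])  # the readable content is gone

# Fix: give the raw payload its own key so both values survive.
set_attribute("output.value", "Hello!")
set_attribute("output.raw", '{"finish_reason": "stop"}')
```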

MINIMAX_BASE_URLS = [
"api.minimax.io",
"api.minimaxi.com",
]
Consider using an Enum here for better maintainability.
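One way the Enum suggestion could look (member names are my own, illustrative choice):

```python
# An Enum gives each endpoint a named, typo-proof identity instead of
# bare strings scattered through the code.
from enum import Enum

class MiniMaxBaseURL(str, Enum):
    INTERNATIONAL = "api.minimax.io"
    DOMESTIC = "api.minimaxi.com"

# Membership checks no longer need magic strings:
known_hosts = {member.value for member in MiniMaxBaseURL}
```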


if base_url is not None:
base_url_str = str(base_url).lower()
return any(url in base_url_str for url in MINIMAX_BASE_URLS)
Not verified, but if MiniMax allows custom deployments on custom domains, then substring matching could be useful here. If that isn't the case, please consider using an Enum and strict matching for safety.
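The safety concern is that substring matching also accepts hosts that merely contain a MiniMax domain, e.g. `api.minimax.io.evil.com`. A strict-matching sketch that parses the hostname and compares exactly (function name and host set are illustrative):

```python
# Strict base_url matching: parse out the hostname and compare it exactly
# against an allow-list, rather than substring-matching the whole URL.
from urllib.parse import urlparse

ALLOWED_HOSTS = frozenset({"api.minimax.io", "api.minimaxi.com"})

def is_minimax_base_url(base_url) -> bool:
    host = urlparse(str(base_url).lower()).hostname
    return host in ALLOWED_HOSTS
```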

@NVJKKartik NVJKKartik left a comment
Hey @octo-patch, thank you for contributing to the repo. The MiniMax provider is a great addition. The implementation is clean and follows the existing patterns well; I have flagged a few issues on the specific lines. Please address them and I will be happy to approve and publish.

Add OpenTelemetry instrumentation for MiniMax (https://www.minimax.io/),
which provides an OpenAI-compatible API. This integration supports:

- MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Sync and async chat completions
- Streaming responses
- Function/tool calling
- Token usage tracking

The implementation follows the same pattern as the existing DeepSeek
integration, detecting MiniMax clients by their base_url
(api.minimax.io) and wrapping OpenAI SDK calls accordingly.

Changes:
- New package: python/frameworks/minimax/ (traceai_minimax)
- Added MINIMAX to FiLLMProviderValues enum
- Updated README.md with MiniMax in supported frameworks
- Includes comprehensive tests (19 passing) and usage examples
@octo-patch octo-patch force-pushed the feature/add-minimax-provider branch from ac507f6 to f91e16c Compare March 15, 2026 14:13
@octo-patch octo-patch changed the title feat: Add MiniMax provider instrumentation support feat: add MiniMax LLM provider instrumentation Mar 15, 2026