RSPEED-2691: Consolidate rlsapi_v1 unit tests to reduce duplication #1366
major wants to merge 5 commits into lightspeed-core:main
Conversation
…t fixtures Signed-off-by: Major Hayden <major@redhat.com>
Add pylint disables for too-many-arguments and too-many-positional-arguments to handle parameterized test functions that require 6-7 fixture parameters. These are necessary for the consolidated test suite with shared fixtures.

Signed-off-by: Major Hayden <major@redhat.com>
Walkthrough

New pytest fixtures introduced in conftest.py for mocking FastAPI Request and BackgroundTasks objects. Existing endpoint tests refactored to use these shared fixtures instead of local helpers, with multiple tests consolidated into parametrized versions.

Changes
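A rough sketch of the kind of shared mock helpers the walkthrough describes is shown below. In a real conftest.py each factory would be wrapped in `@pytest.fixture`; the names and attributes here are illustrative assumptions, not taken from the PR.

```python
# Hypothetical sketch of shared mock factories of the kind described above.
# In conftest.py each would be exposed via @pytest.fixture; names and the
# set of attributes are illustrative, not the PR's actual code.
from unittest.mock import Mock


def make_mock_request(**overrides):
    """Factory for a mock FastAPI Request-like object; callers may override attributes."""
    request = Mock()
    request.headers = overrides.get("headers", {})
    request.state = overrides.get("state", Mock())
    return request


def make_mock_background_tasks():
    """Factory for a mock BackgroundTasks-like object with a recordable add_task."""
    tasks = Mock()
    tasks.add_task = Mock()
    return tasks
```

Exposing these as factory fixtures lets each test request a fresh mock (or customize one) without repeating local helper definitions in every test module.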
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/app/endpoints/test_rlsapi_v1.py`:
- Around line 727-785: Add a new explicit branch to
test_infer_extract_token_usage_on_failure_depends_on_verbose that covers
verbose_enabled=True but the response is None: when verbose is True, mock the
response path used by infer_endpoint (e.g., patch
"app.endpoints.rlsapi_v1.retrieve_simple_response" or the same async responder
used by _setup_responses_mock to return None) and ensure extract_token_usage
(patched via "app.endpoints.rlsapi_v1.extract_token_usage") is not called; keep
existing failure branches intact so the test asserts the response-is-not-None
gating in infer_endpoint around extract_token_usage.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 20f1d0d2-10cc-42ea-ba39-fa7a9c1466aa
📒 Files selected for processing (2)

- tests/unit/app/endpoints/conftest.py
- tests/unit/app/endpoints/test_rlsapi_v1.py
tests/unit/app/endpoints/test_rlsapi_v1.py — the consolidated test after refactoring:

```python
@pytest.mark.parametrize(
    ("verbose_enabled", "expect_extract_called"),
    [
        pytest.param(True, True, id="verbose_calls_extract"),
        pytest.param(False, False, id="non_verbose_skips_extract"),
    ],
)
async def test_infer_extract_token_usage_on_failure_depends_on_verbose(
    mocker: MockerFixture,
    mock_configuration: AppConfig,
    mock_auth_resolvers: None,
    mock_request_factory: Callable[..., Any],
    mock_background_tasks: Any,
    verbose_enabled: bool,
    expect_extract_called: bool,
) -> None:
    """Verify extract_token_usage is called on failure only when verbose is enabled."""
    _setup_config_mock(mocker, mock_configuration, verbose_enabled=verbose_enabled)

    mock_usage: Any = None
    if verbose_enabled:
        mock_response = mocker.Mock()
        mock_response.output = [_create_mock_response_output(mocker, "Response")]
        mock_usage = mocker.Mock()
        mock_usage.input_tokens = 50
        mock_usage.output_tokens = 25
        mock_response.usage = mock_usage
        _setup_responses_mock(mocker, mocker.AsyncMock(return_value=mock_response))
        mocker.patch(
            "app.endpoints.rlsapi_v1.extract_text_from_response_items",
            side_effect=RuntimeError("text extraction failed"),
        )
    else:
        mocker.patch(
            "app.endpoints.rlsapi_v1.retrieve_simple_response",
            side_effect=RuntimeError("retrieval failed"),
        )
    mock_extract = mocker.patch("app.endpoints.rlsapi_v1.extract_token_usage")

    with pytest.raises(RuntimeError):
        infer_request = RlsapiV1InferRequest(question="How do I list files?")
        if verbose_enabled:
            infer_request.include_metadata = True
        await infer_endpoint(
            infer_request=infer_request,
            request=mock_request_factory(),
            background_tasks=mock_background_tasks,
            auth=MOCK_AUTH,
        )

    if expect_extract_called:
        mock_extract.assert_called_once()
        call_args = mock_extract.call_args
        assert call_args[0][0] == mock_usage
        assert call_args[0][1] == "openai/gpt-4-turbo"
    else:
        mock_extract.assert_not_called()
```
Add an explicit case for verbose_enabled=True where the response is None.

extract_token_usage is gated on both conditions in src/app/endpoints/rlsapi_v1.py (lines 485-497): verbose mode must be enabled and the response must not be None. The current cases vary only the verbose mode and the failure path, and do not directly assert the behavior of the response-is-not-None branch, so a regression there could slip through.
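The double gate the reviewer refers to can be pictured roughly as follows. This is a simplified illustration of the described condition, not the actual rlsapi_v1.py code; the function and parameter names are hypothetical.

```python
# Simplified illustration of the gating described in the review comment:
# token usage is extracted only when verbose mode is on AND a response
# object was actually obtained. Names are illustrative placeholders.
def maybe_extract_token_usage(verbose, response, extract):
    """Call extract only when both gating conditions hold; otherwise return None."""
    if verbose and response is not None:
        return extract(response.usage)
    return None
```

A test matrix that never exercises the `verbose and response is None` combination leaves the second half of this conjunction unasserted, which is exactly the gap the suggested parametrize case closes.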
💡 Suggested test expansion

```diff
 @pytest.mark.parametrize(
-    ("verbose_enabled", "expect_extract_called"),
+    ("verbose_enabled", "include_metadata", "expect_extract_called"),
     [
-        pytest.param(True, True, id="verbose_calls_extract"),
-        pytest.param(False, False, id="non_verbose_skips_extract"),
+        pytest.param(True, True, True, id="verbose_calls_extract"),
+        pytest.param(False, True, False, id="non_verbose_skips_extract"),
+        pytest.param(True, False, False, id="verbose_but_no_response_skips_extract"),
     ],
 )
 async def test_infer_extract_token_usage_on_failure_depends_on_verbose(
@@
-    verbose_enabled: bool,
+    verbose_enabled: bool,
+    include_metadata: bool,
     expect_extract_called: bool,
 ) -> None:
@@
-    if verbose_enabled:
+    if verbose_enabled and include_metadata:
         mock_response = mocker.Mock()
         mock_response.output = [_create_mock_response_output(mocker, "Response")]
@@
     with pytest.raises(RuntimeError):
         infer_request = RlsapiV1InferRequest(question="How do I list files?")
-        if verbose_enabled:
-            infer_request.include_metadata = True
+        infer_request.include_metadata = include_metadata
         await infer_endpoint(
```
Description

Pylint flags test_rlsapi_v1.py with too-many-lines (1,052 lines, 38 test functions). Several test groups were copy-paste duplicates differing only in a single parameter. This PR consolidates them using pytest parameterization and extracts shared mock helpers into a new conftest.py.

Changes:

- Moved _create_mock_request and _create_mock_background_tasks into tests/unit/app/endpoints/conftest.py as factory fixtures

Result: 1,052 lines reduced to 1,018; 38 test functions reduced to 33; 43 collected tests unchanged; 90% coverage preserved with identical missing lines.

Type of change
Tools used to create PR
Related Tickets & Documents
Checklist before requesting a review
Testing
Coverage before and after is identical.
Integration tests (17 passed) are unaffected.
uv run make verify passes clean with a pylint score of 10.00/10.