LCORE-716: integration tests for conversation management endpoints #1362
anik120 wants to merge 1 commit into lightspeed-core:main from Conversation
This PR introduces comprehensive integration tests for conversation management endpoints in
both v1 (database + Llama Stack) and v2 (cache-based) APIs.
**Test Coverage:**
- List conversations (GET `/v{1,2}/conversations`)
- Get conversation details (GET `/v{1,2}/conversations/{id}`)
- Delete conversation (DELETE `/v{1,2}/conversations/{id}`)
- Update conversation (PUT `/v{1,2}/conversations/{id}`)
**Scenarios Tested:**
- Success paths with real database/cache integration
- Error handling (invalid IDs, not found, connection errors, API errors)
- Edge cases (empty lists, non-existent resources, cache unavailable)
- Special features (tool calls, turn metadata, multi-turn conversations)
**Notes**
- Tests use real database sessions (SQLite in-memory) and cache instances for true integration testing
- Only external Llama Stack client is mocked to avoid external dependencies
Signed-off-by: Anik Bhattacharjee <anbhatta@redhat.com>
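The testing approach described in the notes (real in-memory database, only the external client mocked) can be sketched roughly as follows. The table layout and names are illustrative, not the project's actual schema or fixtures:

```python
import sqlite3
from unittest.mock import AsyncMock


def make_db() -> sqlite3.Connection:
    # Real in-memory SQLite: real SQL executes, nothing persists between tests
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE user_conversation ("
        "id TEXT PRIMARY KEY, user_id TEXT, topic_summary TEXT)"
    )
    return conn


def make_llama_stack_client() -> AsyncMock:
    # Only the external boundary is mocked, so tests need no network access
    client = AsyncMock()
    client.conversations.items.list.return_value = []
    return client


# Seed a real row, then query it back through real SQL
conn = make_db()
conn.execute(
    "INSERT INTO user_conversation VALUES (?, ?, ?)",
    ("conv-123", "user-1", "Test conversation"),
)
rows = conn.execute(
    "SELECT id FROM user_conversation WHERE user_id = ?", ("user-1",)
).fetchall()
print(rows)  # [('conv-123',)]
```

The payoff of this split is that database and cache behavior is exercised for real while the suite stays fast and hermetic.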
Walkthrough
Two new integration test modules added.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/integration/endpoints/test_conversations_v1_integration.py (1)
548-624: Assert the turn metadata this test seeds. Right now this only proves `chat_history` is present. If `provider`, `model`, or the turn timestamps stopped being returned, the test would still pass.

Suggested assertions:

```diff
 # Verify response includes turn metadata
 assert response.conversation_id == TEST_CONVERSATION_ID
 assert response.chat_history is not None
+assert len(response.chat_history) == 1
+
+turn = response.chat_history[0]
+assert turn.provider == "test-provider"
+assert turn.model == "test-model"
+assert turn.started_at is not None
+assert turn.completed_at is not None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_conversations_v1_integration.py` around lines 548 - 624: The test_get_conversation_with_turns_metadata currently only checks response.conversation_id and that response.chat_history exists; update it to assert the seeded turn metadata is returned by get_conversation_endpoint_handler by verifying response.chat_history contains a turn for TEST_CONVERSATION_ID with the expected provider ("test-provider"), model ("test-model"), and the timestamps (started_at and completed_at) match or are present on the returned turn objects; ensure you reference the response object from get_conversation_endpoint_handler and its chat_history items when adding these assertions so the test will fail if provider, model, or turn timestamps stop being returned.

tests/integration/endpoints/test_conversations_v2_integration.py (1)
90-127: Keep `CacheEntry` fixtures aligned with the model contract.
`models/cache_entry.py` only defines the query/response/provider/model/timestamp/tool fields. Passing `conversation_id`, `user_id`, and `topic_summary` into `CacheEntry(...)` makes these fixtures depend on data the cache entry model does not own, which is easy to misread as "topic summary is being persisted here" when it is not.

Also applies to: 516-550
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/endpoints/test_conversations_v2_integration.py` around lines 90 - 127, The test fixture create_test_cache_entry is constructing a CacheEntry with fields (conversation_id, user_id, topic_summary) that are not part of the CacheEntry contract in models/cache_entry.py; update create_test_cache_entry to only pass the model-owned fields (query, response, provider, model, referenced_documents, tool_calls, tool_results, started_at, completed_at) when instantiating CacheEntry and remove conversation_id, user_id, and topic_summary from the CacheEntry(...) call (if tests need those values keep them as separate local variables or return a tuple/dict alongside the CacheEntry), ensuring all other usages of create_test_cache_entry are adjusted accordingly.
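The refactor this prompt describes can be sketched as follows. The `CacheEntry` here is a plain dataclass stand-in built from the field list quoted in the review (the real model in `models/cache_entry.py` is a Pydantic class), and the fixture values are illustrative:

```python
from dataclasses import dataclass, field

# Stand-in for models/cache_entry.py; field list taken from the review
@dataclass
class CacheEntry:
    query: str
    response: str
    provider: str
    model: str
    started_at: str
    completed_at: str
    referenced_documents: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)
    tool_results: list = field(default_factory=list)


def create_test_cache_entry(query: str = "test query"):
    # conversation_id and user_id are cache keys, not CacheEntry fields,
    # so return them alongside the entry instead of passing them into
    # CacheEntry(...)
    conversation_id = "conv-123"
    user_id = "user-1"
    entry = CacheEntry(
        query=query,
        response="test response",
        provider="test-provider",
        model="test-model",
        started_at="2024-01-01T00:00:00Z",
        completed_at="2024-01-01T00:00:01Z",
    )
    return conversation_id, user_id, entry


conv_id, user_id, entry = create_test_cache_entry()
```

Callers that previously read `entry.conversation_id` would unpack the tuple instead, keeping the cache-entry model's contract intact.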
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 552e05f7-46cf-405c-a40e-eeab62ea6e0b
📒 Files selected for processing (2)
tests/integration/endpoints/test_conversations_v1_integration.py
tests/integration/endpoints/test_conversations_v2_integration.py
```python
async def test_get_conversation_handles_api_status_error(
    test_config: AppConfig,
    mock_llama_stack_client: AsyncMockType,
    mock_request_with_auth: Request,
    test_auth: AuthTuple,
    patch_db_session: Session,
    mocker: MockerFixture,
) -> None:
    """Test that get conversation handles Llama Stack API status errors.

    This integration test verifies:
    - API status errors from Llama Stack are handled
    - HTTPException is raised with 500 status code
    - Error handling works through the full stack

    Parameters:
        test_config: Test configuration
        mock_llama_stack_client: Mocked Llama Stack client
        mock_request_with_auth: FastAPI request with full authorization
        test_auth: noop authentication tuple
        patch_db_session: Test database session
        mocker: pytest-mock fixture
    """
    _ = test_config

    user_id, _, _, _ = test_auth

    # Create conversation in database
    conversation = UserConversation(
        id=TEST_CONVERSATION_ID,
        user_id=user_id,
        last_used_model="test-model",
        last_used_provider="test-provider",
        topic_summary="Test conversation",
        message_count=1,
        created_at=datetime.now(UTC),
    )
    patch_db_session.add(conversation)
    patch_db_session.commit()

    # Configure mock to raise API status error
    mock_llama_stack_client.conversations.items.list.side_effect = APIStatusError(
        message="Not found",
        response=mocker.Mock(status_code=404),
        body=None,
    )

    with pytest.raises(HTTPException) as exc_info:
        await get_conversation_endpoint_handler(
            request=mock_request_with_auth,
            conversation_id=TEST_CONVERSATION_ID,
            auth=test_auth,
        )

    # Verify error details - APIStatusError from items.list is mapped to 500
    assert exc_info.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
    assert isinstance(exc_info.value.detail, dict)
```
Assert the GET API-status error the handler actually returns.
src/app/endpoints/conversations_v1.py:271-283 maps APIStatusError to a not-found response, so this test is asserting the wrong contract. With the current setup (status_code=404), the handler should raise 404 here, not 500.
Suggested fix:

```diff
- This integration test verifies:
- - API status errors from Llama Stack are handled
- - HTTPException is raised with 500 status code
- - Error handling works through the full stack
+ This integration test verifies:
+ - API status errors from Llama Stack are handled
+ - HTTPException is raised with 404 status code
+ - Error handling works through the full stack
@@
- # Verify error details - APIStatusError from items.list is mapped to 500
- assert exc_info.value.status_code == status.HTTP_500_INTERNAL_SERVER_ERROR
+ # Verify error details - APIStatusError is mapped to 404
+ assert exc_info.value.status_code == status.HTTP_404_NOT_FOUND
```

🤖 Prompt for AI Agents
+ assert exc_info.value.status_code == status.HTTP_404_NOT_FOUND🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration/endpoints/test_conversations_v1_integration.py` around
lines 488 - 544, The test asserts that an APIStatusError from
mock_llama_stack_client.conversations.items.list should map to a 500, but
get_conversation_endpoint_handler (see conversations_v1.py mapping around the
APIStatusError handling) currently maps APIStatusError with
response.status_code=404 to a 404; update the test to expect HTTP 404 instead of
500 and verify the error detail shape accordingly (i.e., assert
exc_info.value.status_code == status.HTTP_404_NOT_FOUND and that
exc_info.value.detail is the expected dict/structure).
- Test mocks: `conversations.items.list.side_effect = APIStatusError(...)`
- Endpoint calls: `get_all_conversation_items(client, llama_stack_conv_id)` (line 246)
- Inside `get_all_conversation_items()` (`utils/conversations.py:512-514`):
  - `APIStatusError` is caught
  - Converted to `HTTPException(500)` via `InternalServerErrorResponse.generic()`
  - `HTTPException` propagates back to the endpoint
- Endpoint's `except APIStatusError` (lines 278-283) is NEVER reached because the exception is already an `HTTPException`, not `APIStatusError`

The test is CORRECT as-is - it should expect HTTP 500.
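The exception flow described above can be reduced to a small runnable sketch. The class and function names mirror the thread (`APIStatusError`, `get_all_conversation_items`), but every body here is an illustrative stand-in, not the project's real code:

```python
class APIStatusError(Exception):
    """Stand-in for the Llama Stack client error type."""


class HTTPException(Exception):
    """Stand-in for fastapi.HTTPException."""

    def __init__(self, status_code: int, detail: dict):
        self.status_code = status_code
        self.detail = detail


def get_all_conversation_items(fetch):
    try:
        return fetch()
    except APIStatusError:
        # The helper converts the client error before the endpoint sees it
        raise HTTPException(500, {"cause": "internal server error"})


def get_conversation_endpoint_handler(fetch):
    try:
        return get_all_conversation_items(fetch)
    except APIStatusError:
        # Never reached: the helper already re-raised as HTTPException
        raise HTTPException(404, {"cause": "conversation not found"})


def failing_fetch():
    raise APIStatusError("Not found")


try:
    get_conversation_endpoint_handler(failing_fetch)
except HTTPException as exc:
    caught = exc

print(caught.status_code)  # 500, not 404
```

Because the inner helper swaps the exception type, the endpoint's own `except APIStatusError` branch is dead for this path, which is why the test's 500 assertion is the correct contract.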
🧩 Analysis chain
Let me verify this by checking the actual code paths.
🏁 Script executed:

```shell
#!/bin/bash
# Check get_all_conversation_items in utils/conversations.py
rg -n "get_all_conversation_items|APIStatusError|InternalServerErrorResponse" --type py src/utils/conversations.py -A 5 -B 2
echo "---"
# Check the endpoint handler in conversations_v1.py around lines 240-290
sed -n '230,295p' src/app/endpoints/conversations_v1.py
```

Length of output: 3947
@anik120, you're absolutely right; I apologize for the incorrect review. The code confirms your explanation:

- The mock raises `APIStatusError` on `conversations.items.list`.
- `get_all_conversation_items()` (lines 512-514 of `utils/conversations.py`) catches `APIStatusError` before it ever reaches the endpoint and converts it to `HTTPException(500)` via `InternalServerErrorResponse.generic()`.
- The endpoint's own `except APIStatusError` block (lines 278-283 of `conversations_v1.py`) only catches `APIStatusError`, not `HTTPException`, so it's never reached.
- The `HTTPException(500)` propagates directly to the caller.

The test asserting `HTTP_500_INTERNAL_SERVER_ERROR` is correct. My original suggestion was wrong because I didn't account for the intermediate conversion in `get_all_conversation_items()`.
🧠 Learnings used
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-02T16:38:53.776Z
Learning: Applies to src/**/*.py : Handle `APIConnectionError` from Llama Stack in API operations
tisnik
left a comment
Please create a 1st PR with refactoring of the existing fixtures; then these tests will be a bit shorter. Then we can look at how to make these tests even shorter by refactoring common patterns out of them.
```python
@pytest.fixture(name="mock_llama_stack_client")
def mock_llama_stack_client_fixture(
```
Would be nice to refactor this out, as it is used in other tests too; you could make a module with this fixture.
```python
    yield mock_client


@pytest.fixture(name="patch_db_session", autouse=True)
```
Again, this can probably be refactored. In this case, it needs to be decided whether `autouse` can still be used.
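One way to act on both comments is to move the shared fixtures into a `tests/integration/conftest.py`. The layout below is a hypothetical sketch (file placement, helper names, and the mocked attribute chain are assumptions); whether `patch_db_session` keeps `autouse=True` is the open question the reviewer raises:

```python
import pytest
from unittest.mock import AsyncMock

# Hypothetical tests/integration/conftest.py: a shared home for fixtures
# currently duplicated across the v1 and v2 conversation test modules.


def _make_mock_llama_stack_client() -> AsyncMock:
    # Only the external Llama Stack client is faked; attribute access on
    # AsyncMock auto-creates the conversations.items.list chain
    client = AsyncMock()
    client.conversations.items.list.return_value = []
    return client


@pytest.fixture(name="mock_llama_stack_client")
def mock_llama_stack_client_fixture():
    yield _make_mock_llama_stack_client()


# Note: once patch_db_session moves here, autouse=True would apply it to
# every test under tests/integration/, so an explicit opt-in per test is
# the more conservative choice.

client = _make_mock_llama_stack_client()
```

Conftest fixtures are discovered automatically by pytest for all tests in the directory tree, so the test modules would simply drop their local copies.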
```python
    # Verify response includes turn metadata
    assert response.conversation_id == TEST_CONVERSATION_ID
    assert response.chat_history is not None
```
add asserts for history content?
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
All 37 integration tests introduced here pass in 0.57s:
```
$ uv run pytest tests/integration/endpoints/test_conversations_v1_integration.py tests/integration/endpoints/test_conversations_v2_integration.py -v --tb=short
============================= test session starts ==============================
platform darwin -- Python 3.13.7, pytest-9.0.2, pluggy-1.6.0 -- /Users/anbhatta/go/src/github.com/lightspeed/core/lightspeed-stack/.venv/bin/python3
cachedir: .pytest_cache
benchmark: 5.2.3 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /Users/anbhatta/go/src/github.com/lightspeed/core/lightspeed-stack
configfile: pyproject.toml
plugins: anyio-4.12.1, benchmark-5.2.3, mock-3.15.1, asyncio-1.3.0, cov-7.0.0
asyncio: mode=Mode.AUTO, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collecting ... collected 37 items

tests/integration/endpoints/test_conversations_v1_integration.py::test_list_conversations_returns_user_conversations PASSED [  2%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_list_conversations_returns_empty_list_for_new_user PASSED [  5%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_list_conversations_returns_multiple_conversations PASSED [  8%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_get_conversation_returns_chat_history PASSED [ 10%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_get_conversation_invalid_id_format_returns_400 PASSED [ 13%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_get_conversation_not_found_returns_404 PASSED [ 16%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_get_conversation_handles_connection_error PASSED [ 18%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_get_conversation_handles_api_status_error PASSED [ 21%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_get_conversation_with_turns_metadata PASSED [ 24%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_delete_conversation_deletes_from_database_and_llama_stack PASSED [ 27%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_delete_conversation_invalid_id_format_returns_400 PASSED [ 29%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_delete_conversation_handles_connection_error PASSED [ 32%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_delete_conversation_handles_not_found_in_llama_stack PASSED [ 35%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_delete_conversation_non_existent_returns_success PASSED [ 37%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_update_conversation_updates_topic_summary PASSED [ 40%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_update_conversation_invalid_id_format_returns_400 PASSED [ 43%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_update_conversation_not_found_returns_404 PASSED [ 45%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_update_conversation_handles_connection_error PASSED [ 48%]
tests/integration/endpoints/test_conversations_v1_integration.py::test_update_conversation_handles_api_status_error PASSED [ 51%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_list_conversations_returns_user_conversations PASSED [ 54%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_list_conversations_returns_empty_list_for_new_user PASSED [ 56%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_list_conversations_filters_by_user_id PASSED [ 59%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_list_conversations_handles_cache_unavailable PASSED [ 62%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_get_conversation_returns_chat_history PASSED [ 64%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_get_conversation_invalid_id_format_returns_400 PASSED [ 67%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_get_conversation_not_found_returns_404 PASSED [ 70%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_get_conversation_handles_cache_unavailable PASSED [ 72%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_get_conversation_with_tool_calls PASSED [ 75%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_delete_conversation_removes_from_cache PASSED [ 78%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_delete_conversation_invalid_id_format_returns_400 PASSED [ 81%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_delete_conversation_non_existent_returns_success PASSED [ 83%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_delete_conversation_handles_cache_unavailable PASSED [ 86%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_update_conversation_updates_topic_summary PASSED [ 89%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_update_conversation_invalid_id_format_returns_400 PASSED [ 91%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_update_conversation_not_found_returns_404 PASSED [ 94%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_update_conversation_handles_cache_unavailable PASSED [ 97%]
tests/integration/endpoints/test_conversations_v2_integration.py::test_update_conversation_with_multiple_turns PASSED [100%]
============================== 37 passed in 0.57s ==============================
```

Summary by CodeRabbit