chore: Bump FastMCP to v3.x and make the necessary changes to support it#425

Open

neelay-aign wants to merge 3 commits into main from task/changes-for-fastmcp-3

Conversation

@neelay-aign (Collaborator)

Merge once FastMCP releases v3.x to PyPI and we switch to it in pyproject.toml.

Copilot AI review requested due to automatic review settings February 10, 2026 20:34
@atlantis-platform-engineering
Error: This repo is not allowlisted for Atlantis.


Copilot AI left a comment


Pull request overview

Prepare the MCP integration code for FastMCP v3.x API changes (mount namespacing and async tool listing) ahead of switching the dependency in pyproject.toml.

Changes:

  • Update mcp.mount(..., prefix=...) to mcp.mount(..., namespace=...).
  • Replace get_tools() (dict) with list_tools() (list) and adjust tool formatting output.
  • Update docstrings/comments to reflect the new FastMCP API terminology.

```diff
 seen_names.add(server.name)
 logger.info(f"Mounting MCP server: {server.name}")
-mcp.mount(server, prefix=server.name)
+mcp.mount(server, namespace=server.name)
```

Copilot AI Feb 10, 2026


namespace on mount() and list_tools() are FastMCP v3.x API calls; if this PR lands before the dependency is bumped to v3.x, this will raise at runtime (unexpected keyword argument / missing attribute). To make this change safe (and keep CI green) until pyproject.toml is updated, consider supporting both APIs via capability detection (e.g., try namespace then fallback to prefix, and try list_tools() then fallback to get_tools()).

```diff
 # lazily initialize resources. We use asyncio.run() to bridge sync/async.
-tools = asyncio.run(server.get_tools())
-return [{"name": name, "description": tool.description or ""} for name, tool in tools.items()]
+tools = asyncio.run(server.list_tools())
```

Copilot AI Feb 10, 2026


namespace on mount() and list_tools() are FastMCP v3.x API calls; if this PR lands before the dependency is bumped to v3.x, this will raise at runtime (unexpected keyword argument / missing attribute). To make this change safe (and keep CI green) until pyproject.toml is updated, consider supporting both APIs via capability detection (e.g., try namespace then fallback to prefix, and try list_tools() then fallback to get_tools()).

```diff
-tools = asyncio.run(server.get_tools())
-return [{"name": name, "description": tool.description or ""} for name, tool in tools.items()]
+tools = asyncio.run(server.list_tools())
+return [{"name": tool.name, "description": tool.description or ""} for tool in tools]
```

Copilot AI Feb 10, 2026


list_tools() returning a list can make output ordering depend on server/tool registration order, which may be non-deterministic across environments and can cause flaky CLI output/tests. Consider sorting the returned tools by tool.name before formatting the list so mcp_list_tools() has stable output.

Suggested change:

```diff
-return [{"name": tool.name, "description": tool.description or ""} for tool in tools]
+# Sort tools by name to ensure deterministic output order across environments
+sorted_tools = sorted(tools, key=lambda tool: tool.name)
+return [{"name": tool.name, "description": tool.description or ""} for tool in sorted_tools]
```

Comment on lines +120 to +121:

```python
tools = asyncio.run(server.list_tools())
return [{"name": tool.name, "description": tool.description or ""} for tool in tools]
```


Bug: Tests in mcp_test.py use the outdated FastMCP v2 API (get_tools()), while production code uses the new v3 API (list_tools()), guaranteeing future test failures.
Severity: CRITICAL

Suggested Fix

Update the test file tests/aignostics/utils/mcp_test.py to use the new FastMCP v3.x API. Replace calls to server.get_tools() with server.list_tools(). Update the test logic to handle a list of Tool objects instead of a dictionary, accessing tool names via the tool.name attribute.

Prompt for AI Agent: Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.

Location: src/aignostics/utils/_mcp.py#L120-L121

Potential issue: The production code was updated to use the FastMCP v3.x API,
specifically changing from `server.get_tools()` to `server.list_tools()`. However, the
corresponding test file, `tests/aignostics/utils/mcp_test.py`, was not updated. It still
calls the old `get_tools()` method and expects a dictionary, while the new
`list_tools()` method returns a list of `Tool` objects. When the FastMCP dependency is
updated to v3.x as intended for this pull request, the tests will fail with an
`AttributeError`, blocking deployment.


@codecov

codecov bot commented Feb 10, 2026

❌ 1 Tests Failed:

| Tests completed | Failed | Passed | Skipped |
| --- | --- | --- | --- |
| 714 | 1 | 713 | 15 |
View the top 1 failed test(s) by shortest run time
tests.aignostics.application.cli_test::test_cli_run_submit_and_describe_and_cancel_and_download_and_delete
Stack Traces | 731s run time
runner = <typer.testing.CliRunner object at 0x7f01ed21e900>
tmp_path = PosixPath('.../pytest-20/popen-gw3/test_cli_run_submit_and_descri0')
silent_logging = None
record_property = <function record_property.<locals>.append_property at 0x7f01f93512d0>

    @pytest.mark.e2e
    @pytest.mark.long_running
    @pytest.mark.flaky(retries=3, delay=5)
    @pytest.mark.timeout(timeout=60 * 10)
    def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete(  # noqa: PLR0915
        runner: CliRunner, tmp_path: Path, silent_logging, record_property
    ) -> None:
        """Check run submit command runs successfully."""
        record_property("tested-item-id", "TC-APPLICATION-CLI-02")
        csv_content = "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
        csv_content += "platform_bucket_url\n"
        csv_content += (
            f"{SPOT_0_FILENAME};{SPOT_0_CRC32C};{SPOT_0_RESOLUTION_MPP};{SPOT_0_WIDTH};{SPOT_0_HEIGHT}"
            f";H&E;LUNG;LUNG_CANCER;{SPOT_0_GS_URL}"
        )
        csv_path = tmp_path / "dummy.csv"
        csv_path.write_text(csv_content)
        result = runner.invoke(
            cli,
            [
                "application",
                "run",
                "submit",
                HETA_APPLICATION_ID,
                str(csv_path),
                "--note",
                "note_of_this_complex_test",
                "--tags",
                "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag",
                "--deadline",
                (datetime.now(tz=UTC) + timedelta(minutes=10)).isoformat(),
                "--onboard-to-aignostics-portal",
                "--gpu-type",
                PIPELINE_GPU_TYPE,
                "--force",
            ],
        )
        output = normalize_output(result.stdout)
        assert re.search(
            r"Submitted run with id '[0-9a-f-]+' for '",
            output,
        ), f"Output '{output}' doesn't match expected pattern"
        assert result.exit_code == 0
    
        # Extract run ID from the output
        run_id_match = re.search(r"Submitted run with id '([0-9a-f-]+)' for '", output)
        assert run_id_match, f"Failed to extract run ID from output '{output}'"
        run_id = run_id_match.group(1)
    
        # Test that we can find this run by it's note via the query parameter
        list_result = runner.invoke(
            cli,
            [
                "application",
                "run",
                "list",
                "--query",
                "note_of_this_complex_test",
            ],
        )
        assert list_result.exit_code == 0
        list_output = normalize_output(list_result.stdout)
        assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by note via query"
    
        # Test that we can find this run by it's tag via the query parameter
        list_result = runner.invoke(
            cli,
            [
                "application",
                "run",
                "list",
                "--query",
                "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete",
            ],
        )
        assert list_result.exit_code == 0
        list_output = normalize_output(list_result.stdout)
        assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by tag via query"
    
        # Test that we cannot find this run by another tag via the query parameter
        list_result = runner.invoke(
            cli,
            [
                "application",
                "run",
                "list",
                "--query",
                "another_tag",
            ],
        )
        assert list_result.exit_code == 0
        list_output = normalize_output(list_result.stdout)
        assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by another tag via query"
    
        # Test that we can find this run by it's note
        list_result = runner.invoke(
            cli,
            [
                "application",
                "run",
                "list",
                "--note-regex",
                "note_of_this_complex_test",
            ],
        )
        assert list_result.exit_code == 0
        list_output = normalize_output(list_result.stdout)
        assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by note"
    
        # but not another note
        list_result = runner.invoke(
            cli,
            [
                "application",
                "run",
                "list",
                "--note-regex",
                "other_note",
            ],
        )
        assert list_result.exit_code == 0
        list_output = normalize_output(list_result.stdout)
        assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by other note"
    
        # Test that we can find this run by one of its tags
        list_result = runner.invoke(
            cli,
            [
                "application",
                "run",
                "list",
                "--tags",
                "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete",
            ],
        )
        assert list_result.exit_code == 0
        list_output = normalize_output(list_result.stdout)
>       assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by one tag"
E       AssertionError: Run ID 'f98b75a7-2a9c-467e-bb48-a325eb7ac135' not found when filtering by one tag
E       assert 'f98b75a7-2a9c-467e-bb48-a325eb7ac135' in 'Error: Failed to list runs: Failed to retrieve application runs: (500)Reason: Internal Server ErrorHTTP response headers: HTTPHeaderDict({\'date\': \'Tue, 17 Mar 2026 15:55:44 GMT\', \'server\': \'envoy\', \'content-length\': \'34\', \'content-type\': \'application/json\', \'x-trace-id\': \'bc106f9116a03143cec53dc61ace73bf\', \'x-envoy-upstream-service-time\': \'5091\', \'vary\': \'Accept-Encoding\'})HTTP response body: {"detail":"Internal server error"}'

.../aignostics/application/cli_test.py:503: AssertionError

To view more test analytics, go to the Test Analytics Dashboard

@sonarqubecloud

@neelay-aign force-pushed the task/changes-for-fastmcp-3 branch from ca1cf0b to cf8dc7b on March 13, 2026 at 10:18
Copilot AI review requested due to automatic review settings March 13, 2026 10:58
@neelay-aign force-pushed the task/changes-for-fastmcp-3 branch from cf8dc7b to bf719e1 on March 13, 2026 at 10:58
@neelay-aign changed the title from "task: Changes required for FastMCP v3.x" to "chore: Bump FastMCP to v3.x and make the necessary changes to support it" on Mar 13, 2026

Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 5 changed files in this pull request and generated no new comments.

Copilot AI review requested due to automatic review settings March 17, 2026 15:22

Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 5 changed files in this pull request and generated 1 comment.

```python
tools = asyncio.run(server.list_tools())
tool_names = [t.name for t in tools]
assert len(tool_names) == 2
# Verify namespacing: tools should be prefixed with server name
```
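The namespacing check hinted at by that trailing comment could be completed along these lines. This is a sketch with hypothetical names: the `"<server>_<tool>"` shape with an underscore separator is an assumption used only to illustrate the assertion, not a confirmed FastMCP v3 naming scheme, so verify the actual separator against the FastMCP docs before adopting it.

```python
def assert_namespaced(tool_names, server_name, sep="_"):
    """Assert every tool name carries the server's namespace prefix.

    sep="_" is an assumed separator; adjust it to whatever the installed
    FastMCP version actually uses when composing namespaced tool names.
    """
    prefix = f"{server_name}{sep}"
    missing = [n for n in tool_names if not n.startswith(prefix)]
    assert not missing, f"Tools missing namespace prefix {prefix!r}: {missing}"

# Hypothetical usage with stand-in tool names:
assert_namespaced(["demo_echo", "demo_add"], "demo")
```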
@sonarqubecloud

3 participants