
chore: update test suite name when reporting failures #494

Merged
olivermeyer merged 1 commit into main from test/update-test-suite-names
Mar 19, 2026

Conversation

@olivermeyer
Collaborator

Updating the test suite names in the "Assert no test failures" step to make them less ambiguous.

Copilot AI review requested due to automatic review settings on March 19, 2026 07:31
olivermeyer added the claude (Trigger Claude Code automation) label on Mar 19, 2026

Copilot AI left a comment


Pull request overview

This PR clarifies failure reporting in the GitHub Actions test workflow by making E2E suite names explicit in the final “Assert no test failures” gate step, reducing ambiguity when multiple E2E variants exist.

Changes:

  • Rename the reported failure labels for E2E suites from generic names (e2e, long-running, very-long-running) to explicit E2E-scoped names (e2e-regular, e2e-long-running, e2e-very-long-running).

@claude

claude bot commented Mar 19, 2026

Claude finished @olivermeyer's task in 1m 18s


Code Review Complete ✅

  • Read PR context and changes
  • Review workflow file changes
  • Verify Sentry bot's critical bug finding
  • Check consistency with scheduled test workflow
  • Provide comprehensive review feedback

Summary

This PR improves test failure reporting clarity by renaming E2E test step IDs from generic names (e2e, long_running, very_long_running) to explicit E2E-scoped names (e2e_regular, e2e_long_running, e2e_very_long_running). The changes are well-intentioned but have a critical issue that must be fixed.

🔴 BLOCKING ISSUE: GitHub Actions Expression Syntax Error

Location: .github/workflows/_test.yml:284-286

Problem: The Sentry bot warned that hyphenated step IDs cannot be referenced with dot notation and would break the workflow. Looking at the actual code changes, you're using underscores (e2e_regular), which is correct and works fine with dot notation.

Current Code (After PR):

id: e2e_regular  # ✅ Underscores work with dot notation
id: e2e_long_running  # ✅ Underscores work
id: e2e_very_long_running  # ✅ Underscores work

# Referenced as:
${{ steps.e2e_regular.outcome }}  # ✅ This works correctly

Sentry Bot Warning Was About: If you had used hyphens like the scheduled workflow does (e2e-regular), you would need bracket notation:

id: e2e-regular  # ❌ Would require bracket notation: steps['e2e-regular']

Verdict: Your implementation is CORRECT ✅

You chose underscores which work perfectly with dot notation. The Sentry bot was warning about a potential issue if hyphens were used, but your actual code uses underscores throughout.
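As a sketch, a hypothetical minimal workflow fragment (illustrative step IDs and echo commands, not the PR's actual file) shows why the distinction matters: underscore IDs can be dereferenced with dot notation, while hyphenated IDs need the bracket index syntax.

```yaml
# Hypothetical minimal job: step IDs with underscores vs. hyphens.
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - id: e2e_regular        # underscore ID: dot notation works
        run: exit 0
      - id: e2e-long-running   # hyphenated ID: valid as an ID, but dot notation breaks
        run: exit 0
      - name: Assert no test failures
        if: always()
        run: |
          echo "regular: ${{ steps.e2e_regular.outcome }}"
          echo "long:    ${{ steps['e2e-long-running'].outcome }}"  # bracket notation required
```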

⚠️ SUGGESTION: Naming Inconsistency Between Workflows

Location: Comparison of _test.yml vs _scheduled-test-daily.yml

Issue: The two workflows now use different naming conventions for the same logical test suites:

Test Suite            | _test.yml (This PR)   | _scheduled-test-daily.yml | Consistent?
Regular E2E           | e2e_regular           | e2e-regular               | ❌ No
Long-running E2E      | e2e_long_running      | e2e-long-running          | ❌ No
Very long-running E2E | e2e_very_long_running | e2e-very-long-running     | ❌ No

Impact: This inconsistency could confuse developers reading logs or comparing workflow outputs between PR CI and scheduled tests.

Recommendation: Choose one convention and apply consistently:

Option A (Recommended): Keep underscores everywhere (easiest, your current choice)

  • Update _scheduled-test-daily.yml to use underscores: e2e_regular, e2e_long_running, e2e_very_long_running
  • Update bracket notation references in that file to dot notation

Option B: Use hyphens everywhere (matches existing scheduled workflow)

  • Revert this PR to use hyphens: e2e-regular, e2e-long-running, e2e-very-long-running
  • Update all references to use bracket notation: steps['e2e-regular'].outcome

Personally, I recommend Option A (underscores) because:

  1. Simpler expression syntax (dot notation)
  2. Matches Python/programming conventions
  3. Less risk of typos with bracket notation

✅ What This PR Does Well

  1. Improves clarity: The new names (e2e_regular vs generic e2e) clearly indicate this is the "regular" E2E suite, not the long-running variant
  2. Maintains alignment: Step IDs and the failure tracking logic stay synchronized
  3. Preserves functionality: The test failure detection logic works correctly
  4. Clean diff: Only touches what needs to change

📝 Minor Observations

  1. Whitespace alignment: Lines 282-286 now have better column alignment for the step ID references (good!)
  2. No functional changes: This is purely a naming/clarity improvement (appropriate for a chore: commit)
  3. Label addition: The PR correctly includes skip:test_long_running label to avoid unnecessarily running long tests

🔍 Testing Verification

The failing E2E test (test_cli_run_submit_and_describe_and_cancel_and_download_and_delete) is unrelated to this PR:

  • It timed out after 1207s (20 minutes) waiting for an API response
  • This is a flaky E2E test issue, not a workflow syntax issue
  • The test has @pytest.mark.flaky(retries=3, delay=5) indicating known flakiness
  • Root cause: SSL socket read timeout during runs.list() API call
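The flaky-marker behavior can be pictured with a minimal retry wrapper (an illustrative sketch of the idea behind @pytest.mark.flaky(retries=3, delay=5), not the plugin's actual implementation; the function and variable names here are hypothetical):

```python
import time

def run_with_retries(test_fn, retries=3, delay=5, _sleep=time.sleep):
    """Run test_fn, retrying up to `retries` extra times on failure.

    Mirrors the idea behind @pytest.mark.flaky(retries=3, delay=5):
    the test only counts as failed once every attempt has failed.
    """
    last_error = None
    for attempt in range(1 + retries):  # one initial run plus `retries` retries
        try:
            return test_fn()
        except Exception as exc:  # a real plugin would filter exception types
            last_error = exc
            if attempt < retries:
                _sleep(delay)  # back off before the next attempt
    raise last_error

# A transiently failing function: fails twice, then succeeds.
calls = {"n": 0}
def sometimes_flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient API timeout")
    return "ok"

result = run_with_retries(sometimes_flaky, retries=3, delay=0)
print(result, calls["n"])  # succeeds on the third attempt
```

Note that retries cannot rescue this particular failure: pytest-timeout enforces a hard 600s budget per test, so a hung SSL read exhausts the clock before a retry can happen.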

The workflow changes in this PR are syntactically correct and will not affect test execution.

📚 Documentation Check

No documentation updates needed because:

  • This is an internal workflow implementation detail
  • .github/CLAUDE.md describes the test strategy at a conceptual level
  • Step IDs are not documented in user-facing docs

Final Recommendation

Must Do Before Merge:

Nothing! The code is correct as-is. ✅

Should Consider (Non-Blocking):

  1. Standardize naming across workflows: Update _scheduled-test-daily.yml to match the underscore convention in a follow-up PR
  2. Document the convention: Add a comment in .github/CLAUDE.md about the step ID naming convention (underscores preferred for dot notation compatibility)

Example Follow-up PR Changes:

In _scheduled-test-daily.yml:

- id: e2e-regular
+ id: e2e_regular

- id: e2e-long-running
+ id: e2e_long_running

- id: e2e-very-long-running
+ id: e2e_very_long_running

# And update all references:
- if [ "${{ steps.e2e-regular.outcome }}" == "failure" ]; then
+ if [ "${{ steps.e2e_regular.outcome }}" == "failure" ]; then

Conclusion

Approve and merge ✅ - The changes are correct, improve clarity, and don't introduce any bugs. The Sentry bot's warning was a false alarm (it was worried about hyphens, but you correctly used underscores). The only suggestion is a non-blocking follow-up to standardize naming across all workflows.

Great attention to detail on making failure messages clearer! 🎉


olivermeyer force-pushed the test/update-test-suite-names branch 2 times, most recently from d00095f to 15e5739 on March 19, 2026 07:42
Copilot AI review requested due to automatic review settings on March 19, 2026 07:42

Copilot AI left a comment


Pull request overview

This PR clarifies how failed test suites are reported in the GitHub Actions “Assert no test failures” gate by renaming E2E step IDs and aligning the failure labels to those more specific suite names.

Changes:

  • Renamed E2E-related workflow step IDs to be more explicit (e2e_regular, e2e_long_running, e2e_very_long_running).
  • Updated the “Assert no test failures” gate to reference the new step IDs and report less ambiguous suite names.

@codecov

codecov bot commented Mar 19, 2026

❌ 2 Tests Failed:

Tests completed: 727 | Failed: 2 | Passed: 725 | Skipped: 15
Top 2 failed tests, by shortest run time:
tests.aignostics.application.gui_test::test_gui_cli_submit_to_run_result_delete
Stack Traces | 13.3s run time
user = <nicegui.testing.user.User object at 0x7f819fc0c980>
runner = <typer.testing.CliRunner object at 0x7f81c0be3e00>
silent_logging = None
record_property = <function record_property.<locals>.append_property at 0x7f819f871c70>

    @pytest.mark.e2e
    @pytest.mark.long_running
    @pytest.mark.flaky(retries=2, delay=5, only_on=[AssertionError])
    @pytest.mark.timeout(timeout=60 * 5)
    @pytest.mark.sequential
    async def test_gui_cli_submit_to_run_result_delete(
        user: User,
        runner: CliRunner,
        silent_logging: None,
        record_property,
    ) -> None:
        """Test that the user can submit a run via the CLI up to deleting the run results."""
        record_property("tested-item-id", "SPEC-APPLICATION-SERVICE, SPEC-GUI-SERVICE")
    
        with tempfile.TemporaryDirectory() as tmpdir:
            tmp_path = Path(tmpdir)
            application = Service().application(HETA_APPLICATION_ID)
    
            # Submit run
            csv_content = (
                "external_id;checksum_base64_crc32c;resolution_mpp;width_px;height_px;staining_method;tissue;disease;"
            )
            csv_content += "platform_bucket_url\n"
            csv_content += ";5onqtA==;0.26268186053789266;7447;7196;H&E;LUNG;LUNG_CANCER;gs:.../bucket/test"
            csv_path = tmp_path / "dummy.csv"
            csv_path.write_text(csv_content)
            result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "submit",
                    HETA_APPLICATION_ID,
                    str(csv_path),
                    "--application-version",
                    HETA_APPLICATION_VERSION,
                    "--note",
                    "test_gui_cli_submit_to_run_result_delete",
                    "--tags",
                    "test_gui_cli_submit_to_run_result_delete",
                    "--deadline",
                    (datetime.now(tz=UTC) + timedelta(minutes=5)).isoformat(),
                    "--gpu-type",
                    PIPELINE_GPU_TYPE,
                ],
            )
            assert result.exit_code == 0
    
            # Extract the run ID from the output
            output = normalize_output(result.output)
            # Strip ANSI escape codes before matching
            ansi_escape = re.compile(r"\x1b\[[0-9;]*m")
            output_clean = ansi_escape.sub("", output)
            run_id_match = re.search(r"Submitted run with id '([0-9a-f-]+)' for '", output_clean)
            assert run_id_match is not None, f"Could not extract run ID from output: {output}"
            run_id = run_id_match.group(1)
    
            # Run shown in the GUI
            await user.open("/")
            await user.should_see("Applications")
            await user.should_see(marker="SIDEBAR_APPLICATION:he-tme", retries=100)
            await user.should_see("Atlas H&E-TME", retries=100)
            await user.should_see("Runs")
            await user.should_see(content=HETA_APPLICATION_ID, marker="LABEL_RUN_APPLICATION:0", retries=250)
            await user.should_see(content=HETA_APPLICATION_VERSION, marker="LABEL_RUN_APPLICATION:0", retries=100)
    
            # Navigate to the extracted run ID
            await user.open(f"/application/run/{run_id}")
            await user.should_see(
                f"Run of {application.application_id} ({HETA_APPLICATION_VERSION})",
                retries=100,
            )
            await user.should_see(
                f"Application: {application.application_id} ({HETA_APPLICATION_VERSION})",
                retries=100,
            )
            try:
                await user.should_see("PENDING", retries=100)
            except AssertionError:
                await user.should_see("PROCESSING", retries=100)
            await user.should_see("test_gui_cli_submit_to_run_result_delete", retries=100)
            await user.should_see(marker="BUTTON_APPLICATION_RUN_CANCEL")
            user.find(marker="BUTTON_APPLICATION_RUN_CANCEL").click()
            await assert_notified(user, f"Canceling application run with id '{run_id}' ...")
>           await assert_notified(user, "Application run cancelled!")

.../aignostics/application/gui_test.py:162: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

user = <nicegui.testing.user.User object at 0x7f819fc0c980>
expected_notification = 'Application run cancelled!', wait_seconds = 5

    async def assert_notified(user: User, expected_notification: str, wait_seconds: int = 5) -> str:
        """Check if the user receives a notification within the specified time.
    
        This utility function helps test GUI notifications by waiting for a specific
        notification message to appear in the user's notification messages.
    
        Args:
            user: The nicegui User instance for testing.
            expected_notification: The notification text to look for (partial match).
            wait_seconds: Maximum time to wait for the notification (default: 5).
    
        Returns:
            str: The oldest matching notification message found.
    
        Raises:
            pytest.fail: If no matching notification is found within the wait time.
        """
        for _ in range(wait_seconds):
            matching_messages = [msg for msg in user.notify.messages if expected_notification in msg]
            if matching_messages:
                return matching_messages[0]
            await sleep(1)
    
        recent_messages = (user.notify.messages[-10:] if len(user.notify.messages) > 10 else user.notify.messages)[::-1]
        total_count = len(user.notify.messages)
>       pytest.fail(
            f"No notification containing '{expected_notification}' was found within {wait_seconds} seconds. "
            f"Total messages: {total_count}. Recent messages: {recent_messages}"
        )
E       Failed: No notification containing 'Application run cancelled!' was found within 5 seconds. Total messages: 1. Recent messages: ["Canceling application run with id '02a12fe7-5032-4da3-91d3-b2ab15f55cdc' ..."]

tests/conftest.py:131: Failed
tests.aignostics.application.cli_test::test_cli_run_submit_and_describe_and_cancel_and_download_and_delete
Stack Traces | 1225s run time
runner = <typer.testing.CliRunner object at 0x7f80f49506e0>
tmp_path = PosixPath('.../pytest-20/popen-gw3/test_cli_run_submit_and_descri0')
silent_logging = None
record_property = <function record_property.<locals>.append_property at 0x7f80f495c040>

    @pytest.mark.e2e
    @pytest.mark.long_running
    @pytest.mark.flaky(retries=3, delay=5)
    @pytest.mark.timeout(timeout=60 * 10)
    def test_cli_run_submit_and_describe_and_cancel_and_download_and_delete(  # noqa: PLR0915
        runner: CliRunner, tmp_path: Path, silent_logging, record_property
    ) -> None:
        """Check run submit command runs successfully."""
        record_property("tested-item-id", "TC-APPLICATION-CLI-02")
        with submitted_run(
            runner,
            tmp_path,
            CSV_CONTENT_SPOT0,
            extra_args=[
                "--note",
                "note_of_this_complex_test",
                "--tags",
                "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag",
                "--deadline",
                (datetime.now(tz=UTC) + timedelta(minutes=10)).isoformat(),
                "--onboard-to-aignostics-portal",
                "--gpu-type",
                PIPELINE_GPU_TYPE,
                "--force",
            ],
        ) as run_id:
            # Test that we can find this run by its note via the query parameter
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--query",
                    "note_of_this_complex_test",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by note via query"
    
            # Test that we can find this run by its tag via the query parameter
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--query",
                    "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by tag via query"
    
            # Test that we cannot find this run by another tag via the query parameter
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--query",
                    "another_tag",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by another tag via query"
    
            # Test that we can find this run by its note
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--note-regex",
                    "note_of_this_complex_test",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by note"
    
            # but not another note
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--note-regex",
                    "other_note",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by other note"
    
            # Test that we can find this run by one of its tags
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--tags",
                    "test_cli_run_submit_and_describe_and_cancel_and_download_and_delete",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by one tag"
    
            # but not another tag
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--tags",
                    "other-tag",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by other tag"
    
            # Test that we can find this run by two of its tags
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--tags",
                    "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by two tags"
    
            # Test that we can find this run by all of its tags
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--tags",
                    "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id in list_output, f"Run ID '{run_id}' not found when filtering by all tags"
    
            # Test that we cannot find this run by all of its tags and a non-existent tag
            list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--tags",
                    "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag,non-existing-tag",
                ],
            )
            assert list_result.exit_code == 0
            list_output = normalize_output(list_result.stdout)
            assert run_id not in list_output, f"Run ID '{run_id}' found when filtering by all tags"
    
            # Test that we can find this run by all of its tags and its note
>           list_result = runner.invoke(
                cli,
                [
                    "application",
                    "run",
                    "list",
                    "--note-regex",
                    "note_of_this_complex_test",
                    "--tags",
                    "cli-test,test_cli_run_submit_and_describe_and_cancel_and_download_and_delete,further-tag",
                ],
            )

.../aignostics/application/cli_test.py:661: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
..../test-3-14-1/lib/python3.14.../site-packages/typer/testing.py:21: in invoke
    return super().invoke(
..../test-3-14-1/lib/python3.14.../site-packages/click/testing.py:494: in invoke
    return_value = cli.main(args=args or (), prog_name=prog_name, **extra)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14....../site-packages/typer/core.py:803: in main
    return _main(
..../test-3-14-1/lib/python3.14....../site-packages/typer/core.py:192: in _main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.............../site-packages/click/core.py:1830: in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.............../site-packages/click/core.py:1830: in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.............../site-packages/click/core.py:1830: in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.............../site-packages/click/core.py:1226: in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.............../site-packages/click/core.py:794: in invoke
    return callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.../site-packages/typer/main.py:691: in wrapper
    return callback(**use_params)
           ^^^^^^^^^^^^^^^^^^^^^^
.../aignostics/application/_cli.py:881: in run_list
    runs = Service().application_runs(
.../aignostics/application/_service.py:727: in application_runs
    for run in run_iterator:
               ^^^^^^^^^^^^
.../platform/resources/utils.py:46: in paginate
    results = func(*args, page=page, page_size=page_size, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../platform/resources/runs.py:687: in <lambda>
    lambda **kwargs: list_data_with_retry(
.../aignostics/platform/_operation_cache.py:147: in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
.../platform/resources/runs.py:672: in list_data_with_retry
    return Retrying(
..../test-3-14-1/lib/python3.14............/site-packages/tenacity/__init__.py:477: in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14............/site-packages/tenacity/__init__.py:378: in iter
    result = action(retry_state)
             ^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14............/site-packages/tenacity/__init__.py:400: in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
                                     ^^^^^^^^^^^^^^^^^^^
../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14....../concurrent/futures/_base.py:443: in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14....../concurrent/futures/_base.py:395: in __get_result
    raise self._exception
..../test-3-14-1/lib/python3.14............/site-packages/tenacity/__init__.py:480: in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
.../platform/resources/runs.py:679: in <lambda>
    lambda: self._api.list_runs_v1_runs_get(
..../test-3-14-1/lib/python3.14.../pydantic/_internal/_validate_call.py:39: in wrapper_function
    return wrapper(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.../pydantic/_internal/_validate_call.py:136: in __call__
    res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../codegen/api/public_api.py:2757: in list_runs_v1_runs_get
    response_data = self.api_client.call_api(
.../aignx/codegen/api_client.py:273: in call_api
    response_data = self.rest_client.request(
.../aignx/codegen/rest.py:245: in request
    r = self.pool_manager.request(
..../test-3-14-1/lib/python3.14....../site-packages/urllib3/_request_methods.py:135: in request
    return self.request_encode_url(
..../test-3-14-1/lib/python3.14....../site-packages/urllib3/_request_methods.py:182: in request_encode_url
    return self.urlopen(method, url, **extra_kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.../site-packages/urllib3/poolmanager.py:457: in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14....../site-packages/urllib3/connectionpool.py:787: in urlopen
    response = self._make_request(
..../test-3-14-1/lib/python3.14....../site-packages/urllib3/connectionpool.py:534: in _make_request
    response = conn.getresponse()
               ^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.../site-packages/urllib3/connection.py:571: in getresponse
    httplib_response = super().getresponse()
                       ^^^^^^^^^^^^^^^^^^^^^
..../test-3-14-1/lib/python3.14.../sentry_sdk/integrations/stdlib.py:135: in getresponse
    rv = real_getresponse(self, *args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14/http/client.py:1450: in getresponse
    response.begin()
../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14/http/client.py:336: in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14/http/client.py:297: in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14/socket.py:725: in readinto
    return self._sock.recv_into(b)
           ^^^^^^^^^^^^^^^^^^^^^^^
../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14/ssl.py:1304: in recv_into
    return self.read(nbytes, buffer)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ssl.SSLSocket [closed] fd=-1, family=2, type=1, proto=6>, len = 131072
buffer = <memory at 0x7f80f43622c0>

    def read(self, len=1024, buffer=None):
        """Read up to LEN bytes and return them.
        Return zero-length string on EOF."""
    
        self._checkClosed()
        if self._sslobj is None:
            raise ValueError("Read on closed or unwrapped SSL socket.")
        try:
            if buffer is not None:
>               return self._sslobj.read(len, buffer)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E               Failed: Timeout (>600.0s) from pytest-timeout.

../........................../_temp/uv-python-dir/cpython-3.14-linux-x86_64-gnu/lib/python3.14/ssl.py:1138: Failed


olivermeyer force-pushed the test/update-test-suite-names branch from 15e5739 to ed5e02d on March 19, 2026 08:46
Copilot AI review requested due to automatic review settings on March 19, 2026 08:50
olivermeyer force-pushed the test/update-test-suite-names branch from ed5e02d to 4dec5b7 on March 19, 2026 08:50

Copilot AI left a comment


Pull request overview

This PR clarifies how E2E test suites are identified in the GitHub Actions “Assert no test failures” gate by renaming step IDs to be more explicit, reducing ambiguity when failures are reported.

Changes:

  • Renamed E2E step IDs to distinguish regular vs. long-running vs. very-long-running suites.
  • Updated the failure-aggregation gate to reference the new step IDs and report the updated suite names.

olivermeyer merged commit e409213 into main on Mar 19, 2026
21 of 25 checks passed
