
tests: refactor task-list-tests (direct functional calls + happy path)#305

Open
saathviksheerla wants to merge 5 commits into openml:main from saathviksheerla:refactor/task-list-tests-direct-calls

Conversation

@saathviksheerla
Contributor

Description

Fixes #300

  • The migration test is already covered in tasks_migration_test.py.
  • Added a happy-path test of the Python endpoint by calling the API.
  • Added input-validation tests that call the Python API, specifically for validation handled through FastAPI + Pydantic function type annotations.
  • The remaining tests are direct functional tests that verify function behavior (correct processing of parameters and raising of errors) by calling the Python function that implements the endpoint, as suggested.

@coderabbitai
Contributor

coderabbitai bot commented Apr 1, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Tests for the task-list endpoint were refactored to call the async data-access function list_tasks(...) directly using an expdb_test: AsyncConnection fixture instead of relying solely on HTTP endpoint calls. Filter, pagination, and number-of-instances checks were moved or reimplemented as direct-call assertions on returned task objects; HTTP-level tests were added/kept for input validation, pagination clamping, schema errors (422), and mapping NoResultsError to a 404 problem response. A new HTTP "happy path" test asserts exact field values for task_id=1.

Possibly related PRs

  • feat: Add GET/POST /task/list endpoint #277: Introduces/uses list_tasks, TaskStatusFilter, Pagination, and AsyncConnection types and refactors tests to call list_tasks directly, matching the code-level changes in this PR.

Suggested labels

tests

🚥 Pre-merge checks: ✅ 5 passed

  • Title check ✅ Passed: The title clearly describes the main change: refactoring task-list tests to use direct functional calls and adding a happy path test.
  • Description check ✅ Passed: The description is related to the changeset, referencing issue #300 and outlining the testing approach used in the refactoring with checkboxes for completed items.
  • Linked Issues check ✅ Passed: The pull request meets all four objectives from issue #300: migration test coverage, happy-path API test, input-validation tests calling the Python API, and direct functional tests for parameter processing and error raising.
  • Out of Scope Changes check ✅ Passed: All changes are in scope; the pull request only modifies test files to refactor task-list tests as required by issue #300, with no unrelated code changes.
  • Docstring Coverage check ✅ Passed: Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@codecov

codecov bot commented Apr 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.26%. Comparing base (3f1a7ac) to head (ed857e0).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #305      +/-   ##
==========================================
+ Coverage   93.04%   93.26%   +0.21%     
==========================================
  Files          71       72       +1     
  Lines        2991     3059      +68     
  Branches      221      222       +1     
==========================================
+ Hits         2783     2853      +70     
+ Misses        152      151       -1     
+ Partials       56       55       -1     

☔ View full report in Codecov by Sentry.

@saathviksheerla saathviksheerla marked this pull request as ready for review April 1, 2026 03:50
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues and left some high-level feedback:

  • The test_list_tasks_negative_pagination_safely_clamped docstring describes potential future behavior (422 from Pydantic) that is not asserted in the test, which can become misleading over time; consider either removing the forward-looking note or encoding that expectation explicitly (e.g., via parametrization) when/if the behavior changes.
  • Now that the module mixes HTTP-level tests (py_api) and direct function-call tests (list_tasks + expdb_test), consider grouping or naming the tests to make the separation of concerns clearer (e.g., split into separate files or use naming prefixes for API vs. direct-call tests).
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `test_list_tasks_negative_pagination_safely_clamped` docstring describes potential future behavior (422 from Pydantic) that is not asserted in the test, which can become misleading over time; consider either removing the forward-looking note or encoding that expectation explicitly (e.g., via parametrization) when/if the behavior changes.
- Now that the module mixes HTTP-level tests (`py_api`) and direct function-call tests (`list_tasks` + `expdb_test`), consider grouping or naming the tests to make the separation of concerns clearer (e.g., split into separate files or use naming prefixes for API vs. direct-call tests).

## Individual Comments

### Comment 1
<location path="tests/routers/openml/task_list_test.py" line_range="241-244" />
<code_context>
+async def test_list_tasks_no_results(payload: dict[str, Any], expdb_test: AsyncConnection) -> None:
</code_context>
<issue_to_address>
**suggestion (testing):** Consider keeping an HTTP-level `py_api` test for 404 NoResultsError mapping in addition to the direct-call exception test.

The previous version validated the full HTTP behavior (404 status, `application/problem+json` content type, and inclusion of `NoResultsError.uri` in the body). The new version only confirms that `list_tasks` raises the correct Python exception, but not that it’s mapped to the expected HTTP response. Please either reintroduce an HTTP-level test (e.g. `test_list_tasks_no_results_http`) or add a small `py_api` test that asserts 404 + `problem+json` + error type, and keep this direct-call test as-is.

Suggested implementation:

```python
@pytest.mark.parametrize(
    "payload",
    [
        # A tag that does not exist
        {"tag": "nonexistent-tag-123"},
        # A task ID that does not exist
        {"task_id": 999999999},
        # A data_name that does not match any task
        {"data_name": "nonexistent-dataset-123"},
    ],
    ids=["bad_tag", "bad_task_id", "bad_data_name"],
)
async def test_list_tasks_no_results(payload: dict[str, Any], expdb_test: AsyncConnection) -> None:
    """Filters matching nothing raise NoResultsError at the Python API level."""
    # Direct-call test that ensures the router logic raises the domain exception.
    with pytest.raises(NoResultsError):
        await list_tasks(expdb_test, **payload)


@pytest.mark.parametrize(
    "payload",
    [
        {"tag": "nonexistent-tag-123"},
        {"task_id": 999999999},
        {"data_name": "nonexistent-dataset-123"},
    ],
    ids=["bad_tag", "bad_task_id", "bad_data_name"],
)
async def test_list_tasks_no_results_http(
    payload: dict[str, Any],
    async_client: AsyncClient,
) -> None:
    """Filters matching nothing are mapped to HTTP 404 problem+json with NoResultsError type."""
    response = await async_client.get("/openml/tasks", params=payload)

    assert response.status_code == status.HTTP_404_NOT_FOUND
    assert response.headers["content-type"].startswith("application/problem+json")

    body = response.json()
    # Error type should include the NoResultsError identifier/URI
    assert isinstance(body.get("type"), str)
    assert "NoResultsError" in body["type"]

```

To make this compile and follow your project’s conventions, you will likely need to:

1. Ensure imports at the top of `tests/routers/openml/task_list_test.py`:
   - `from httpx import AsyncClient` (or whatever async client type your `async_client` fixture uses).
   - `from starlette import status` or `from fastapi import status` (whichever you already use elsewhere).
   - `from ...routers.openml.task_list import list_tasks` (adjust module path to the actual router function).
   - `from ...exceptions import NoResultsError` (or your actual domain error module).
2. If your test client fixture is named differently (e.g. `client` instead of `async_client`) or is sync-only, update the `test_list_tasks_no_results_http` signature and call style accordingly.
3. If you already have shared “no-results” payload constants or fixtures in this test module, you may want to reuse them rather than hard-coding the example payloads in both parametrizations.
</issue_to_address>

### Comment 2
<location path="tests/routers/openml/task_list_test.py" line_range="86-94" />
<code_context>
+    ],
+    ids=["bad_limit_type", "bad_offset_type"],
+)
+async def test_list_tasks_invalid_pagination_type(
+    pagination_override: dict[str, Any], py_api: httpx.AsyncClient
+) -> None:
+    """Invalid pagination types return 422 Unprocessable Entity."""
+    response = await py_api.post(
+        "/tasks/list",
+        json={"pagination": pagination_override},
+    )
+    assert response.status_code == HTTPStatus.UNPROCESSABLE_ENTITY
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Strengthen the 422 tests by asserting on the validation error structure, not only the status code.

These tests only assert the 422 status. To better guard against regressions, also assert key fields in the error body (e.g. `detail[0]['loc']` includes `pagination.limit`/`pagination.offset` and the error type is `type_error.integer` or equivalent). This ensures the request fails for the intended validation reason, not some other 422 path introduced later.

Suggested implementation:

```python
@pytest.mark.parametrize(
    ("pagination_override", "expected_loc_suffix"),
    [
        ({"limit": "abc", "offset": 0}, ("pagination", "limit")),  # Invalid type
        ({"limit": 5, "offset": "xyz"}, ("pagination", "offset")),  # Invalid type
    ],
    ids=["bad_limit_type", "bad_offset_type"],
)

```

```python
async def test_list_tasks_invalid_pagination_type(
    pagination_override: dict[str, Any],
    expected_loc_suffix: tuple[str, str],
    py_api: httpx.AsyncClient,
) -> None:
    """Invalid pagination types return 422 Unprocessable Entity with expected validation errors."""
    response = await py_api.post(
        "/tasks/list",
        json={"pagination": pagination_override},
    )
    assert response.status_code == HTTPStatus.UNPROCESSABLE_ENTITY

    body = response.json()
    # FastAPI / Pydantic-style validation error structure
    assert "detail" in body
    assert isinstance(body["detail"], list)
    assert body["detail"], "Expected at least one validation error"

    error = body["detail"][0]
    # Ensure we are failing specifically on the pagination field we overrode
    assert "loc" in error
    assert error["loc"][-2:] == list(expected_loc_suffix)

    # Allow for common integer type error codes across Pydantic versions
    assert error.get("type") in {"type_error.integer", "int_parsing", "int_type"}

async def test_list_tasks_default(py_api: httpx.AsyncClient) -> None:

```
</issue_to_address>


Comment on lines +86 to +94

async def test_list_tasks_invalid_pagination_type(
    pagination_override: dict[str, Any], py_api: httpx.AsyncClient
) -> None:
    """Invalid pagination types return 422 Unprocessable Entity."""
    response = await py_api.post(
        "/tasks/list",
        json={"pagination": pagination_override},
    )
    assert response.status_code == HTTPStatus.UNPROCESSABLE_ENTITY
Contributor



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 210-213: The test currently skips validation if NumberOfInstances
is missing; update the assertion to first require that "NumberOfInstances" is
present in the qualities dict (qualities = {q["name"]: q["value"] for q in
task["quality"]}) and then assert that min_instances <=
float(qualities["NumberOfInstances"]) <= max_instances so the test fails when
the filter is absent and only passes when the value exists and lies within the
expected range.
- Around line 143-151: The test test_list_tasks_filter_task_id currently relies
on list_tasks defaulting status to active which makes the ID equality assertion
brittle; update the call to list_tasks in that test to pass an explicit status
argument (e.g., Status.active) alongside Pagination() and task_id to ensure the
filter is deterministic—locate the list_tasks invocation in
test_list_tasks_filter_task_id and add the status parameter using the project's
Status enum or equivalent.
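
The NumberOfInstances fix described in the first inline comment can be sketched in isolation (the task payload shape and the helper below are hypothetical illustrations, not the repository's test code):

```python
# Hypothetical task payloads mimicking the qualities structure from the review.
task_ok = {"quality": [{"name": "NumberOfInstances", "value": "150"}]}
task_missing = {"quality": [{"name": "NumberOfFeatures", "value": "4"}]}

def check_instances(task: dict, min_instances: float, max_instances: float) -> None:
    qualities = {q["name"]: q["value"] for q in task["quality"]}
    # Require the quality to be present so the test fails when the filter is
    # absent, instead of silently skipping the range check.
    assert "NumberOfInstances" in qualities
    assert min_instances <= float(qualities["NumberOfInstances"]) <= max_instances

check_instances(task_ok, 100, 200)  # value present and in range: passes

caught = False
try:
    check_instances(task_missing, 100, 200)  # quality absent: must fail loudly
except AssertionError:
    caught = True
assert caught
```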

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 403743a0-a30b-428d-b187-a1445b1ae6b9

📥 Commits

Reviewing files that changed from the base of the PR and between 3f1a7ac and eb9c646.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
tests/routers/openml/task_list_test.py (1)

150-160: ⚠️ Potential issue | 🟡 Minor

Make the remaining direct filter tests explicit about status.

These tests still rely on the route default TaskStatusFilter.ACTIVE, so a fixture status change can fail them even when the filter under test still works. If the goal is to isolate task_type_id/tag/data_id/data_name processing, pass TaskStatusFilter.ALL explicitly the same way test_list_tasks_filter_task_id already does.

💡 Suggested change
 async def test_list_tasks_filter_type(expdb_test: AsyncConnection) -> None:
     """Filter by task_type_id returns only tasks of that type."""
-    tasks = await list_tasks(pagination=Pagination(), task_type_id=1, expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        task_type_id=1,
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )
@@
 async def test_list_tasks_filter_tag(expdb_test: AsyncConnection) -> None:
     """Filter by tag returns only tasks with that tag."""
-    tasks = await list_tasks(pagination=Pagination(), tag="OpenML100", expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        tag="OpenML100",
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )
@@
 async def test_list_tasks_filter_data_id(expdb_test: AsyncConnection) -> None:
     """Filter by data_id returns only tasks that use that dataset."""
     data_id = 10
-    tasks = await list_tasks(pagination=Pagination(), data_id=[data_id], expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        data_id=[data_id],
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )
@@
 async def test_list_tasks_filter_data_name(expdb_test: AsyncConnection) -> None:
     """Filter by data_name returns only tasks whose dataset matches."""
-    tasks = await list_tasks(pagination=Pagination(), data_name="mfeat-pixel", expdb=expdb_test)
+    tasks = await list_tasks(
+        pagination=Pagination(),
+        data_name="mfeat-pixel",
+        status=TaskStatusFilter.ALL,
+        expdb=expdb_test,
+    )

Also applies to: 177-189

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/task_list_test.py` around lines 150 - 160, The tests
test_list_tasks_filter_type and test_list_tasks_filter_tag (and the other
remaining direct filter tests) rely on the default TaskStatusFilter.ACTIVE; to
isolate the filtering under test pass TaskStatusFilter.ALL explicitly when
calling list_tasks (same approach as test_list_tasks_filter_task_id) so the
assertions don't break if fixture statuses change—update calls to
list_tasks(..., task_type_id=..., pagination=Pagination(), expdb=expdb_test,
status=TaskStatusFilter.ALL) and similarly for tag/data_id/data_name tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 238-244: The test functions (e.g.,
test_list_tasks_inputs_are_basic_subset) currently only validate names inside
the inner loop, so a regression that yields empty task["input"] or
task["quality"] would still pass; update the tests that call list_tasks (using
Pagination(...), expdb=expdb_test) to first assert that each sampled task has a
non-empty "input" (and similarly "quality" in the other test) array (e.g.,
assert task["input"] and assert task["quality"]) before iterating over entries
and checking names/types; keep using the same variables (tasks, task, inp) and
the same basic_inputs/basic_qualities sets to perform the existing membership
assertions after the non-empty check.
- Around line 170-174: The test currently collapses tasks to a set (returned_ids
= {t["task_id"] for t in tasks}), which hides duplicate tasks; change the
assertion to compare ordered lists instead: extract the list of task IDs from
tasks (e.g., [t["task_id"] for t in tasks]) and assert that sorted(actual_ids)
== sorted(expected_ids) (where expected_ids is the ids variable) so duplicates
and ordering regressions are detected; update the assertion around the
list_tasks call accordingly.
- Around line 42-71: The test test_list_tasks_negative_pagination_safely_clamped
currently only asserts row count for the negative-offset case so it could pass
even if the server returned the wrong page; update the negative-offset branch to
make an explicit second request with pagination {"limit": limit, "offset": 0} to
/tasks/list using the same py_api client and compare that the returned task_id
list from the negative-offset response equals the task_id list from the offset=0
response (use the JSON bodies from both responses to extract task_id arrays)
instead of only checking len(...) <= expected_max_results.
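
The strengthened assertion for the negative-offset case (comparing against an explicit offset=0 baseline rather than only bounding the row count) can be sketched against a clamping stand-in; the clamp-to-zero behavior here is an assumption for illustration, not the server's verified implementation:

```python
TASK_IDS = list(range(1, 21))  # stand-in for the ordered task ids in the test database

def clamp(value: int) -> int:
    # Assumed safe-clamping behavior: negative pagination values become 0.
    return max(value, 0)

def page(limit: int, offset: int) -> list[int]:
    return TASK_IDS[clamp(offset) : clamp(offset) + clamp(limit)]

# Weak check: only bounds the row count, and passes for many wrong pages.
assert len(page(5, -3)) <= 5

# Strong check: a negative offset must return exactly the offset=0 page.
assert page(5, -3) == page(5, 0)
```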

---

Duplicate comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 150-160: The tests test_list_tasks_filter_type and
test_list_tasks_filter_tag (and the other remaining direct filter tests) rely on
the default TaskStatusFilter.ACTIVE; to isolate the filtering under test pass
TaskStatusFilter.ALL explicitly when calling list_tasks (same approach as
test_list_tasks_filter_task_id) so the assertions don't break if fixture
statuses change—update calls to list_tasks(..., task_type_id=...,
pagination=Pagination(), expdb=expdb_test, status=TaskStatusFilter.ALL) and
similarly for tag/data_id/data_name tests.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 920de864-fc17-451b-aef8-d65460607a4c

📥 Commits

Reviewing files that changed from the base of the PR and between eb9c646 and d59466e.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

@saathviksheerla saathviksheerla changed the title from "refactor functional tests" to "tests: refactor task-list-tests (direct functional calls + happy path)" Apr 1, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
tests/routers/openml/task_list_test.py (2)

213-216: Assert the requested page slice, not just the upper bound.

len(tasks) <= limit doesn't prove the offset=5 case returned the second page; it still passes if the implementation returns an under-filled first-page slice. Comparing IDs against the corresponding slice from an offset=0 baseline would make this test match its docstring.

♻️ Suggested tightening
 async def test_list_tasks_pagination(limit: int, offset: int, expdb_test: AsyncConnection) -> None:
     """Pagination limit and offset are respected."""
     tasks = await list_tasks(pagination=Pagination(limit=limit, offset=offset), expdb=expdb_test)
     assert len(tasks) <= limit
+    baseline = await list_tasks(
+        pagination=Pagination(limit=limit + offset, offset=0),
+        expdb=expdb_test,
+    )
+    assert [task["task_id"] for task in tasks] == [
+        task["task_id"] for task in baseline
+    ][offset : offset + limit]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/task_list_test.py` around lines 213 - 216, The test
currently only asserts len(tasks) <= limit which doesn't verify offset behavior;
update test_list_tasks_pagination to compute a baseline page by calling
list_tasks(pagination=Pagination(limit=limit, offset=0), expdb=expdb_test), then
build the expected slice using offset and limit (e.g.,
baseline[offset:offset+limit]) and assert that the returned tasks from
list_tasks(pagination=Pagination(limit=limit, offset=offset), expdb=expdb_test)
have the same IDs/order as that expected slice; use the list_tasks and
Pagination symbols to locate the calls and compare identifiers rather than only
length.

57-60: Correct the NoResultsError code in this docstring.

Line 59 says 482, but this path maps NoResultsError to HTTP 404 and legacy code 372. Keeping the wrong code here will make failure triage misleading.

✏️ Suggested wording
-    A limit clamped to 0 returns a 482 NoResultsError (404 Not Found).
+    A limit clamped to 0 raises NoResultsError, which the API maps to HTTP 404.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/task_list_test.py` around lines 57 - 60, Update the
docstring under the "Negative pagination values..." test to correct the HTTP
code for NoResultsError: replace the incorrect "482" with the actual HTTP
mapping "404" and optionally note the legacy code "372" so it reads like "404
NoResultsError (legacy 372)"; locate the sentence referencing NoResultsError in
tests/routers/openml/task_list_test.py and edit that docstring accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 213-216: The test currently only asserts len(tasks) <= limit which
doesn't verify offset behavior; update test_list_tasks_pagination to compute a
baseline page by calling list_tasks(pagination=Pagination(limit=limit,
offset=0), expdb=expdb_test), then build the expected slice using offset and
limit (e.g., baseline[offset:offset+limit]) and assert that the returned tasks
from list_tasks(pagination=Pagination(limit=limit, offset=offset),
expdb=expdb_test) have the same IDs/order as that expected slice; use the
list_tasks and Pagination symbols to locate the calls and compare identifiers
rather than only length.
- Around line 57-60: Update the docstring under the "Negative pagination
values..." test to correct the HTTP code for NoResultsError: replace the
incorrect "482" with the actual HTTP mapping "404" and optionally note the
legacy code "372" so it reads like "404 NoResultsError (legacy 372)"; locate the
sentence referencing NoResultsError in tests/routers/openml/task_list_test.py
and edit that docstring accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 1c93713d-6602-4c2e-a3f5-3c5ff3a4858d

📥 Commits

Reviewing files that changed from the base of the PR and between d59466e and 17fc532.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 281-284: Update the test docstring to reflect the direct-call
behavior: replace the HTTP wording "return 404" with a description that the call
to list_tasks(...) raises NoResultsError when filters match nothing (e.g.,
"Filters matching nothing raise NoResultsError"). Keep reference to the test
function name test_list_tasks_no_results and the call signature using
Pagination(), expdb_test, and payload to ensure clarity.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 71322a7c-13b6-4ed2-8a2f-136770f92bb8

📥 Commits

Reviewing files that changed from the base of the PR and between 17fc532 and 62dec6d.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

@saathviksheerla
Contributor Author

Hi @PGijsbers, the PR is ready for your review.
Please let me know if you have any feedback.
Thanks!

Contributor

@PGijsbers PGijsbers left a comment


Looks good, but if I am not mistaken there is no 'happy path' API test that validates the JSON response of a successful request. Please point me to it if it exists (sorry if I missed it, I'm in a bit of a rush!), or add it otherwise.

@saathviksheerla
Contributor Author

saathviksheerla commented Apr 3, 2026

You are right, I overlooked it, sorry for that. I'll add the happy path test and push the changes. Thanks for pointing it out.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/routers/openml/task_list_test.py (1)

316-319: ⚠️ Potential issue | 🟡 Minor

Update the docstring to describe the direct-call path.

This test now verifies that list_tasks(...) raises NoResultsError directly, not that the API maps it to HTTP 404.

✏️ Suggested change
 async def test_list_tasks_no_results(payload: dict[str, Any], expdb_test: AsyncConnection) -> None:
-    """Filters matching nothing return 404 NoResultsError."""
+    """Filters matching nothing raise NoResultsError on direct calls."""
     with pytest.raises(NoResultsError):
         await list_tasks(pagination=Pagination(), expdb=expdb_test, **payload)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/task_list_test.py` around lines 316 - 319, Update the
docstring of test_list_tasks_no_results to state that the test verifies the
direct call to list_tasks(...) raises NoResultsError (not that the HTTP API
returns 404); mention the function under test (list_tasks) and that the test
asserts the direct exception path when called with Pagination() and expdb_test,
so readers know this is a unit-level check rather than an API mapping test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 261-270: In test_list_tasks_pagination_order_stable replace the
conditional guard with explicit assertions so the cross-page ordering is always
checked: after computing ids1 and ids2, assert that both lists are non-empty
(e.g., assert ids1 and ids2) and then unconditionally assert max(ids1) <
min(ids2); this ensures list_tasks with Pagination(limit=5, offset=0) and
Pagination(limit=5, offset=5) actually return pages and that the highest id on
the first page is less than the lowest id on the second page.

---

Duplicate comments:
In `@tests/routers/openml/task_list_test.py`:
- Around line 316-319: Update the docstring of test_list_tasks_no_results to
state that the test verifies the direct call to list_tasks(...) raises
NoResultsError (not that the HTTP API returns 404); mention the function under
test (list_tasks) and that the test asserts the direct exception path when
called with Pagination() and expdb_test, so readers know this is a unit-level
check rather than an API mapping test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: ac28fec4-a816-488e-8681-9f187c4582c8

📥 Commits

Reviewing files that changed from the base of the PR and between 62dec6d and ed857e0.

📒 Files selected for processing (1)
  • tests/routers/openml/task_list_test.py

Comment on lines +261 to 270
```diff
 async def test_list_tasks_pagination_order_stable(expdb_test: AsyncConnection) -> None:
     """Results are ordered by task_id — consecutive pages are in ascending order."""
-    r1 = await py_api.post("/tasks/list", json={"pagination": {"limit": 5, "offset": 0}})
-    r2 = await py_api.post("/tasks/list", json={"pagination": {"limit": 5, "offset": 5}})
-    ids1 = [t["task_id"] for t in r1.json()]
-    ids2 = [t["task_id"] for t in r2.json()]
+    tasks1 = await list_tasks(pagination=Pagination(limit=5, offset=0), expdb=expdb_test)
+    tasks2 = await list_tasks(pagination=Pagination(limit=5, offset=5), expdb=expdb_test)
+    ids1 = [t["task_id"] for t in tasks1]
+    ids2 = [t["task_id"] for t in tasks2]
     assert ids1 == sorted(ids1)
     assert ids2 == sorted(ids2)
     if ids1 and ids2:
         assert max(ids1) < min(ids2)
```
Contributor


⚠️ Potential issue | 🟡 Minor

Make the cross-page ordering assertion unconditional.

The if ids1 and ids2: guard means this test still passes if the offset=5 page unexpectedly comes back empty, so it stops verifying the page boundary it was added for.

✏️ Suggested change
```diff
 async def test_list_tasks_pagination_order_stable(expdb_test: AsyncConnection) -> None:
     """Results are ordered by task_id — consecutive pages are in ascending order."""
     tasks1 = await list_tasks(pagination=Pagination(limit=5, offset=0), expdb=expdb_test)
     tasks2 = await list_tasks(pagination=Pagination(limit=5, offset=5), expdb=expdb_test)
     ids1 = [t["task_id"] for t in tasks1]
     ids2 = [t["task_id"] for t in tasks2]
+    assert ids1, "expected the first page to contain tasks"
+    assert ids2, "expected the second page to contain tasks"
     assert ids1 == sorted(ids1)
     assert ids2 == sorted(ids2)
-    if ids1 and ids2:
-        assert max(ids1) < min(ids2)
+    assert max(ids1) < min(ids2)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/task_list_test.py` around lines 261 - 270, In
test_list_tasks_pagination_order_stable replace the conditional guard with
explicit assertions so the cross-page ordering is always checked: after
computing ids1 and ids2, assert that both lists are non-empty (e.g., assert ids1
and ids2) and then unconditionally assert max(ids1) < min(ids2); this ensures
list_tasks with Pagination(limit=5, offset=0) and Pagination(limit=5, offset=5)
actually return pages and that the highest id on the first page is less than the
lowest id on the second page.
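Why the guard weakens the test can be shown with a small self-contained sketch. `check_pages` below is an illustrative helper, not part of the actual test suite; `guarded=True` mimics the original `if ids1 and ids2:` guard:

```python
def check_pages(ids1, ids2, *, guarded):
    """Run the cross-page ordering assertions on two pages of ids."""
    assert ids1 == sorted(ids1)
    assert ids2 == sorted(ids2)
    if guarded:
        if ids1 and ids2:  # skipped entirely when either page is empty
            assert max(ids1) < min(ids2)
    else:
        assert ids1, "expected the first page to contain tasks"
        assert ids2, "expected the second page to contain tasks"
        assert max(ids1) < min(ids2)

# Both variants pass when both pages have rows in ascending order.
check_pages([1, 2, 3, 4, 5], [6, 7, 8, 9, 10], guarded=True)
check_pages([1, 2, 3, 4, 5], [6, 7, 8, 9, 10], guarded=False)

# An unexpectedly empty second page: the guarded variant silently passes...
check_pages([1, 2, 3, 4, 5], [], guarded=True)

# ...while the unguarded variant fails loudly, catching the regression.
try:
    check_pages([1, 2, 3, 4, 5], [], guarded=False)
    raise SystemExit("expected an AssertionError")
except AssertionError:
    pass
```

The unconditional form turns "the page boundary is ordered" from a check that can be skipped into one that must hold.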

@saathviksheerla
Contributor Author

Hi @PGijsbers, I have added the happy path test now; please review.
Thanks!



Development

Successfully merging this pull request may close these issues.

[maintenance] Refactor task list tests with direct method calls

2 participants