Conversation
Slack notification for tasks marked complete
anselrognlie left a comment
Looks good! Please review my comments, and let me know if you have any questions. Nice job!
```python
# ------ Added Tests -------
def test_slack_notif_successful(client, one_task):
```
There is a test for this scenario in wave 3. In wave 3, we "mock" the requests package so that a message isn't actually sent to Slack every time we run our tests. We should avoid writing tests that have observable external results.
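For reference, that wave-3 style of test might look roughly like the sketch below. The patch target app.task_routes.requests.post and the /tasks/1/mark_complete path are assumptions; adjust them to wherever requests is actually imported and whatever the route is named.

```python
from unittest.mock import patch


def test_mark_complete_sends_notification_without_real_request(client, one_task):
    # Swap out requests.post so the Slack call never leaves the test process.
    with patch("app.task_routes.requests.post") as mock_post:
        mock_post.return_value.status_code = 200
        response = client.patch("/tasks/1/mark_complete")

    assert response.status_code == 200
    # The route should still have *tried* to notify Slack exactly once.
    mock_post.assert_called_once()
```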
```python
def test_send_slack_notif_helper_success(client):
    task = Task(title="Task", description="Task description")
    response = send_slack_notif(task)
```
In principle, this is a nice targeted test of the helper method that sends a Slack message. However, this will actually send a Slack message every time we run our tests! We should avoid writing tests that have observable external results (or, in general, that depend on external functionality). We accomplished this in wave 3 by "mocking" the requests package so that no request was ever actually sent.
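Here's one way this targeted helper test could keep its shape while stubbing out the network call (a sketch; it assumes the helper lives in app.task_routes, imports requests there, and returns the requests response, so rename the patch target and import to match the real module):

```python
from unittest.mock import patch

from app.models.task import Task
from app.task_routes import send_slack_notif  # assumed location of the helper


def test_send_slack_notif_helper_success(client):
    task = Task(title="Task", description="Task description")

    # Replace requests.post with a mock so no message is actually delivered to Slack.
    with patch("app.task_routes.requests.post") as mock_post:
        mock_post.return_value.status_code = 200
        response = send_slack_notif(task)

    mock_post.assert_called_once()
    # If the helper returns the requests response, it carries the mocked status code.
    assert response.status_code == 200
```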
```
@@ -1,9 +1,10 @@
from app.models.task import Task
from sqlalchemy import null
```
Watch out for imports that you don't eventually use. VS Code itself can bring in imports in response to autocomplete selections, so if you ever accidentally pick an unintended autocomplete, it's good to check whether an import was also added.
```python
assert response_body == {
    "message": "Task 1 not found"
}
```
Checking for the "not found" message is the main behavior we want to verify. If we were particularly paranoid, we might also check that no record was surreptitiously added to the database. This is less of a concern for the other "not found" tests, since they don't involve any logic that could conceivably add a new record, but since PUT replaces data, an implementer might have added logic to create the record if it were missing. That is, they could treat a PUT to a nonexistent record as a request to create that particular record.
Again, that's not a requirement, but it is something to keep in mind.
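If we did want that extra paranoia, the check could look something like this sketch (assuming the test database starts empty and the tests file already imports Task):

```python
def test_update_task_not_found_does_not_create_record(client):
    response = client.put("/tasks/1", json={
        "title": "Updated Task Title",
        "description": "Updated Test Description",
    })

    assert response.status_code == 404
    # A PUT to a missing id should report "not found" without quietly adding a row.
    assert Task.query.count() == 0
```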
```python
# -------- Added Tests ----------
def test_get_tasks_name_param(client, one_task, three_tasks):
```
👍 Nice to see these additional tests for your optional filtering enhancement!
```python
if TYPE_CHECKING:
    from .goal import Goal
```
👍 Nice use of this snippet to suppress the squiggles in VS Code.
```python
if self.goal_id:
    task_as_dict["goal_id"] = self.goal_id
```
👍 Nice logic to add the goal data to the task dictionary only when it's present.
```python
for task in goal.tasks:
    task.goal_id = None

tasks = [validate_model(Task, id) for id in request_body["task_ids"]]
```
👍 Nice way to validate each of the task ids, getting an actual Task model.
```python
for task in tasks:
    task.goal_id = goal_id
```
We could make use of the tasks relationship to replace all of the associated tasks in one fell swoop. Once we have a list of Task models, we can assign that list directly to the relationship:

```python
goal.tasks = tasks
```

```python
response = goal.to_dict()
response["tasks"] = [task.to_dict() for task in goal.tasks]
```
👍 Nice reuse of your to_dict methods to build up your response. Rather than leaving this logic here in the route, we could move it to an additional Goal method. Perhaps to_dict_with_tasks, or we could add a parameter to the main Goal to_dict that determines whether or not to include the Task details.
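For example, the parameterized version might look roughly like this (a sketch; the column names and relationship shown are placeholders for whatever the real Goal model declares):

```python
from app import db  # assuming db is the Flask-SQLAlchemy instance from app/__init__.py


class Goal(db.Model):
    goal_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    title = db.Column(db.String)
    tasks = db.relationship("Task", back_populates="goal")

    def to_dict(self, include_tasks=False):
        goal_as_dict = {
            "id": self.goal_id,
            "title": self.title,
        }

        # Only nest the task data when the caller asks for it, so the
        # plain /goals endpoints keep their original response shape.
        if include_tasks:
            goal_as_dict["tasks"] = [task.to_dict() for task in self.tasks]

        return goal_as_dict
```

The route above could then shrink to `response = goal.to_dict(include_tasks=True)`.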