From e38306df11045a9132772a74c3c995cce3f4c07f Mon Sep 17 00:00:00 2001 From: SylviaWhittle Date: Fri, 17 Oct 2025 14:44:22 +0100 Subject: [PATCH 1/3] Add requirements.txt --- requirements.txt | 4 ++++ 1 file changed, 4 insertions(+) create mode 100644 requirements.txt diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 00000000..a560e452 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,4 @@ +numpy +pandas +matplotlib +pre-commit From 29f220fdbe63d01301914740dd7d2c62db8a940b Mon Sep 17 00:00:00 2001 From: SylviaWhittle Date: Fri, 17 Oct 2025 14:45:58 +0100 Subject: [PATCH 2/3] Add basic pre-commit configuration --- .pre-commit-config.yaml | 10 ++++++++++ 1 file changed, 10 insertions(+) create mode 100644 .pre-commit-config.yaml diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 00000000..fd16ba2d --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,10 @@ +# See https://pre-commit.com for more information +# See https://pre-commit.com/hooks.html for more hooks +repos: +- repo: https://github.com/pre-commit/pre-commit-hooks + rev: v3.2.0 + hooks: + - id: trailing-whitespace + - id: end-of-file-fixer + - id: check-yaml + - id: check-added-large-files From 4052af7ed01abbcd2184de7fbbbc5b1ea2119790 Mon Sep 17 00:00:00 2001 From: SylviaWhittle Date: Fri, 17 Oct 2025 16:13:12 +0100 Subject: [PATCH 3/3] Lint the text & markdown files & fix spelling --- .github/workflows/README.md | 62 +++++++++--------- .gitignore | 2 +- .markdownlint-cli2.yaml | 23 +++++++ .pre-commit-config.yaml | 34 +++++++--- CONTRIBUTING.md | 11 ++-- README.md | 38 ++++++----- config.yaml | 2 - episodes/00-introduction.Rmd | 9 ++- episodes/01-why-test-my-code.Rmd | 17 +++-- episodes/02-simple-tests.Rmd | 9 ++- episodes/03-interacting-with-tests.Rmd | 15 +++-- episodes/04-unit-tests-best-practices.Rmd | 63 +++++++++---------- episodes/05-testing-exceptions.Rmd | 11 ++-- episodes/06-testing-data-structures.Rmd | 19 +++--- episodes/07-fixtures.Rmd | 15 +++-- episodes/08-parametrization.Rmd | 9 ++- episodes/09-testing-output-files.Rmd | 15 +++-- episodes/10-CI.Rmd | 11 ++-- index.md | 1 + instructors/instructor-notes.md | 2 +- .../files/05-testing-exceptions/calculator.py | 1 - .../files/06-data-structures/calculator.py | 1 - learners/files/07-fixtures/calculator.py | 1 - .../files/08-parametrization/calculator.py | 1 - .../09-testing-output-files/calculator.py | 1 - ...est_stats.test_very_complex_processing.out | 2 +- learners/reference.md | 2 - learners/setup.md | 33 ++++++---- links.md | 10 +-- profiles/learner-profiles.md | 2 +- site/README.md | 2 +- 31 files changed, 227 insertions(+), 197 deletions(-) create mode 100644 .markdownlint-cli2.yaml diff --git a/.github/workflows/README.md b/.github/workflows/README.md index 7076ddd9..45eb96cb 100644 --- a/.github/workflows/README.md +++ b/.github/workflows/README.md @@ -10,7 +10,7 @@ R console: ```r # Install/Update sandpaper -options(repos = c(carpentries = "https://carpentries.r-universe.dev/", +options(repos = c(carpentries = "https://carpentries.r-universe.dev/", CRAN = "https://cloud.r-project.org")) install.packages("sandpaper") @@ -42,13 +42,13 @@ This workflow does the following: #### Caching -This workflow has two caches; one cache is for the lesson infrastructure and +This workflow has two caches; one cache is for the lesson infrastructure and the other is for the the lesson dependencies if the lesson contains rendered content. 
These caches are invalidated by new versions of the infrastructure and -the `renv.lock` file, respectively. If there is a problem with the cache, +the `renv.lock` file, respectively. If there is a problem with the cache, manual invaliation is necessary. You will need maintain access to the repository and you can either go to the actions tab and [click on the caches button to find -and invalidate the failing cache](https://github.blog/changelog/2022-10-20-manage-caches-in-your-actions-workflows-from-web-interface/) +and invalidate the failing cache](https://github.blog/changelog/2022-10-20-manage-caches-in-your-actions-workflows-from-web-interface/) or by setting the `CACHE_VERSION` secret to the current date (which will invalidate all of the caches). @@ -58,32 +58,32 @@ invalidate all of the caches). These workflows run on a schedule and at the maintainer's request. Because they create pull requests that update workflows/require the downstream actions to run, -they need a special repository/organization secret token called -`SANDPAPER_WORKFLOW` and it must have the `public_repo` and `workflow` scope. +they need a special repository/organization secret token called +`SANDPAPER_WORKFLOW` and it must have the `public_repo` and `workflow` scope. This can be an individual user token, OR it can be a trusted bot account. If you have a repository in one of the official Carpentries accounts, then you do not need to worry about this token being present because the Carpentries Core Team will take care of supplying this token. -If you want to use your personal account: you can go to +If you want to use your personal account: you can go to to create a token. Once you have created your token, you should copy it to your clipboard and then go to your repository's settings > secrets > actions and create or edit the `SANDPAPER_WORKFLOW` secret, pasting in the generated token. If you do not specify your token correctly, the runs will not fail and they will -give you instructions to provide the token for your repository. +give you instructions to provide the token for your repository. ### 02 Maintain: Update Workflow Files (update-workflow.yaml) -The {sandpaper} repository was designed to do as much as possible to separate -the tools from the content. For local builds, this is absolutely true, but -there is a minor issue when it comes to workflow files: they must live inside -the repository. +The {sandpaper} repository was designed to do as much as possible to separate +the tools from the content. For local builds, this is absolutely true, but +there is a minor issue when it comes to workflow files: they must live inside +the repository. This workflow ensures that the workflow files are up-to-date. The way it work is -to download the update-workflows.sh script from GitHub and run it. The script +to download the update-workflows.sh script from GitHub and run it. The script will do the following: 1. check the recorded version of sandpaper against the current version on github @@ -100,25 +100,25 @@ This update is run weekly or on demand. For lessons that have generated content, we use {renv} to ensure that the output is stable. This is controlled by a single lockfile which documents the packages -needed for the lesson and the version numbers. This workflow is skipped in +needed for the lesson and the version numbers. This workflow is skipped in lessons that do not have generated content. 
Because the lessons need to remain current with the package ecosystem, it's a -good idea to make sure these packages can be updated periodically. The +good idea to make sure these packages can be updated periodically. The update cache workflow will do this by checking for updates, applying them in a branch called `updates/packages` and creating a pull request with _only the -lockfile changed_. +lockfile changed_. From here, the markdown documents will be rebuilt and you can inspect what has -changed based on how the packages have updated. +changed based on how the packages have updated. ## Pull Request and Review Management -Because our lessons execute code, pull requests are a secruity risk for any -lesson and thus have security measures associted with them. **Do not merge any +Because our lessons execute code, pull requests are a security risk for any +lesson and thus have security measures associated with them. **Do not merge any pull requests that do not pass checks and do not have bots commented on them.** -This series of workflows all go together and are described in the following +This series of workflows all go together and are described in the following diagram and the below sections: ![Graph representation of a pull request](https://carpentries.github.io/sandpaper/articles/img/pr-flow.dot.svg) @@ -129,22 +129,22 @@ This workflow runs every time a pull request is created and its purpose is to validate that the pull request is okay to run. This means the following things: 1. The pull request does not contain modified workflow files -2. If the pull request contains modified workflow files, it does not contain +2. If the pull request contains modified workflow files, it does not contain modified content files (such as a situation where @carpentries-bot will make an automated pull request) 3. The pull request does not contain an invalid commit hash (e.g. from a fork that was made before a lesson was transitioned from styles to use the workbench). -Once the checks are finished, a comment is issued to the pull request, which -will allow maintainers to determine if it is safe to run the +Once the checks are finished, a comment is issued to the pull request, which +will allow maintainers to determine if it is safe to run the "Receive Pull Request" workflow from new contributors. ### Receive Pull Request (pr-receive.yaml) **Note of caution:** This workflow runs arbitrary code by anyone who creates a pull request. GitHub has safeguarded the token used in this workflow to have no -priviledges in the repository, but we have taken precautions to protect against +privileges in the repository, but we have taken precautions to protect against spoofing. This workflow is triggered with every push to a pull request. If this workflow @@ -154,8 +154,8 @@ started. The first step of this workflow is to check if it is valid (e.g. that no workflow files have been modified). If there are workflow files that have been -modified, a comment is made that indicates that the workflow is not run. If -both a workflow file and lesson content is modified, an error will occurr. +modified, a comment is made that indicates that the workflow is not run. If +both a workflow file and lesson content is modified, an error will occur. The second step (if valid) is to build the generated content from the pull request. This builds the content and uploads three artifacts: @@ -164,7 +164,7 @@ request. This builds the content and uploads three artifacts: 2. A summary of changes after the rendering process (diff) 3. 
The rendered files (build) -Because this workflow builds generated content, it follows the same general +Because this workflow builds generated content, it follows the same general process as the `sandpaper-main` workflow with the same caching mechanisms. The artifacts produced are used by the next workflow. @@ -183,9 +183,9 @@ The steps in this workflow are: Importantly: if the pull request is invalid, the branch is not created so any malicious code is not published. -From here, the maintainer can request changes from the author and eventually -either merge or reject the PR. When this happens, if the PR was valid, the -preview branch needs to be deleted. +From here, the maintainer can request changes from the author and eventually +either merge or reject the PR. When this happens, if the PR was valid, the +preview branch needs to be deleted. ### Send Close PR Signal (pr-close-signal.yaml) @@ -194,5 +194,5 @@ pull request number for the next action ### Remove Pull Request Branch (pr-post-remove-branch.yaml) -Tiggered by `pr-close-signal.yaml`. This removes the temporary branch associated with +Triggered by `pr-close-signal.yaml`. This removes the temporary branch associated with the pull request (if it was created). diff --git a/.gitignore b/.gitignore index b1213ffc..d66e21ea 100644 --- a/.gitignore +++ b/.gitignore @@ -61,4 +61,4 @@ __pycache__/ .dir-locals.el # OSX -.DS_Store \ No newline at end of file +.DS_Store diff --git a/.markdownlint-cli2.yaml b/.markdownlint-cli2.yaml new file mode 100644 index 00000000..4e741871 --- /dev/null +++ b/.markdownlint-cli2.yaml @@ -0,0 +1,23 @@ +config: + + line_length: + line_length: 120 + code_blocks: false + tables: false + html: + allowed_elements: + - div + no-duplicate-heading: false + +globs: + +- "**/*.md" +- "*.md" + +ignores: + +- "site/**/*.md" +- ".github/**/*.md" +- "renv/**/*.md" + +fix: true diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index fd16ba2d..e8e84700 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -1,10 +1,30 @@ # See https://pre-commit.com for more information # See https://pre-commit.com/hooks.html for more hooks repos: -- repo: https://github.com/pre-commit/pre-commit-hooks - rev: v3.2.0 - hooks: - - id: trailing-whitespace - - id: end-of-file-fixer - - id: check-yaml - - id: check-added-large-files +- repo: https://github.com/pre-commit/pre-commit-hooks + rev: v6.0.0 + hooks: + - id: check-added-large-files + args: ['--maxkb=2048'] + - id: check-merge-conflict + - id: check-yaml + - id: end-of-file-fixer + - id: no-commit-to-branch + - id: trailing-whitespace + +- repo: https://github.com/DavidAnson/markdownlint-cli2 + rev: v0.18.1 + hooks: + - id: markdownlint-cli2 + args: [] +- repo: https://github.com/codespell-project/codespell + rev: v2.4.1 + hooks: + - id: codespell + +ci: + autofix_prs: true + autofix_commit_msg: '[pre-commit.ci] Fixing issues with pre-commit' + autoupdate_schedule: weekly + autoupdate_commit_msg: '[pre-commit.ci] pre-commit-autoupdate' + skip: [] # Optionally list ids of hooks to skip on CI diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 84c56141..0994189a 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,10 +1,10 @@ -## Contributing +# Contributing We welcome contributions to this open educational resource: fixes to existing material, bug reports, and reviews of proposed changes are all welcome. -### Contributor Agreement +## Contributor Agreement By contributing, you agree that we may redistribute your work under [our license](LICENSE.md). 
In exchange, we will address your issues and/or assess @@ -12,7 +12,7 @@ your change proposal as promptly as we can, and help you become a member of our community. All contributors agree to abide by our [code of conduct](CODE_OF_CONDUCT.md). -### How to Contribute +## How to Contribute 1. If you do not have a [GitHub][github] account, you can [send us comments by email][contact]. However, we will be able to respond more quickly if you use @@ -28,9 +28,9 @@ our [code of conduct](CODE_OF_CONDUCT.md). [included below](#using-github). Note: if you want to build the website locally, please refer to [The Workbench -documentation][template-doc]. +documentation](template-doc). -### Using GitHub +## Using GitHub If you choose to contribute via GitHub, you may want to look at [How to Contribute to an Open Source Project on GitHub][how-contribute]. In brief, we @@ -53,4 +53,3 @@ NB: The published copy of the lesson is usually in the `main` branch. [github-flow]: https://guides.github.com/introduction/flow/ [github-join]: https://github.com/join [how-contribute]: https://egghead.io/courses/how-to-contribute-to-an-open-source-project-on-github -[template-doc]: https://carpentries.github.io/workbench/ diff --git a/README.md b/README.md index 363fa64f..bd9c868e 100644 --- a/README.md +++ b/README.md @@ -6,11 +6,14 @@ This lesson uses [The Carpentries Workbench][workbench] template. ## Course Description -Whether you are a seasoned developer or just write the occasional script, it's important to know that your code does what you intend, and will continue to do so as you make changes. +Whether you are a seasoned developer or just write the occasional script, it's important to know that your code does +what you intend, and will continue to do so as you make changes. -Software testing is a methodology of automatically ensuring that your code works correctly, without having to go back and manually verify after each change. +Software testing is a methodology of automatically ensuring that your code works correctly, without having to go back +and manually verify after each change. -This course seeks to provide you with conceptual understanding and the tools you need to start ensuring the robustness of your code. +This course seeks to provide you with conceptual understanding and the tools you need to start ensuring the robustness +of your code. ### Contents @@ -26,28 +29,31 @@ This course seeks to provide you with conceptual understanding and the tools you ## Contributions -Contributions are welcome, please refer to the [contribution guidelines](CONTRIBUTING.md) of how to do so and ensure that you adhere to the [Code of Conduct](CODE_OF_CONDUCT.md). +Contributions are welcome, please refer to the [contribution guidelines](CONTRIBUTING.md) of how to do so and ensure +that you adhere to the [Code of Conduct](CODE_OF_CONDUCT.md). ### Build the lesson locally -To render the lesson locally, you will need to have [R][r] installed. Instructions for using R with the Carpentries template is [available](https://carpentries.github.io/workbench/#installation) but some additional setps have been taken to make sure the enivronment is reproducible using the [`{renv}`](https://rstudio.github.io/renv/articles/renv.html) package and an `renv.lockfile` is included which allows the environment to be re-created along with dependencies. +To render the lesson locally, you will need to have [R][rlang] installed. 
Instructions for using R with the Carpentries +template is [available](https://carpentries.github.io/workbench/#installation) but some additional setps have been +taken to make sure the environment is reproducible using the +[`{renv}`](https://rstudio.github.io/renv/articles/renv.html) package and an `renv.lockfile` is included which allows +the environment to be re-created along with dependencies. After cloning the repository, you can set up the `renv` and install all packages with: -``` r + renv::restore() -# Optionally update packages + renv::update() + ``` -Once you have installed the dependencies, you can render the pages locally by starting R in the project root and running: + +Once you have installed the dependencies, you can render the pages locally by starting R in the project root and +running: ``` r sandpaper::serve() -``` -This will build the pages and start a local web-server in R and open it in your browser. These pages are "live" and will respond to local file changes if you save them. +This will build the pages and start a local web-server in R and open it in your browser. These pages are "live" and +will respond to local file changes if you save them. -[git]: https://git-scm.com -[r]: https://www.r-project.org/ +[rlang]: https://www.r-project.org/ [workbench]: https://carpentries.github.io/workbench/ - - - - diff --git a/config.yaml b/config.yaml index d370fac4..ccebc7a1 100644 --- a/config.yaml +++ b/config.yaml @@ -86,5 +86,3 @@ profiles: # sandpaper and varnish versions) should live varnish: RSE-Sheffield/uos-varnish@main url: 'https://sylviawhittle.github.io/python-testing-for-research/' - - diff --git a/episodes/00-introduction.Rmd b/episodes/00-introduction.Rmd index eae28034..c83b841b 100644 --- a/episodes/00-introduction.Rmd +++ b/episodes/00-introduction.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - What are the goals of this course? @@ -29,7 +29,7 @@ By the end of this course, you should: - Understand how testing can be used to improve code & research reliability - Be comfortable with writing basic tests & running them - Be able to construct a simple Python project that incorporates tests -- Be familiar with testing best practices such as unit testing & the AAA pattern +- Be familiar with testing best practices such as unit testing & the AAA pattern - Be aware of more advanced testing features such as fixtures & parametrization - Understand what Continuous Integration is and why it is useful - Be able to add testing to a GitHub repository with simple Continuous Integration @@ -37,7 +37,7 @@ By the end of this course, you should: ## Code of Conduct -This course is covered by the [Carpentries Code of Conduct](https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html). +This course is covered by the [Carpentries Code of Conduct](https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html). 
As mentioned in the Carpentries Code of Conduct, we encourage you to: @@ -70,11 +70,10 @@ This course uses blocks like the one below to indicate an exercise for you to at :::::::::::::::::::::::::::::::::::::::::::::::: -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - This course will teach you how to write effective tests and ensure the quality and reliability of your research software - No prior testing experience is required - You can catch up on practicals by copying the corresponding folder from the `files` directory of this course's materials :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/01-why-test-my-code.Rmd b/episodes/01-why-test-my-code.Rmd index 8ee4886c..5071c5d8 100644 --- a/episodes/01-why-test-my-code.Rmd +++ b/episodes/01-why-test-my-code.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - Why should I test my code? @@ -27,7 +27,7 @@ This might seem like a lot of effort, so let's go over some of the reasons you m ## Catching bugs -Whether you are writing the occasional script or developing a large software, mistakes are inevitable. Sometimes you don't even know when a mistake creeps into the code, and it gets published. +Whether you are writing the occasional script or developing a large software, mistakes are inevitable. Sometimes you don't even know when a mistake creeps into the code, and it gets published. Consider the following function: @@ -66,10 +66,10 @@ def test_add(): print("Test failed!") ``` -Here we check that the function works for a set of test cases. We ensure that it works for positive numbers, negative numbers, and zero. +Here we check that the function works for a set of test cases. We ensure that it works for positive numbers, negative numbers, and zero. -::::::::::::::::::::::::::::::::::::: challenge +::::::::::::::::::::::::::::::::::::: challenge ## Challenge 1: What could go wrong? @@ -87,10 +87,10 @@ def gradient(x1, y1, x2, y2): return (y2 - y1) / (x2 - x1) ``` -:::::::::::::::::::::::: solution +:::::::::::::::::::::::: solution ## Answer - + The first function will incorrectly greet the user, as it is missing a space after "Hello". It would print `HelloAlice!` instead of `Hello Alice!`. If we wrote a test for this function, we would have noticed that it was not working as expected: @@ -171,7 +171,7 @@ def drive_car(speed, direction): ... # complex car driving code return speed, direction, brake_status - + ``` @@ -186,7 +186,7 @@ def drive_car(speed, direction): ::::::::::::::::::::::::::::::::::::: :::::::::::::::::::::::::::::::::::::::: -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - Automated testing helps to catch hard to spot errors in code & find the root cause of complex issues. - Tests reduce the time spent manually verifying (and re-verifying!) that code works. @@ -194,4 +194,3 @@ def drive_car(speed, direction): - Tests are especially useful when working in a team, as they help to ensure that everyone can trust the code. 
:::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/02-simple-tests.Rmd b/episodes/02-simple-tests.Rmd index 1f7d0420..538cd1dc 100644 --- a/episodes/02-simple-tests.Rmd +++ b/episodes/02-simple-tests.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - How to write a simple test? - How to run the test? @@ -151,13 +151,13 @@ What's more, is that if any of these assert statements fail, it will flag to pytest that the test has failed, and pytest will let you know. -Make the `add` function return the wrong value, and run the test again to see that the test +Make the `add` function return the wrong value, and run the test again to see that the test fails and the text turns **red** as we expect. So if this was a real testing situation, we would know to investigate the `add` function to see why it's not behaving as expected. -::::::::::::::::::::::::::::::::::::: challenge +::::::::::::::::::::::::::::::::::::: challenge ## Challenge 2: Write a test for a multiply function @@ -199,7 +199,7 @@ def test_multiply(): Run the test using `pytest ./` to check that it passes. If it doesn't, don't worry, that's the point of testing - to find bugs in code. -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - The `assert` keyword is used to check if a statement is true and is a shorthand for writing `if` statements in tests. - Pytest is invoked by running the command `pytest ./` in the terminal. @@ -208,4 +208,3 @@ Run the test using `pytest ./` to check that it passes. If it doesn't, don't wor - It's best practice to write tests in a separate file from the code they are testing. Eg: `scripts.py` and `test_scripts.py`. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/03-interacting-with-tests.Rmd b/episodes/03-interacting-with-tests.Rmd index c91b5668..13197684 100644 --- a/episodes/03-interacting-with-tests.Rmd +++ b/episodes/03-interacting-with-tests.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - How do I use pytest to run my tests? - What does the output of pytest look like and how do I interpret it? @@ -102,7 +102,7 @@ collected 3 items - This simply tells us that 3 tests have been found and are ready to be run. ``` -advanced/test_advanced_calculator.py . +advanced/test_advanced_calculator.py . test_calculator.py .. [100%] ``` - These two lines tells us that the tests in `test_calculator.py` and `advanced/test_advanced_calculator.py` have passed. Each `.` means that a test has passed. There are two of them beside `test_calculator.py` because there are two tests in `test_calculator.py` If a test fails, it will show an `F` instead of a `.`. @@ -131,7 +131,7 @@ But now we see that the tests have failed: ``` advanced/test_advanced_calculator.py . [ 33%] -test_calculator.py F. +test_calculator.py F. ``` These `F` tells us that a test has failed. The output then tells us which test has failed: @@ -149,14 +149,14 @@ E + where -1 = add(1, 2) test_calculator.py:21: AssertionError ``` -This is where we get detailled information about what exactly broke in the test. +This is where we get detailed information about what exactly broke in the test. - The `>` chevron points to the line that failed in the test. In this case, the assertion `assert add(1, 2) == 3` failed. 
- The following line tells us what the assertion tried to do. In this case, it tried to assert that the number -1 was equal to 3. Which of course it isn't. - The next line goes into more detail about why it tried to equate -1 to 3. It tells us that -1 is the result of calling `add(1, 2)`. - The final line tells us where the test failed. In this case, it was on line 21 of `test_calculator.py`. -Using this detailled output, we can quickly find the exact line that failed and know the inputs that caused the failure. From there, we can examine exactly what went wrong and fix it. +Using this detailed output, we can quickly find the exact line that failed and know the inputs that caused the failure. From there, we can examine exactly what went wrong and fix it. Finally, pytest prints out a short summary of all the failed tests: ``` @@ -182,7 +182,7 @@ Matplotlib: 3.9.0 Freetype: 2.6.1 rootdir: /Users/sylvi/Documents/GitKraken/python-testing-for-research/episodes/files/03-interacting-with-tests.Rmd plugins: mpl-0.17.0, regtest-2.1.1 -collected 1 item / 1 error +collected 1 item / 1 error === ERRORS === ___ ERROR collecting test_calculator.py ___ @@ -243,7 +243,7 @@ Try running pytest with the above options, editing the code to make the tests fa ::::::::::::::::::::::::::::::::::::::::: -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - You can run multiple tests at once by running `pytest` in the terminal. - Pytest searches for tests in files that start or end with 'test' in the current directory and subdirectories. @@ -251,4 +251,3 @@ Try running pytest with the above options, editing the code to make the tests fa - Flags such as `-v`, `-q`, `-k`, and `-x` can be used to get more detailed output, less detailed output, run specific tests, and stop running tests after the first failure, respectively. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/04-unit-tests-best-practices.Rmd b/episodes/04-unit-tests-best-practices.Rmd index 1cbe4af3..71f3e638 100644 --- a/episodes/04-unit-tests-best-practices.Rmd +++ b/episodes/04-unit-tests-best-practices.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - What to do about complex functions & tests? - What are some testing best practices for testing? @@ -40,7 +40,7 @@ def process_data(data: list, maximum_value: float): for i in range(len(data_negative_removed)): if data_negative_removed[i] <= maximum_value: data_maximum_removed.append(data_negative_removed[i]) - + # Calculate the mean mean = sum(data_maximum_removed) / len(data_maximum_removed) @@ -63,7 +63,7 @@ def test_process_data(): ``` -This test is very complex and hard to debug if it fails. Imagine if the calculation of the mean broke - the test would fail but it would not tell us what part of the function was broken, requiring us to +This test is very complex and hard to debug if it fails. Imagine if the calculation of the mean broke - the test would fail but it would not tell us what part of the function was broken, requiring us to check each function manually to find the bug. Not very efficient! 
## Unit Testing @@ -156,10 +156,10 @@ This makes your tests easier to read and understand for both yourself and others def test_calculate_mean(): # Arrange data = [1, 2, 3, 4, 5] - + # Act mean = calculate_mean(data) - + # Assert assert mean == 3 ``` @@ -190,10 +190,10 @@ Here is an example of the TDD process: def test_calculate_mean(): # Arrange data = [1, 2, 3, 4, 5] - + # Act mean = calculate_mean(data) - + # Assert assert mean == 3.5 ``` @@ -244,7 +244,7 @@ Random seeds work by setting the initial state of the random number generator. This means that if you set the seed to the same value, you will get the same sequence of random numbers each time you run the function. -::::::::::::::::::::::::::::::::::::: challenge +::::::::::::::::::::::::::::::::::::: challenge ## Challenge: Write your own unit tests @@ -258,21 +258,21 @@ Take this complex function, break it down and write unit tests for it. import random def randomly_sample_and_filter_participants( - participants: list, - sample_size: int, - min_age: int, - max_age: int, - min_height: int, + participants: list, + sample_size: int, + min_age: int, + max_age: int, + min_height: int, max_height: int ): """Participants is a list of tuples, containing the age and height of each participant participants = [ - {age: 25, height: 180}, - {age: 30, height: 170}, - {age: 35, height: 160}, + {age: 25, height: 180}, + {age: 30, height: 170}, + {age: 35, height: 160}, ] """ - + # Get the indexes to sample indexes = random.sample(range(len(participants)), sample_size) @@ -280,13 +280,13 @@ def randomly_sample_and_filter_participants( sampled_participants = [] for i in indexes: sampled_participants.append(participants[i]) - + # Remove participants that are outside the age range sampled_participants_age_filtered = [] for participant in sampled_participants: if participant['age'] >= min_age and participant['age'] <= max_age: sampled_participants_age_filtered.append(participant) - + # Remove participants that are outside the height range sampled_participants_height_filtered = [] for participant in sampled_participants_age_filtered: @@ -299,7 +299,7 @@ def randomly_sample_and_filter_participants( - Create a new file called `test_stats.py` in the `statistics` directory - Write unit tests for the `randomly_sample_and_filter_participants` function in `test_stats.py` -:::::::::::::::::::::::: solution +:::::::::::::::::::::::: solution The function can be broken down into smaller functions, each of which can be tested separately: @@ -307,7 +307,7 @@ The function can be broken down into smaller functions, each of which can be tes import random def sample_participants( - participants: list, + participants: list, sample_size: int ): indexes = random.sample(range(len(participants)), sample_size) @@ -317,8 +317,8 @@ def sample_participants( return sampled_participants def filter_participants_by_age( - participants: list, - min_age: int, + participants: list, + min_age: int, max_age: int ): filtered_participants = [] @@ -328,8 +328,8 @@ def filter_participants_by_age( return filtered_participants def filter_participants_by_height( - participants: list, - min_height: int, + participants: list, + min_height: int, max_height: int ): filtered_participants = [] @@ -339,11 +339,11 @@ def filter_participants_by_height( return filtered_participants def randomly_sample_and_filter_participants( - participants: list, - sample_size: int, - min_age: int, - max_age: int, - min_height: int, + participants: list, + sample_size: int, + min_age: int, + max_age: int, + 
min_height: int, max_height: int ): sampled_participants = sample_participants(participants, sample_size) @@ -447,7 +447,7 @@ When time is limited, it's often better to only write tests for the most critica You should discuss with your team how much of the code you think should be tested, and what the most critical parts of the code are in order to prioritize your time. -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - Complex functions can be broken down into smaller, testable units. - Testing each unit separately is called unit testing. @@ -457,4 +457,3 @@ You should discuss with your team how much of the code you think should be teste - Adding tests to an existing project can be done incrementally, starting with regression tests. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/05-testing-exceptions.Rmd b/episodes/05-testing-exceptions.Rmd index 88c1a673..750964d8 100644 --- a/episodes/05-testing-exceptions.Rmd +++ b/episodes/05-testing-exceptions.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - How to check that a function raises an exception? @@ -45,7 +45,7 @@ def test_square_root(): Here, `pytest.raises` is a context manager that checks that the code inside the `with` block raises a `ValueError` exception. If it doesn't, the test fails. -If you want to get more detailled with things, you can test what the error message says too: +If you want to get more detailed with things, you can test what the error message says too: ```python def test_square_root(): @@ -55,7 +55,7 @@ def test_square_root(): ``` -::::::::::::::::::::::::::::::::::::: challenge +::::::::::::::::::::::::::::::::::::: challenge ## Challenge : Ensure that the divide function raises a ZeroDivisionError when the denominator is zero. @@ -71,7 +71,7 @@ def divide(numerator, denominator): - Write a test in `test_calculator.py` that checks that the divide function raises a `ZeroDivisionError` when the denominator is zero. -:::::::::::::::::::::::: solution +:::::::::::::::::::::::: solution ```python import pytest @@ -87,9 +87,8 @@ def test_divide(): :::::::::::::::::::::::::::::::::::::::::::::::: -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - Use `pytest.raises` to check that a function raises an exception. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/06-testing-data-structures.Rmd b/episodes/06-testing-data-structures.Rmd index 8b82784a..8391be1f 100644 --- a/episodes/06-testing-data-structures.Rmd +++ b/episodes/06-testing-data-structures.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - How do you compare data structures such as lists and dictionaries? - How do you compare objects in libraries like `pandas` and `numpy`? @@ -194,7 +194,7 @@ def test_pandas_series(): ``` -::::::::::::::::::::::::::::::::::::: challenge +::::::::::::::::::::::::::::::::::::: challenge ## Challenge : Comparing Data Structures @@ -211,13 +211,13 @@ def remove_anomalies(data: list, maximum_value: float, minimum_value: float) -> for i in data: if i <= maximum_value and i >= minimum_value: result.append(i) - + return result ``` Then write a test for this function by comparing lists. 
-:::::::::::::::::::::::: solution +:::::::::::::::::::::::: solution ```python from stats import remove_anomalies @@ -280,7 +280,7 @@ import numpy as np def calculate_cumulative_sum(array: np.ndarray) -> np.ndarray: """Calculate the cumulative sum of a numpy array""" - + # don't use the built-in numpy function result = np.zeros(array.shape) result[0] = array[0] @@ -315,7 +315,7 @@ In `statistics/stats.py` add this function to calculate the total score of each def calculate_player_total_scores(participants: dict): """Calculate the total score of each player in a dictionary. - + Example input: { "Alice": { @@ -345,7 +345,7 @@ def calculate_player_total_scores(participants: dict): }, } """" - + for player in participants: participants[player]["total_score"] = np.sum(participants[player]["scores"]) @@ -401,7 +401,7 @@ import pandas as pd def calculate_player_average_scores(df: pd.DataFrame) -> pd.DataFrame: """Calculate the average score of each player in a pandas DataFrame. - + Example input: | | player | score_1 | score_2 | |---|---------|---------|---------| @@ -460,7 +460,7 @@ def test_calculate_player_average_scores(): :::::::::::::::::::::::::::::::::::::::::::::::: -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - You can test equality of lists and dictionaries using the `==` operator. - Numpy arrays cannot be compared using the `==` operator. Instead, use `numpy.testing.assert_array_equal` and `numpy.testing.assert_allclose`. @@ -468,4 +468,3 @@ def test_calculate_player_average_scores(): - Pandas DataFrames and Series should be compared using `pandas.testing.assert_frame_equal` and `pandas.testing.assert_series_equal`. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/07-fixtures.Rmd b/episodes/07-fixtures.Rmd index 4d08ad4e..963b25c2 100644 --- a/episodes/07-fixtures.Rmd +++ b/episodes/07-fixtures.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - How to reuse data and objects in tests? @@ -20,7 +20,7 @@ exercises: 2 When writing more complex tests, you may find that you need to reuse data or objects across multiple tests. -Here is an example of a set of tests that re-use the same data a lot. +Here is an example of a set of tests that reuse the same data a lot. We have a class, `Point`, that represents a point in 2D space. We have a few tests that check the behaviour of the class. Notice how we have to repeat the exact same setup code in each test. @@ -37,7 +37,7 @@ class Point: def move(self, dx, dy): self.x += dx self.y += dy - + def reflect_over_x(self): self.y = -self.y @@ -216,7 +216,7 @@ def test_reflect_over_y(point_positive_3_4, point_negative_3_4, point_mixed_3_4) With the setup code defined in the fixtures, the tests are more concise and it won't take as much effort to add more tests in the future. -::::::::::::::::::::::::::::::::::::: challenge +::::::::::::::::::::::::::::::::::::: challenge ## Challenge : Write your own fixture @@ -343,7 +343,7 @@ def test_randomly_sample_and_filter_participants(): - Try making these tests more concise by creating a fixture for the input data. -:::::::::::::::::::::::: solution +:::::::::::::::::::::::: solution ```python import pytest @@ -427,12 +427,11 @@ project_directory/ In this case, the fixtures defined in `conftest.py` can be used in any of the test files in the `tests` directory, provided that the fixtures are imported. 
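
As a minimal sketch of this layout (reusing the `Point` class and the `point_positive_3_4` fixture from earlier in this episode; the `shapes` module name is only an assumption for illustration), `conftest.py` might look like this:

```python
# tests/conftest.py
import pytest

from shapes import Point  # assumed module name; import from wherever your Point class lives


@pytest.fixture
def point_positive_3_4():
    """A Point in the positive quadrant, shared by every test module in tests/."""
    return Point(3, 4)
```

and a test module in the same directory can then request the fixture by name:

```python
# tests/test_point.py
def test_move(point_positive_3_4):
    # Move the shared point and check that both coordinates update
    point_positive_3_4.move(1, 1)

    assert point_positive_3_4.x == 4
    assert point_positive_3_4.y == 5
```
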
-::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints -- Fixtures are useful way to store data, objects and automations to re-use them in many different tests. +- Fixtures are useful way to store data, objects and automations to reuse them in many different tests. - Fixtures are defined using the `@pytest.fixture` decorator. - Tests can use fixtures by passing them as arguments. - Fixtures can be placed in a separate file or in the same file as the tests. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/08-parametrization.Rmd b/episodes/08-parametrization.Rmd index d9e434f5..8cd34d56 100644 --- a/episodes/08-parametrization.Rmd +++ b/episodes/08-parametrization.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - Is there a better way to test a function with lots of different inputs than writing a separate test for each one? @@ -133,7 +133,7 @@ For example, `pytest.param(0, 0, 2, 0, 1, 1.7320, 6, id="Equilateral triangle")` This is a much more concise way to write tests for functions that need to be tested with lots of different inputs, especially when there is a lot of repetition in the setup for each of the different test cases. -::::::::::::::::::::::::::::::::::::: challenge +::::::::::::::::::::::::::::::::::::: challenge ## Challenge - Practice with Parametrization @@ -153,7 +153,7 @@ def is_prime(n: int) -> bool: ``` -:::::::::::::::::::::::: solution +:::::::::::::::::::::::: solution ```python import pytest @@ -197,10 +197,9 @@ def test_is_prime(n, expected): :::::::::::::::::::::::::::::::::::::::::::::::: -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - Parametrization is a way to run the same test with different parameters in a concise and more readable way, especially when there is a lot of repetition in the setup for each of the different test cases. - Use the `@pytest.mark.parametrize` decorator to define a parametrized test. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/09-testing-output-files.Rmd b/episodes/09-testing-output-files.Rmd index 56b81327..f4c0dd38 100644 --- a/episodes/09-testing-output-files.Rmd +++ b/episodes/09-testing-output-files.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - How to test for changes in program outputs? - How to test for changes in plots? @@ -70,7 +70,7 @@ def test_very_complex_processing(regtest): regtest.write(str(processed_data)) ``` -- Now because we haven't run the test yet, there is no reference output to compare against, +- Now because we haven't run the test yet, there is no reference output to compare against, so we need to generate it using the `--regtest-generate` flag: ```bash @@ -128,7 +128,7 @@ def plot_data(data: list): This function takes a list of points to plot, plots them and returns the figure produced. -In order to test that this funciton produces the correct plots, we will need to store the correct plots to compare against. +In order to test that this function produces the correct plots, we will need to store the correct plots to compare against. - Create a new folder called `test_plots` inside the `plotting` folder. This is where we will store the reference images. 
`pytest-mpl` adds the `@pytest.mark.mpl_image_compare` decorator that is used to compare the output of a test function to a reference image. @@ -187,13 +187,13 @@ def plot_data(data: list): ___ test_plot_data ___ Error: Image files did not match. RMS Value: 15.740441786649093 - Expected: + Expected: /var/folders/sr/wjtfqr9s6x3bw1s647t649x80000gn/T/tmp6d0p4yvm/test_plotting.test_plot_data/baseline.png - Actual: + Actual: /var/folders/sr/wjtfqr9s6x3bw1s647t649x80000gn/T/tmp6d0p4yvm/test_plotting.test_plot_data/result.png Difference: /var/folders/sr/wjtfqr9s6x3bw1s647t649x80000gn/T/tmp6d0p4yvm/test_plotting.test_plot_data/result-failed-diff.png - Tolerance: + Tolerance: 2 ``` @@ -213,11 +213,10 @@ This doesn't just work with line plots, but with any type of plot that matplotli Testing your plots can be very useful especially if your project allows users to define their own plots. -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - Regression testing ensures that the output of a function remains consistent between changes and are a great first step in adding tests to an existing project. - `pytest-regtest` provides a simple way to do regression testing. - `pytest-mpl` provides a simple way to test plots by comparing the output of a test function to a reference image. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/episodes/10-CI.Rmd b/episodes/10-CI.Rmd index 0dcab072..c3b826e2 100644 --- a/episodes/10-CI.Rmd +++ b/episodes/10-CI.Rmd @@ -4,7 +4,7 @@ teaching: 10 exercises: 2 --- -:::::::::::::::::::::::::::::::::::::: questions +:::::::::::::::::::::::::::::::::::::: questions - How can I automate the testing of my code? - What are GitHub Actions? @@ -75,7 +75,7 @@ on: # This is a list of jobs that the action will run. In this case, we have only one job called build. jobs: build: - # This is the environment that the job will run on. In this case, we are using the latest version of Ubuntu, however you can ues other operating systems like Windows or MacOS if you like! + # This is the environment that the job will run on. In this case, we are using the latest version of Ubuntu, however you can use other operating systems like Windows or MacOS if you like! runs-on: ubuntu-latest # This is a list of steps that the job will run. Each step is a command that will be executed on the environment. @@ -88,13 +88,13 @@ jobs: uses: actions/setup-python@v3 with: python-version: "3.10" - + # This step installs the dependencies for the project such as pytest, numpy, pandas, etc using the requirements.txt file we created earlier. - name: Install dependencies run: | python -m pip install --upgrade pip pip install -r requirements.txt - + # This step runs the tests using the pytest command. Remember to use the --mpl and --regtest flags to run the tests that use matplotlib and pytest-regtest. - name: Run tests run: | @@ -166,7 +166,7 @@ So now, when you or your team want to make a feature or just update the code, th This will greatly improve the quality of your code and make it easier to collaborate with others. -::::::::::::::::::::::::::::::::::::: keypoints +::::::::::::::::::::::::::::::::::::: keypoints - Continuous Integration (CI) is the practice of automating the merging of code changes into a project. - GitHub Actions is a feature of GitHub that allows you to automate the testing of your code. 
@@ -174,4 +174,3 @@ This will greatly improve the quality of your code and make it easier to collabo - You can use GitHub Actions to only allow code to be merged into the main branch if the tests pass. :::::::::::::::::::::::::::::::::::::::::::::::: - diff --git a/index.md b/index.md index af662764..32c28f50 100644 --- a/index.md +++ b/index.md @@ -1,3 +1,4 @@ + --- site: sandpaper::sandpaper_site --- diff --git a/instructors/instructor-notes.md b/instructors/instructor-notes.md index d9a67aaa..e2eec5f5 100644 --- a/instructors/instructor-notes.md +++ b/instructors/instructor-notes.md @@ -2,4 +2,4 @@ title: 'Instructor Notes' --- -This is a placeholder file. Please add content here. +This is a placeholder file. Please add content here. diff --git a/learners/files/05-testing-exceptions/calculator.py b/learners/files/05-testing-exceptions/calculator.py index 6fee97f3..4f08658c 100644 --- a/learners/files/05-testing-exceptions/calculator.py +++ b/learners/files/05-testing-exceptions/calculator.py @@ -12,4 +12,3 @@ def divide(numerator, denominator): if denominator == 0: raise ZeroDivisionError("Cannot divide by zero!") return numerator / denominator - diff --git a/learners/files/06-data-structures/calculator.py b/learners/files/06-data-structures/calculator.py index 6fee97f3..4f08658c 100644 --- a/learners/files/06-data-structures/calculator.py +++ b/learners/files/06-data-structures/calculator.py @@ -12,4 +12,3 @@ def divide(numerator, denominator): if denominator == 0: raise ZeroDivisionError("Cannot divide by zero!") return numerator / denominator - diff --git a/learners/files/07-fixtures/calculator.py b/learners/files/07-fixtures/calculator.py index 6fee97f3..4f08658c 100644 --- a/learners/files/07-fixtures/calculator.py +++ b/learners/files/07-fixtures/calculator.py @@ -12,4 +12,3 @@ def divide(numerator, denominator): if denominator == 0: raise ZeroDivisionError("Cannot divide by zero!") return numerator / denominator - diff --git a/learners/files/08-parametrization/calculator.py b/learners/files/08-parametrization/calculator.py index 6fee97f3..4f08658c 100644 --- a/learners/files/08-parametrization/calculator.py +++ b/learners/files/08-parametrization/calculator.py @@ -12,4 +12,3 @@ def divide(numerator, denominator): if denominator == 0: raise ZeroDivisionError("Cannot divide by zero!") return numerator / denominator - diff --git a/learners/files/09-testing-output-files/calculator.py b/learners/files/09-testing-output-files/calculator.py index 6fee97f3..4f08658c 100644 --- a/learners/files/09-testing-output-files/calculator.py +++ b/learners/files/09-testing-output-files/calculator.py @@ -12,4 +12,3 @@ def divide(numerator, denominator): if denominator == 0: raise ZeroDivisionError("Cannot divide by zero!") return numerator / denominator - diff --git a/learners/files/09-testing-output-files/statistics/_regtest_outputs/test_stats.test_very_complex_processing.out b/learners/files/09-testing-output-files/statistics/_regtest_outputs/test_stats.test_very_complex_processing.out index 310d008d..7300baa3 100644 --- a/learners/files/09-testing-output-files/statistics/_regtest_outputs/test_stats.test_very_complex_processing.out +++ b/learners/files/09-testing-output-files/statistics/_regtest_outputs/test_stats.test_very_complex_processing.out @@ -1 +1 @@ -[2, 4, 6] \ No newline at end of file +[2, 4, 6] diff --git a/learners/reference.md b/learners/reference.md index 8e48c1bf..d7439687 100644 --- a/learners/reference.md +++ b/learners/reference.md @@ -26,5 +26,3 @@ title: 'Reference' | Test 
Coverage | A measure of how much of the code is tested by tests. | | Test Driven Development (TDD) | A practice where tests are written before the code that they test. | | Unit test | A test that checks the behavior of a single function or method. | - - diff --git a/learners/setup.md b/learners/setup.md index bf0b22e4..f2512d2c 100644 --- a/learners/setup.md +++ b/learners/setup.md @@ -4,24 +4,31 @@ title: Setup ## Python testing for research -This course aims to equip you with the tools and knowledge required to get started with software testing. It assumes no prior knowledge of testing, just basic familiarity with Python programming. Over the course of these lessons, you will learn what software testing entails, how to write tests, best practices, some more niche & powerful functionality and finally how to incorporate tests in a GitHub repository. +This course aims to equip you with the tools and knowledge required to get started with software testing. It assumes +no prior knowledge of testing, just basic familiarity with Python programming. Over the course of these lessons, you +will learn what software testing entails, how to write tests, best practices, some more +niche & powerful functionality and finally how to incorporate tests in a GitHub repository. ## Software Setup -Please complete these setup instructions before the course starts. This is to ensure that the course can start on time and all of the content can be covered. If you have any issues with the setup instructions, please reach out to a course instructor / coordinator. +Please complete these setup instructions before the course starts. This is to ensure that the course can start on time +and all of the content can be covered. If you have any issues with the setup instructions, please reach out to a course +instructor / coordinator. For this course, you will need: ### A Terminal -Such as Terminal on MacOS / Linux or command prompt on Windows. This is so that you can run Python scripts and commit code to GitHub. ### A Text Editor -Preferably a code editor like Visual Studio Code but any text editor will do, such as notepad. This is so that you can write and edit Python scripts. A code editor will provide a better experience for writing code in this course. We recommend Visual Studio Code as it is free and very popular with minimal setup required. + +Preferably a code editor like Visual Studio Code but any text editor will do, such as notepad. This is so that you can +write and edit Python scripts. A code editor will provide a better experience for writing code in this course. We +recommend Visual Studio Code as it is free and very popular with minimal setup required. ### Python -Preferably Python 3.10 or 3.11. You can download Python from [Python's official website](https://www.python.org/downloads/) -It is recommended that you use a virtual environment for this course. This can be a standard Python virtual environment or a conda environment. You can create a virtual environment using the following commands: +It is recommended that you use a virtual environment for this course. This can be a standard Python virtual environment +or a conda environment. 
You can create a virtual environment using the following commands: ```bash # For a standard Python virtual environment @@ -37,16 +44,16 @@ There are some python packages that will be needed in this course, you can insta ```bash pip install numpy pandas matplotlib pytest pytest-regtest pytest-mpl + ``` ### Git -This course touches on some features of GitHub and requires Git to be installed. You can download Git from the [official Git website](https://git-scm.com/downloads). If this is your first time using Git, you may want to check out the [Git Handbook](https://guides.github.com/introduction/git-handbook/). - -### A GitHub account -A GitHub accound is required for the Continuous Integration section of this course. -You can sign up for a GitHub account on the [GitHub Website](github.com) - - +This course touches on some features of GitHub and requires Git to be installed. You can download Git from the +[official Git website](https://git-scm.com/downloads). If this is your first time using Git, you may want to check out +the [Git Handbook](https://guides.github.com/introduction/git-handbook/). +### A GitHub account +A GitHub account is required for the Continuous Integration section of this course. +You can sign up for a GitHub account on the [GitHub Website](github.com) diff --git a/links.md b/links.md index 4c5cd2f9..dd0ac7ba 100644 --- a/links.md +++ b/links.md @@ -1,10 +1,4 @@ - - -[pandoc]: https://pandoc.org/MANUAL.html -[r-markdown]: https://rmarkdown.rstudio.com/ -[rstudio]: https://www.rstudio.com/ -[carpentries-workbench]: https://carpentries.github.io/sandpaper-docs/ - diff --git a/profiles/learner-profiles.md b/profiles/learner-profiles.md index 434e335a..75b2c5ca 100644 --- a/profiles/learner-profiles.md +++ b/profiles/learner-profiles.md @@ -2,4 +2,4 @@ title: FIXME --- -This is a placeholder file. Please add content here. +This is a placeholder file. Please add content here. diff --git a/site/README.md b/site/README.md index 42997e3d..0a00291c 100644 --- a/site/README.md +++ b/site/README.md @@ -1,2 +1,2 @@ This directory contains rendered lesson materials. Please do not edit files -here. +here.