We use unit tests to cover internal behavior that works without the web frontend / backend counterpart.
We aim for high unit test coverage (90% or higher) of our Python code in lib/streamlit.
- Prefer pytest or pytest plugins over unittest.
- For every new test function, add a brief docstring (numpydoc style).
- New tests should be fully annotated with types.
- Skip tests (via `pytest.mark.skipif`) requiring CI secrets if the environment variables are not set.
- Parameterized tests: Use `@parameterized.expand` whenever it is possible to combine overlapping tests with varying inputs.
- Include a negative / anti-regression assertion when practical: Don't only test that the "right" behavior happens; also test that a plausible "wrong" behavior does not happen.
  - Examples:
    - If you assert a flag becomes `True`, also assert a mutually exclusive flag remains `False`.
    - If you expect a function to return a value, also assert it doesn't return a plausible-but-wrong value.
    - If you expect success, also cover one relevant failure mode (invalid input, boundary condition, or raised exception) when practical.
- Prefer targeted negatives over exhaustive matrices: Add one high-signal negative check per behavior; don't balloon test cases without a regression history.
- Run all tests (from the repo root) with: `make python-tests`
- Run a specific test file with: `PYTHONPATH=lib pytest lib/tests/streamlit/my_example_test.py`
- Run a specific test inside a test file with: `PYTHONPATH=lib pytest lib/tests/streamlit/my_example_test.py -k test_that_something_works`