A comprehensive guide to Test-Driven Development (TDD) using Python's testing frameworks and tools.
- Introduction to TDD
- unittest - Python's Built-in Testing Framework
- nose - Extended Testing Framework
- nose-cov - Coverage Plugin for nose
- rednose - Colored Test Output
- pytest - Modern Testing Framework
- nose2 - Successor to nose
Test-Driven Development is a software development approach where tests are written before the actual code. The TDD cycle follows three simple steps:
- Red - Write a failing test that defines a desired improvement or new function
- Green - Write the minimum amount of code to make the test pass
- Refactor - Clean up the code while ensuring tests still pass
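To make the cycle concrete, here is a minimal sketch of one red-green-refactor pass (the `Calculator` class and the test are hypothetical):

```python
# RED: write a failing test first; it fails because Calculator
# does not exist yet.
def test_add():
    calc = Calculator()
    assert calc.add(2, 3) == 5

# GREEN: write the minimum code that makes the test pass.
class Calculator:
    def add(self, a, b):
        return a + b

# REFACTOR: tidy up (e.g., add type hints) while the test stays green.
class Calculator:
    def add(self, a: int, b: int) -> int:
        return a + b
```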
Benefits of TDD:
- Catches bugs early in development
- Encourages modular, maintainable code
- Serves as living documentation
- Increases confidence when refactoring
- Reduces debugging time
unittest is Python's built-in testing framework, inspired by JUnit. It ships with the standard library, so no additional installation is required.
```python
import unittest

class Calculator:
    """Minimal class under test, included here so the example runs."""
    def add(self, a, b):
        return a + b

    def divide(self, a, b):
        return a / b

class TestCalculator(unittest.TestCase):
    def setUp(self):
        """Runs before each test method."""
        self.calc = Calculator()

    def tearDown(self):
        """Runs after each test method."""
        pass

    def test_addition(self):
        result = self.calc.add(2, 3)
        self.assertEqual(result, 5)

    def test_division_by_zero(self):
        with self.assertRaises(ZeroDivisionError):
            self.calc.divide(10, 0)

if __name__ == '__main__':
    unittest.main()
```

Common assertion methods:
- `assertEqual(a, b)` - Check if `a == b`
- `assertTrue(x)` - Check if `x` is true
- `assertFalse(x)` - Check if `x` is false
- `assertRaises(Exception)` - Check if the exception is raised
- `assertIn(a, b)` - Check if `a` is in `b`
- `assertIsNone(x)` - Check if `x` is None
```bash
# Run a single test file
python -m unittest test_module.py

# Run with verbose output
python -m unittest -v test_module.py

# Auto-discover and run all tests
python -m unittest discover
```

Key features:
- Object-oriented approach using test classes
- Test fixtures with setUp and tearDown methods
- Test discovery mechanism
- Rich assertion methods
- Test suites for grouping tests (see the sketch below)
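For instance, a suite can group related test cases explicitly; this brief sketch reuses the `TestCalculator` class from the example above:

```python
import unittest

def suite():
    """Group selected TestCase classes into one suite."""
    loader = unittest.TestLoader()
    s = unittest.TestSuite()
    s.addTests(loader.loadTestsFromTestCase(TestCalculator))
    return s

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
```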
nose extends unittest to make testing easier. While nose is now in maintenance mode (its last release was in 2015), it introduced many features that influenced modern testing tools.
```bash
pip install nose
```

nose can run unittest-style tests as well as its own simpler test styles, and it automatically discovers tests without requiring `unittest.main()`:
```python
# test_math.py
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 5 - 3 == 2

class TestMultiplication:
    def test_positive(self):
        assert 2 * 3 == 6

    def test_negative(self):
        assert -2 * 3 == -6
```

```bash
# Run all tests in current directory
nosetests

# Run specific test file
nosetests test_math.py

# Run with verbose output
nosetests -v

# Run tests matching a pattern
nosetests -m "test_add.*"
```

Key features:
- Automatic test discovery (finds tests without explicit test suites)
- Supports unittest, doctest, and simple assert statements
- Plugin architecture for extensibility
- Generates XML output for CI integration
- Multiprocess test execution
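As an example of the last point, nose's bundled multiprocess plugin can spread tests across worker processes:

```bash
# Run the suite in 4 worker processes
nosetests --processes=4 --process-timeout=60
```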
Create a `setup.cfg` or `.noserc` file:

```ini
[nosetests]
verbosity=2
with-doctest=1
doctest-extension=.txt
```

nose-cov is a coverage plugin for nose that measures code coverage during test execution. It is built on top of the coverage.py library.
```bash
pip install nose-cov
```

```bash
# Run tests with coverage
nosetests --with-cov

# Specify modules to cover
nosetests --with-cov --cov=mypackage

# Generate HTML coverage report
nosetests --with-cov --cov-report=html

# Set minimum coverage threshold
nosetests --with-cov --cov=mypackage --cov-fail-under=80
```

Available report formats:
- `term` - Terminal output (default)
- `html` - HTML report in the `htmlcov/` directory
- `xml` - XML report for CI tools
- `annotate` - Annotated source files
```ini
[nosetests]
with-cov=1
cov=mypackage
cov-report=term-missing
cov-report=html
```

Coverage reports track:
- Statements - Lines of code executed
- Branches - Decision paths taken (if/else)
- Missing - Lines not covered by tests
A typical report shows:
```text
Name                    Stmts   Miss  Cover   Missing
------------------------------------------------------
mypackage/__init__.py      10      2    80%   15-16
mypackage/core.py          45      5    89%   23, 67-70
------------------------------------------------------
TOTAL                      55      7    87%
```
rednose is a colorized test-output plugin for nose that makes results more readable with colored output and real-time feedback.
```bash
pip install rednose
```

```bash
# Run with colored output
nosetests --rednose

# Combine with other options
nosetests --rednose -v

# Force color output (useful for CI)
nosetests --rednose --force-color
```

Add to `setup.cfg`:
```ini
[nosetests]
rednose=1
```

Key features:
- Green - Passing tests
- Red - Failing tests
- Yellow - Errors and skipped tests
- Progress bar showing test completion
- Clear failure messages with context
- Real-time output as tests run
rednose turns monotonous test output into an easy-to-scan format that helps developers quickly identify issues: it shows a progress indicator, groups failures clearly, and uses color to make results intuitive at a glance.
pytest is a modern, feature-rich testing framework that has become the de facto standard for Python testing. It offers powerful features while remaining simple to use.
```bash
pip install pytest
pip install pytest-cov  # For coverage
```

pytest uses plain `assert` statements instead of special assertion methods:
```python
# test_calculator.py
import pytest

def test_addition():
    assert 2 + 2 == 4

def test_division():
    assert 10 / 2 == 5

def test_division_by_zero():
    with pytest.raises(ZeroDivisionError):
        1 / 0

@pytest.mark.parametrize("a,b,expected", [
    (2, 3, 5),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_addition_parametrized(a, b, expected):
    assert a + b == expected
```

Fixtures provide a way to set up test dependencies and share code across tests:
```python
import pytest

@pytest.fixture
def sample_data():
    return [1, 2, 3, 4, 5]

@pytest.fixture
def database():
    # Assumes a Database class with connect/disconnect/query methods.
    db = Database()
    db.connect()
    yield db  # teardown code after the yield runs when the test finishes
    db.disconnect()

def test_sum(sample_data):
    assert sum(sample_data) == 15

def test_database_query(database):
    result = database.query("SELECT * FROM users")
    assert len(result) > 0
```

```bash
# Run all tests
pytest
# Run specific file
pytest test_calculator.py
# Run specific test
pytest test_calculator.py::test_addition
# Run with verbose output
pytest -v
# Run with coverage
pytest --cov=mypackage
# Run only failed tests from last run
pytest --lf
# Run tests matching pattern
pytest -k "addition"
# Stop after first failure
pytest -x
# Show local variables in tracebacks
pytest -l
```

Organize and selectively run tests using markers:
```python
import pytest

@pytest.mark.slow
def test_complex_calculation():
    # Long-running test
    pass

@pytest.mark.integration
def test_api_endpoint():
    # Integration test
    pass

@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    pass

@pytest.mark.xfail
def test_known_bug():
    # Test expected to fail
    pass
```

Run specific marked tests:
```bash
pytest -m slow
pytest -m "not slow"
pytest -m "integration and not slow"
```

Create a `pytest.ini` file:
```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
addopts = -v --tb=short
```

Or use `pyproject.toml`:
```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-v --cov=mypackage"
```

Key features:
- Simple syntax - Uses plain assert statements with intelligent introspection
- Fixtures - Powerful dependency injection system
- Parametrization - Run tests with multiple input sets
- Plugins - Rich ecosystem (pytest-django, pytest-asyncio, pytest-mock, etc.)
- Detailed failure reports - Shows context and variable values
- Test discovery - Automatically finds tests following conventions
- Parallel execution - Use pytest-xdist for faster test runs
- Mocking - Integrates well with unittest.mock and pytest-mock
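As a quick illustration of the parallel-execution point above, the pytest-xdist plugin adds an `-n` option:

```bash
pip install pytest-xdist

# Distribute tests across all available CPU cores
pytest -n auto
```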
Parametrize with fixtures:
```python
import pytest

@pytest.fixture(params=[1, 2, 3])
def number(request):
    # The fixture yields each param in turn, so tests using it run three times.
    return request.param

def test_square(number):
    assert number ** 2 > 0
```

Custom fixtures with scope:
```python
import pytest

@pytest.fixture(scope="session")
def expensive_resource():
    # Created once per test session
    # (setup_resource() is a stand-in for your own setup code)
    return setup_resource()

@pytest.fixture(scope="module")
def shared_data():
    # Created once per test module
    # (load_data() is a stand-in for your own loader)
    return load_data()
```

Monkeypatching:
```python
def test_api_call(monkeypatch):
    def mock_get(*args, **kwargs):
        return {"status": "success"}

    # Replace requests.get for the duration of this test only.
    monkeypatch.setattr("requests.get", mock_get)
    # my_api_function is a stand-in for code that calls requests.get.
    result = my_api_function()
    assert result["status"] == "success"
```

Choosing a framework:

unittest - Use when:
- Working on projects requiring no external dependencies
- You prefer object-oriented test structure
- Need compatibility with older Python codebases
nose - Generally avoided now:
- Unmaintained since 2015; superseded in practice by pytest and nose2
- Only use if maintaining legacy projects
pytest - Recommended for most projects:
- Modern syntax and features
- Extensive plugin ecosystem
- Better error messages and debugging
- Actively maintained
nose2 is the successor to nose. It builds on unittest while keeping nose-style automatic test discovery, and unlike nose it is still maintained.

Install nose2 via pip:

```bash
pip install nose2
```

nose2 automatically discovers and runs tests, as the sketch below shows.
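A minimal sketch of a nose2-compatible test module (the file name is hypothetical but matches the commands in the next section):

```python
# test_calculator.py
import unittest

class TestCalculator(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 3, 5)

# No unittest.main() block is needed; nose2 discovers this module
# automatically.
```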
## Running Tests
Run your tests with:
```bash
# Run all tests in current directory
nose2

# Run with verbose output
nose2 -v

# Run specific test file
nose2 test_calculator

# Run specific test class
nose2 test_calculator.TestCalculator

# Run specific test method
nose2 test_calculator.TestCalculator.test_addition
```

Create a `unittest.cfg` or `nose2.cfg` file for configuration:
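A small configuration sketch; the `[unittest]` and `[test-result]` section names below follow nose2's documented conventions, but treat the exact options as assumptions to verify against the nose2 docs for your version:

```ini
# nose2.cfg
[unittest]
start-dir = tests
test-file-pattern = test*.py

[test-result]
always-on = True
descriptions = True
```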
## Key Features

Test Discovery: nose2 finds tests based on naming patterns:
- Test files: `test*.py` or `*_test.py`
- Test classes: classes that inherit from `unittest.TestCase`
- Test functions: functions starting with `test_`
- Test methods: methods starting with `test_`
Useful Plugins:
- `nose2.plugins.coverage` - Code coverage
- `nose2.plugins.mp` - Multiprocess testing
- `nose2.plugins.junitxml` - JUnit XML output
If moving from unittest to pytest:
- Pytest can run unittest tests without modification
- Gradually refactor to use pytest fixtures and features
- Take advantage of parametrization for data-driven tests
- Use pytest plugins to enhance functionality
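As an illustration of the first two points, here is the same check in unittest style (which pytest runs unchanged) and after refactoring to pytest idioms (the names are hypothetical):

```python
import unittest
import pytest

# Before: unittest style -- pytest collects and runs this as-is.
class TestTotals(unittest.TestCase):
    def setUp(self):
        self.values = [1, 2, 3]

    def test_sum(self):
        self.assertEqual(sum(self.values), 6)

# After: a fixture replaces setUp, and a plain assert replaces
# the assertion method.
@pytest.fixture
def values():
    return [1, 2, 3]

def test_sum_pytest(values):
    assert sum(values) == 6
```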
TDD best practices:
- Write tests first - Define behavior before implementation
- Keep tests small - Each test should verify one thing
- Use descriptive names - Test names should describe what they verify
- Aim for high coverage - Strive for 80%+ coverage, but don't obsess over 100%
- Test edge cases - Include boundary conditions and error scenarios
- Keep tests fast - Slow tests discourage running them frequently
- Make tests independent - Tests should not depend on each other
- Use fixtures wisely - Share setup code without creating tight coupling
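A closing sketch that applies several of these practices at once: descriptive names, one behavior per test, and independent setup through a fixture (the shopping-cart example is hypothetical):

```python
import pytest

@pytest.fixture
def empty_cart():
    # Each test receives a fresh list, keeping tests independent.
    return []

def test_new_cart_is_empty(empty_cart):
    assert empty_cart == []

def test_adding_item_increases_cart_size(empty_cart):
    empty_cart.append("apple")
    assert len(empty_cart) == 1
```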