
Running Tests Selectively with pytest
When working on large test suites, running every single test every time can be inefficient and time-consuming. Luckily, pytest offers several powerful ways to run tests selectively, allowing you to focus on specific areas of your codebase. Whether you're debugging a single function or running tests related to a particular feature, selective test execution can save you significant development time.
Using Markers to Categorize Tests
One of the most common ways to organize and selectively run tests is by using pytest markers. Markers allow you to tag your tests with custom labels that you can then use to run specific subsets of your test suite.
Defining Custom Markers
First, you'll need to define your custom markers. You can do this in your `pytest.ini` file:
```ini
[pytest]
markers =
    slow: marks tests as slow running
    integration: integration tests
    smoke: smoke test suite
    windows: tests specific to Windows platform
    linux: tests specific to Linux platform
```
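After registering markers, you can check that pytest actually picked them up; `--markers` is a built-in flag that lists every registered marker:
```bash
# List all registered markers, including the custom ones defined above
pytest --markers
```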
Applying Markers to Tests
Once you've defined your markers, you can apply them to your test functions:
```python
import pytest

@pytest.mark.slow
def test_complex_calculation():
    # This test takes a long time to run
    result = perform_complex_operation()
    assert result == expected_value

@pytest.mark.integration
def test_database_integration():
    # This test involves database operations
    data = fetch_from_database()
    assert data is not None

@pytest.mark.smoke
def test_basic_functionality():
    # Quick smoke test
    assert simple_function() == expected_result
```
Running Tests by Marker
Now you can run tests based on their markers:
```bash
# Run only slow tests
pytest -m slow

# Run integration tests
pytest -m integration

# Run everything EXCEPT slow tests
pytest -m "not slow"

# Run smoke tests but exclude Windows-specific ones
pytest -m "smoke and not windows"
```
| Marker Combination | Description | Command |
|---|---|---|
| Single marker | Run tests with a specific marker | `pytest -m integration` |
| Not operator | Exclude tests with a marker | `pytest -m "not slow"` |
| And operator | Run tests matching multiple markers | `pytest -m "smoke and integration"` |
| Or operator | Run tests with either marker | `pytest -m "smoke or integration"` |
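To make combinations like `smoke and not windows` concrete, here is a sketch of a test that carries two markers at once (the test name and helper function are purely illustrative):
```python
import pytest

# Selected by "pytest -m smoke", but deselected by
# "pytest -m 'smoke and not windows'" because it also carries the windows marker.
@pytest.mark.smoke
@pytest.mark.windows
def test_service_starts_on_windows():
    # start_service is a hypothetical helper standing in for a real platform-specific check
    assert start_service("my-service") is True
```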
Selecting Tests by Name
Sometimes you want to run tests based on their names rather than markers. pytest's `-k` option handles this with case-insensitive substring matching, which you can combine with boolean operators.
Basic Name Matching
The simplest way to select tests by name is with the `-k` option:
```bash
# Run tests containing "user" in their name
pytest -k user

# Run tests containing "auth" but not "login"
pytest -k "auth and not login"

# Run tests whose names contain "test_user_creation"
pytest -k "test_user_creation"
```
Combining Name Expressions
Note that `-k` is not a regular-expression filter: characters such as `^`, `$`, and `|` are not supported. More involved selections are built by combining plain substrings with boolean operators:
```bash
# Run tests whose names contain "api"
pytest -k "api"

# Run tests whose names contain "v1" or "v2"
pytest -k "v1 or v2"

# Run tests whose names contain "report" but not "export"
pytest -k "report and not export"
```
Here are the points worth remembering (the sketch below shows them in action):
- Use `-k "pattern"` for simple, case-insensitive substring matching
- Combine substrings with `and`, `or`, and `not`; parentheses can group sub-expressions
- `-k` does not accept regular-expression syntax such as `^`, `$`, or `|`
- To pin down a single test exactly, use its node ID instead (covered below)
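As a quick illustration of how substring matching plays out against a handful of test names (the file and function names are made up):
```python
# tests/test_users.py - names invented for illustration
def test_user_login():
    assert True

def test_user_logout():
    assert True

def test_admin_login():
    assert True

# pytest -k "login"               -> runs test_user_login and test_admin_login
# pytest -k "user and not logout" -> runs test_user_login only
```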
Running Tests from Specific Files and Directories
When you're working on a specific module or feature, you often want to run only the tests in particular files or directories.
Running Tests from Specific Files
```bash
# Run tests from a single file
pytest tests/test_user_models.py

# Run tests from multiple files
pytest tests/test_user_models.py tests/test_auth.py

# Run tests using file patterns
pytest tests/test_*.py
```
Running Tests from Specific Directories
```bash
# Run all tests in a directory
pytest tests/models/

# Run tests from multiple directories
pytest tests/models/ tests/utils/

# Combine directory selection with a name filter
pytest tests/api/v1/ -k "auth"
```
| File/Directory Pattern | Command | Description |
|---|---|---|
| Single file | `pytest path/to/test_file.py` | Runs all tests in a specific file |
| Multiple files | `pytest file1.py file2.py` | Runs tests from multiple files |
| Directory | `pytest tests/models/` | Runs all tests in a directory |
| File pattern | `pytest tests/test_*.py` | Runs tests from files matching the pattern |
Using Node IDs for Precise Selection
For the most precise test selection, you can use node IDs, which combine the file path, class name (if applicable), and test function name.
Understanding Node ID Format
A node ID typically follows this format: `path/to/file.py::ClassName::test_function_name`, or `path/to/file.py::test_function_name` for functions not in a class.
```python
# tests/test_example.py
def test_simple():
    assert True

class TestUser:
    def test_user_creation(self):
        assert create_user() is not None

    def test_user_deletion(self):
        assert delete_user() is True
```
Running Tests with Node IDs
```bash
# Run specific test function
pytest tests/test_example.py::test_simple

# Run specific test method in a class
pytest tests/test_example.py::TestUser::test_user_creation

# Run multiple specific tests
pytest tests/test_example.py::test_simple tests/test_example.py::TestUser::test_user_creation
```
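If you are unsure what a test's node ID looks like, pytest can print the IDs of everything it would collect without running anything (both flags are standard pytest options):
```bash
# Print the node IDs of all collected tests without executing them
pytest --collect-only -q tests/test_example.py
```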
Managing Test Execution with Configuration Files
You can create permanent selection configurations using `pytest.ini` or `pyproject.toml` files. This is particularly useful for defining default behavior or creating presets for different environments.
Configuring Default Test Selection
```ini
# pytest.ini
[pytest]
addopts = -v --tb=short -m "not slow"
python_files = test_*.py
python_classes = Test*
python_functions = test_*
```
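Because `addopts` is prepended to every invocation, the `-m "not slow"` default above quietly hides the slow tests. When you do want them back for a single run, one option is to pass an empty marker expression on the command line, since a later `-m` replaces the one supplied via `addopts` and an empty expression disables marker filtering:
```bash
# Override the default marker filter from addopts and run everything, slow tests included
pytest -m ""
```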
Environment-Specific Configurations
You can also use different configuration files for different environments:
```bash
# Use different config for CI environment
pytest -c pytest.ci.ini

# Use different config for local development
pytest -c pytest.local.ini
```
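For example, a CI-specific file might drop the `not slow` default so the full suite runs there. Keep in mind that when you pass `-c`, pytest uses only that file, so markers have to be declared again; the file name and contents below are just one possible layout:
```ini
# pytest.ci.ini - hypothetical CI configuration
[pytest]
addopts = -v --tb=short
markers =
    slow: marks tests as slow running
    integration: integration tests
    smoke: smoke test suite
```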
Key benefits of using configuration files include:
- Consistent test selection across team members
- Environment-specific test configurations
- Reduced typing for frequently used options
- Version-controlled test configuration
Advanced Selection Techniques
For complex projects, you might need more advanced test selection strategies.
Using Plugins for Enhanced Selection
There are several pytest plugins that enhance test selection capabilities:
```bash
# Install test selection plugins
pip install pytest-xdist
pip install pytest-testmon
pip install pytest-picked

# Run tests from files that have changed according to git (pytest-picked)
pytest --picked

# Run only tests affected by code changes since the last run (pytest-testmon)
pytest --testmon
```
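Of these, pytest-xdist does not select tests at all; it parallelizes whatever selection you make. Assuming it is installed, it combines naturally with the marker and name filters shown earlier:
```bash
# Distribute the non-slow tests across all available CPU cores (pytest-xdist)
pytest -n auto -m "not slow"
```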
Creating Custom Selection Logic
For very specific needs, you can create custom selection logic using pytest hooks:
```python
# conftest.py
def pytest_addoption(parser):
    """Register the custom --run-custom command-line flag."""
    parser.addoption("--run-custom", action="store_true", default=False)

def pytest_collection_modifyitems(config, items):
    """Keep only tests whose names contain "custom_criteria" when --run-custom is given."""
    if config.getoption("--run-custom"):
        selected = []
        for item in items:
            if "custom_criteria" in item.name:
                selected.append(item)
        items[:] = selected
```
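With that `conftest.py` in place, the custom filter is switched on from the command line (the flag name matches the one registered above):
```bash
# Collect only tests whose names contain "custom_criteria"
pytest --run-custom
```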
Best Practices for Selective Testing
While selective testing is powerful, it's important to use it wisely to maintain test quality and coverage.
When to Use Selective Testing
Selective testing is most appropriate for:
- Development phase: when writing new code and needing quick feedback
- Debugging: when investigating specific failing tests
- CI pipelines: running different test suites at different stages
- Large test suites: to reduce feedback time during development
Maintaining Test Quality
Remember that while selective testing is convenient, you should still run your full test suite regularly:
- Run all tests before commits or PRs
- Schedule full test runs in CI
- Monitor test coverage to ensure important paths are tested
- Use selective testing as a development tool, not a replacement for comprehensive testing
By mastering pytest's selective test running capabilities, you can significantly improve your development workflow while maintaining the quality and reliability of your test suite. The key is to find the right balance between speed and comprehensiveness for your specific project needs.