Summary
We move to a more realistic project setup that looks like this:
less_basic_project
├── my_awesome_package
│   ├── calculation_engine.py
│   ├── __init__.py
│   └── permissions.py
├── pyproject.toml
└── tests
    ├── conftest.py
    ├── test_calculation_engine.py
    └── test_permissions.py
Using "pyproject.toml" to configure pytest, e.g.:
[tool.pytest.ini_options] # mandatory section name
addopts = "-s --no-header --no-summary" # force cmd flags
testpaths = [ # what directories contain tests
    "tests",
]
pythonpath = [ # what to add to the python path
    "."
]
we streamline pytest calls: we get automatic recursive test discovery, solve import problems, and make the output clearer.
We can also leverage "conftest.py" to define fixtures for the entire project, and cache their execution:
@pytest.fixture(scope="module") # run once for each test file
def a_fixture():
    ...
Even without plugins, which are numerous, we can already enjoy fine tuning pytest's behavior with many flags, among which:
-x: stop at the first failure.
--pdb: start the Python debugger on failure.
-k <filter>: only discover tests that match the filter.
--ff: start with tests that failed in the previous run.
--nf: start with new files.
--sw: start from where it stopped the previous run.
--no-header / --no-summary: remove the big blobs of text in the output.
--verbosity=x: from 0 to 3 levels of output granularity.
One more for the road
Promise, next part we will answer practical questions like "what to test". But before we move on, we need another little, or maybe not so little, article on tooling.
You see, pytest may be an awesome tool, but I've only shown toy examples (see part 2) so far, so, of course, everything is simple. IRL though, you will have big projects with a lot of files, imports, successful/failed/overflowed attempts at DRY or YAGNI, specific contexts that need specific configuration, etc.
All that means you will need to use pytest in a way that fits your situation, and for this, you need to understand how to do so. Plus, further explanations will all assume you know this stuff, so I'd rather get it out of the way now.
A realistic project layout
Having only two files is nice to understand the basics, but your real projects will contain more than that. Let's create a setup that emulates a small file tree that's closer to what you might work with:
less_basic_project
├── my_awesome_package
│   ├── __init__.py
│   └── the_code_to_test.py
├── pyproject.toml
└── tests
    └── the_tests.py
We now have a project root called "less_basic_project" containing two directories. One is named "tests", and will contain all the project… tests. We move our previous test code in there. The second one is the source code of the project, "my_awesome_package", in which we place the code we wrote in our last article. Because of the way Python imports work, it also contains an empty "__init__.py" file.
For this tutorial, you will need to be comfy with the Python import system, so if you have any doubt, read our article on the topic first. It's trickier than it looks.
Finally, we have an empty "pyproject.toml" file at the top dir, because today that's the standard file to configure Python projects.
The first ImportError
If you position yourself at the root of the project and run pytest tests/the_tests.py like a good citizen, you will encounter a crash:
$ pytest tests/the_tests.py
...
tests/the_tests.py:4: in <module>
from the_code_to_test import add
E ModuleNotFoundError: No module named 'the_code_to_test'
Indeed, the_code_to_test is no longer the correct import path since it's now part of a package. We need to fix the imports in "the_tests.py" from:
from the_code_to_test import add
to:
from my_awesome_package.the_code_to_test import add
At this stage you may feel good: you understand sys.path and you know that the current directory is automatically added to it, so it should all work out.
But it does not.
A second run will also crash:
$ pytest tests/the_tests.py
...
tests/the_tests.py:4: in <module>
from my_awesome_package.the_code_to_test import add
E ModuleNotFoundError: No module named 'my_awesome_package'
That's because the pytest executable, for whatever reason, is an exception in the Python world, and does not add the current directory to sys.path.
If you run the python executable instead, which does add it by default as we explained in our "python import" tutorial, and use -m to run pytest, however, it will work without a problem:
python -m pytest tests/the_tests.py
================= test session starts ================
platform linux -- Python 3.10.13, pytest-7.3.0, pluggy-1.0.0
rootdir: /path/to/less_basic_project
plugins: django-4.5.2, clarity-1.0.1
collected 4 items
tests/the_tests.py ....
================= 4 passed in 0.01s ================
It's one of the many reasons I advocate for -m even inside a virtual env. People endlessly debate this on Twitter and Reddit and just don't realize how many failure modes there are out there.
Unfortunately, this is all getting very verbose. Remember we often use -s as well (see previous article) to avoid stdout capture, so a basic run of pytest would start to look like python -m pytest -s tests/the_tests.py.
Doesn't really roll off the tongue, does it?
I understand you might want to keep it short, and while I do think medium-sized code bases should use a task runner like doit to normalize project management, we're gonna work on that.
Automatic test discovery
The pytest test runner doesn't need you to specify a particular file: it can scan all modules matching a certain naming convention and extract tests from them. You want that, because as you can imagine, you usually have a lot more than one set of tests.
The default convention is to name all test files "test_something...", the "something" usually being related to the part of the code it's testing. Giving good names to your modules will therefore help make everything clearer.
Let's rename our "the_code_to_test.py" module into a more explicit "calculation_engine.py". And the file that tests it, "the_tests.py", can then be renamed "test_calculation_engine.py". Don't forget to change the imports!
This gives us:
less_basic_project
├── my_awesome_package
│   ├── calculation_engine.py
│   └── __init__.py
├── pyproject.toml
└── tests
    └── test_calculation_engine.py <- change imports here
Now we can run python -m pytest tests, and our tests will be discovered automatically.
Even better, if I add a new function and a new test in separate files, it will still work. Let's do so.
I'll create "my_awesome_package/permissions.py" with a state-of-the-art security system:
def can_access(file_path):
    return True
And the matching test file, "tests/test_permissions.py":
from my_awesome_package.permissions import can_access

def test_can_access():
    assert can_access("/")
Truly ground breaking.
And python -m pytest tests finds it automatically:
python -m pytest tests
================= test session starts ================
platform linux -- Python 3.10.13, pytest-7.3.0, pluggy-1.0.0
rootdir: /path/to/less_basic_project
plugins: django-4.5.2, clarity-1.0.1
collected 5 items
tests/test_calculation_engine.py .... [ 80%]
tests/test_permissions.py . [100%]
================= 5 passed in 0.01s ================
Still, it's longer than just calling "pytest", isn't it?
We can do better.
Pytest configuration
Pytest can be fully configured from the "pyproject.toml" file, which means you can put the default behavior you want in there. Anything on the command line overrides this default configuration, so you can always locally and temporarily change it if you need to, but the bulk of your calls will require less thinking.
Here is an example of what it can contain:
[tool.pytest.ini_options] # mandatory section name
addopts = "-s" # force a command line option
testpaths = [ # what directories contain tests
    "tests",
]
pythonpath = [ # what to add to the python path
    "."
]
With all this, you can call pytest with nothing else, and have all the goodies:
pytest
================= test session starts ================
platform linux -- Python 3.10.13, pytest-7.3.0, pluggy-1.0.0
rootdir: /path/to/less_basic_project
plugins: django-4.5.2, clarity-1.0.1
collected 5 items
tests/test_calculation_engine.py::test_add_integers
This is run before each test
.
We tested with 5
This is run after each test
tests/test_calculation_engine.py::test_add_strings
This is run before each test
.
This is run after each test
tests/test_calculation_engine.py::test_add_floats .
tests/test_calculation_engine.py::test_add_mixed_types .
tests/test_permissions.py::test_can_access .
It will add the current directory (hence the ".") to sys.path, look for all tests in the "tests" dir, and append -s so stdout is not captured. All is well.
Note that pythonpath was added in pytest version 7; if you have an older pytest version, you might need a plugin. Also, you may notice other tools have similar sys.path problems. In the end, you might just want to bite the bullet, set PYTHONPATH for the entire project and be done with it. I often do. But that's a different topic; we will keep focusing on pytest for now.
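If you do go the PYTHONPATH route, a minimal sketch for bash/zsh (adapt it to your shell, or persist it in your shell config or a .env file):

```shell
# From the project root: make it importable for every Python tool,
# not just pytest, for the current shell session
export PYTHONPATH="$PWD"
```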
Also consider that in many contexts, such as CI, task runners (doit, nox...), or git hooks, I would still use python -m pytest, to ward off other problems. So while the short form is convenient for manual execution, remember -m is our lord and savior.
More conf
pytest is extremely configurable: running pytest -h will get you a wall of text filled with options, flags and env vars you can use to tweak its behavior. Once you are confident you understand how this lib works, you should definitely explore those. Meanwhile, I will point at a few knobs that are worth your time first:
-x: stop the run at the first failing test.
--pdb: when a test fails, start the Python debugger. This one is fantastic. If you don't remember how pdb works, you know the drill. Check --pdbcls if you want to use ipdb.
-k <filter>: only discover tests that match the filter. You can use a string to keep only tests whose name contains it, but it's actually much more powerful.
--ff: run all the tests, but start with the ones that failed in the previous run.
--nf: run all the tests, but start with new files.
--sw: exit on first failure, but on the next run, don't execute all tests: start from where it stopped the previous run.
--no-header + --no-summary: remove the big blobs of text at the beginning and the end.
--verbosity=x: tell pytest to be quiet (0, the default) up to super chatty (3, the max).
As you can imagine, you can combine them to fine-tune your pytest experience.
In fact, for the rest of the series, I will set this in "pyproject.toml":
addopts = "-s --no-header --no-summary"
This will make the test run much clearer.
However, this is not the only way pytest can be configured. You already learned about another one: naming conventions. There are also some specific calling conventions. The most useful is using "::" in a path when you want to target a specific test, as in "pytest path/to/test_file.py::test_function".
E.g., if I want to only call test_add_strings, I would do pytest tests/test_calculation_engine.py::test_add_strings:
pytest tests/test_calculation_engine.py::test_add_strings
================ test session starts ================
collected 1 item
tests/test_calculation_engine.py::test_add_strings
This is run before each test
.
This is run after each test
================= 1 passed in 0.01s ================
Yet, believe it or not, we have other configuration areas to cover.
I said more conf!
Pytest comes with a full-featured plugin system, and each of them can change your pytest configuration, but also add totally new settings you can set from the command line or "pyproject.toml". Once they are installed in your virtual env, they are automatically loaded unless you disable them in the config.
E.g., if I pip install pytest-sugar, the pytest-sugar plugin will be installed and automatically loaded on my next test run, adding progress bars to the output.
It will also add the option --force-sugar to pytest.
While I find pytest-sugar pretty, I don't use it personally; this is just an example of how plugins work. There are tons of plugins: some provide code coverage, some DB setup, some REST clients, some fake data... The ecosystem is rich, powerful and very useful.
You can even create your own plugin, with your own configuration parameters, although that is a very niche topic we will not cover in this series.
There is also a whole other side to the configuration: how to integrate testing with your text editor. That's a complete article in itself, and there are so many IDEs out there with so many different preferences that it's unlikely we can help everybody. I may make at least one article on VSCode, I'm still pondering it.
MOOOOOOOOOAR conf!!!
Yeah, the whole conf stuff is pretty wild in pytest. It's the last one, I swear.
If you create a file named "conftest.py" and put it at the root of your "tests" directory, you can influence how pytest works, programmatically, in Python.
We won't get into all the things you can do with "conftest.py", because it can quickly become esoteric. Not to mention you can actually create SEVERAL "conftest.py" files that cascade and override each other. It's a Pandora's box.
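Just as a one-line taste of that programmatic power: a "conftest.py" can define hooks that pytest calls during the run. For example, the (real) pytest_collection_modifyitems hook receives the list of collected tests and may reorder it; this sketch simply reverses the execution order:

```python
# Hypothetical conftest.py snippet: pytest calls this hook after test
# collection, letting us mutate the list of tests in place.
def pytest_collection_modifyitems(config, items):
    # Run the tests in reverse collection order
    items.reverse()
```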
The most important things you can do in "conftest.py" are shared fixtures and scoping.
Remember how we had:
@pytest.fixture()
def random_number():
    yolo = random.randint(0, 10)
    yield yolo
    print(f"\nWe tested with {yolo}")
in "test_calculation_engine.py"?
Well, you can't use random_number in "test_permissions.py": it's local to the file it's defined in!
But if you instead move it to "conftest.py", then suddenly all tests next to it or below it will have access to the fixture.
The tree now looks like this:
less_basic_project
├── my_awesome_package
│   ├── calculation_engine.py
│   ├── __init__.py
│   └── permissions.py
├── pyproject.toml
└── tests
    ├── conftest.py
    ├── test_calculation_engine.py
    └── test_permissions.py
And the conftest file contains:
import random

import pytest


@pytest.fixture()
def random_number():
    yolo = random.randint(0, 10)
    yield yolo
    print(f"\nWe tested with {yolo}")
Which we removed from "test_calculation_engine.py".
This opens the door to scoping, meaning we can now finely tune when random_number is executed.
Indeed, by default, a fixture is called once per test that uses it.
However, for some fixtures, such as DB setup, network connection, file creation, data generation and so on, you might want the code to actually run once per group of tests, or once for the whole test run.
You can do so with the scope param:
@pytest.fixture(scope="module") # run once for each test file
def random_number():
    yolo = random.randint(0, 10)
    yield yolo
    print(f"\nWe tested with {yolo}")
You can set the value to "function" (the default), "module" (one .py file), "package" (a whole dir) or "session" (one run).
If you don't need the value from yield but want the side effect, you can also force the fixture to run automatically for all tests that can see it, even if they don't explicitly request it by declaring a random_number parameter, with @pytest.fixture(autouse=True). scope and autouse can be used together.
Use with caution, it's easy to overdo it.
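To see autouse in action without touching our project, here is a self-contained sketch (the file name, counter and test names are all made up for the demo) that writes a throwaway test file and runs pytest on it; the autouse fixture runs for both tests even though neither requests it:

```python
import pathlib
import subprocess
import sys
import tempfile

# A throwaway test module: record_call is autouse, so pytest runs it
# before every test, even though no test lists it as a parameter.
TEST_CODE = '''
import pytest

calls = []

@pytest.fixture(autouse=True)
def record_call():
    calls.append(1)
    yield

def test_one():
    assert len(calls) == 1

def test_two():
    assert len(calls) == 2
'''

with tempfile.TemporaryDirectory() as tmp:
    test_file = pathlib.Path(tmp) / "test_autouse_demo.py"
    test_file.write_text(TEST_CODE)
    # -m, as always
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-q", str(test_file)],
        capture_output=True,
        text=True,
    )

print(result.stdout)
```

Both tests pass only because the fixture already ran for each of them, which is exactly what autouse buys you.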
A word on cache files
I've shown you a clean project tree for pedagogical reasons, but in reality, if you glance at yours right now, it will look more like this (YMMV):
less_basic_project
├── my_awesome_package
│   ├── calculation_engine.py
│   ├── __init__.py
│   ├── permissions.py
│   └── __pycache__
│       ├── calculation_engine.cpython-310.pyc
│       ├── __init__.cpython-310.pyc
│       └── permissions.cpython-310.pyc
├── pyproject.toml
├── .pytest_cache
│   ├── CACHEDIR.TAG
│   ├── .gitignore
│   ├── README.md
│   └── v
│       └── cache
│           ├── lastfailed
│           ├── nodeids
│           └── stepwise
└── tests
    ├── conftest.py
    ├── __pycache__
    │   ├── conftest.cpython-310-pytest-8.1.1.pyc
    │   ├── test_calculation_engine.cpython-310-pytest-8.1.1.pyc
    │   └── test_permissions.cpython-310-pytest-8.1.1.pyc
    ├── test_calculation_engine.py
    └── test_permissions.py
That's because both Python and pytest have their own caching mechanisms. Those files are not something you should commit to your VCS (such as git) and it's OK to delete them, but letting them be will make the tests run faster and enable features like --ff.
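If you use git, a couple of ".gitignore" entries take care of the bytecode caches; ".pytest_cache" conveniently ships its own ".gitignore", as you can see in the tree above:

```gitignore
# Python bytecode caches, regenerated automatically
__pycache__/
*.pyc
```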