I usually hate magic in code, but in this case, the magic is fantastic!
I must admit I used pytest to run my tests for an embarrassingly long time before I started writing and using pytest style tests. I'm sure it was a mixture of laziness and not understanding the magic behind pytest fixtures.
Because I didn't understand the magic happening, it scared me away from looking at pytest more deeply. Once I did, I wished someone had held me down and forced the explanation down my throat.
Fixtures are the building blocks of writing good tests. Writing great tests is easy if you have great fixtures, and your development velocity will skyrocket.
The main problem with writing tests
One of the problems with writing automated tests is setting the stage for the test: generating the specific data that needs to exist before we execute the thing we want to test.
It is tedious work, but it does not have to be!
Why is this such a problem? In many systems, it is cumbersome to generate all of the branches of a tree, its roots, the ground, and the sky to verify the leaves turn out to be green in spring.
As an example, if you worked at GitHub and were tasked with adding a feature that replaces curse words in issue comments on public repositories, you would have to:
- create a user because Organizations need an owner
- create an Organization to own these repositories
- create a public repo to test the main functionality
- create a private repo to ensure it doesn't touch those comments
- create an issue
And THEN, you can create comments with and without curse words to exercise the feature you're working on. That's a lot of crap to make before you test. Is it any surprise people get lazy and opt to not test the feature end to end?
The answer is not to avoid the test but to make it far easier to generate that data in the first place.
The magic of pytest fixtures is how they are injected into your tests for you to use and their composability. When done well, writing tests is considerably easier and actually fun.
The Magic
pytest fixtures are injected into your test by including the name of the fixture function into the argument list of your test.
import pytest

from seuss.characters import ThingOne, ThingTwo
from seuss.comparisons import are_twins


@pytest.fixture
def thing_one():
    t1 = ThingOne()
    return t1


@pytest.fixture
def thing_two():
    t2 = ThingTwo()
    return t2


def test_book(thing_one, thing_two):
    assert are_twins(thing_one, thing_two)
(This is a reference to Dr. Seuss' book The Cat In The Hat if you aren't familiar with it.)
What happens here is pytest walks your code base looking for all of your tests. This is called test discovery. During that process, it inspects the argument list of each test function and matches those names to fixtures.
It then resolves the fixture dependency tree and organizes the test run to minimize how often fixtures need to be created. The author of the fixture can control the scope/lifecycle of the fixture data. You can read up more on pytest fixture scopes in this excellent explanation.
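For instance, a fixture's scope determines how often it gets created. A minimal sketch (the fixture names here are just illustrative):

import pytest


@pytest.fixture  # default scope is "function": built fresh for every test
def request_payload():
    return {"action": "ping"}


@pytest.fixture(scope="session")  # built once and reused for the entire test run
def expensive_service():
    # Imagine slow setup here: opening connections, loading data, etc.
    return {"connected": True}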
So, now we know that if we create a fixture named bob, just adding bob to our test's list of arguments gets us that fixture's data. That makes sharing fixture data across hundreds of tests easy.
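As a tiny sketch (bob and the make_registered_user helper are hypothetical names, not real APIs):

import pytest


@pytest.fixture
def bob():
    # make_registered_user is a hypothetical project helper
    return make_registered_user(username="bob")


def test_bob_is_registered(bob):
    assert bob.username == "bob"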
The composability of fixtures
It's important to realize that fixtures can build on top of each other.
Let's go back to our GitHub naughty issue comment example. I would structure things so that I had the following fixtures:
- owner
- public_repo
- private_repo
- public_issue
- private_issue
It is likely that, while building our registration process, we would already have a fixture for a registered user, one who could act as the repository owner. Let's assume we have a function named make_repo that handles making a repository for us. Since every repo needs an owner, we first create an owner fixture. We can then pass the owner fixture along to make_repo() when we create our repo fixtures: for the next two fixtures, we just list owner as an argument, and it gets pulled in automatically:
@pytest.fixture
def owner():
    return make_registered_user()


@pytest.fixture
def public_repo(owner):
    return make_repo(owner=owner, visibility="public")


@pytest.fixture
def private_repo(owner):
    return make_repo(owner=owner, visibility="private")
We can now quickly get a public and/or a private repo for all future tests we write. Keep in mind these fixtures can be made available across ALL of the tests in our project (we'll cover how to organize that with conftest.py below). They are injected across Python namespaces and source files, which removes the hassle of importing dozens of things at the top of your test files.
However, that also means you should take a moment to ensure your fixture's name represents it well and is easy to remember, type, and not misspell. Ok, let's get back to work.
Our task is to ensure curse words are rewritten in public issue comments. These fixtures help but only take us halfway to where we want to be. We can solve this with even more fixtures!
from issues.utils import create_issue


@pytest.fixture
def public_issue(public_repo):
    return create_issue(repo=public_repo, title="Something is broken")


@pytest.fixture
def private_issue(private_repo):
    return create_issue(repo=private_repo, title="Something is broken in private")
Now we have the two bits of data (and the tree of data that comes with them) all created so we can actually test our task. It might look something like this:
def test_naughty_comment_public(public_issue):
    comment = public_issue.new_comment("Frak off butthole")
    assert comment.body == "F*** off b**hole"


def test_naughty_comment_private(private_issue):
    comment = private_issue.new_comment("Frak off butthole")
    assert comment.body == "Frak off butthole"
Things you can return
Hopefully, it is evident by now that you can return all sorts of things from fixtures. Here are some ideas that might get your brain seeing how you can use these in your projects.
Simple static data
@pytest.fixture
def password():
    # We can return simple strings
    return "top-secret"


@pytest.fixture
def pi():
    # We can return special/constant values
    return 3.14159


@pytest.fixture
def default_preferences():
    # We can return structures of things that are commonly needed
    return {
        "default_repo_visibility": "public",
        "require_2fa": True,
        "naughty_comments_allowed": False,
    }
Instantiated objects
Often we need access to an object that is either complex to instantiate or expensive in terms of processing time. In those cases, we can turn that instantiation itself into a fixture and reuse it across tests more easily.
from core.models import Organization


@pytest.fixture
def big_org():
    return Organization(members=1000)


@pytest.fixture
def small_org():
    return Organization(members=5)
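Any test can then pull in the size of organization it needs, for example (a sketch, assuming Organization exposes the members count as an attribute, as above):

def test_big_org_member_count(big_org):
    # The Organization instance is created for us by the fixture
    assert big_org.members == 1000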
Callables to make other things
You can have fixtures that return functions or other callables to make it easier to inject them into your tests:
from core.utils import make_new_user


@pytest.fixture
def make_user():
    return make_new_user


def test_user_creation(make_user):
    new_user = make_user(username='frank')
    assert new_user.is_active
    assert new_user.username == 'frank'
Or you can use a fixture to curry these functions, making them easier to use:
from core.utils import make_complicated_user


@pytest.fixture
def make_admin():
    def inner_function(username, email, password):
        return make_complicated_user(
            username=username,
            email=email,
            password=password,
            is_admin=True,
            can_create_public_repos=True,
            can_create_private_repos=True,
        )
    return inner_function


def test_admin_creation(make_admin):
    admin = make_admin(username='frank', email='frank@revsys.com', password='django')
    assert admin.username == 'frank'
    assert admin.is_admin == True
Organizing Your Fixtures
Here at REVSYS, we primarily work with Django projects, so our fixtures tend to live in the Django app alongside the models used for that fixture. It doesn't have to be this way, of course; it just makes sense in the context of a Django project.
There is nothing keeping you from defining fixtures all over the place in your tests. It just makes it more challenging to know where to look for them.
If a fixture isn't going to be reused outside of a single test file, then it's absolutely appropriate to keep it in that file. Any fixtures used across a project should live in some central or semi-central location.
pytest helps you with this as well, of course. pytest looks for a file named conftest.py in the directory where you're running pytest (and in its sub-directories, FYI), and that lets you define fixtures that live outside of the test files themselves.
Here is a typical project structure here at REVSYS:
project-repo/
    config/
        settings.py
        urls.py
        wsgi.py
    users/
        models.py
        views.py
        tests/
            fixtures.py
            test_models.py
            test_views.py
    orgs/
        models.py
        views.py
        tests/
            fixtures.py
            test_models.py
            test_views.py
We put all of the fixtures we intend to share across the project into the tests/fixtures.py files. We then define project-repo/conftest.py like this:
import pytest

from users.tests.fixtures import *
from orgs.tests.fixtures import *


@pytest.fixture
def globalthing():
    return "some-global-project-data-here"
As you can see, we are also able to define any project-wide or global fixtures directly in the conftest.py file if it's more appropriate for something to live there and not in a particular Django app directory.
Plugin Fixtures
pytest plugins can also create and provide fixtures to you. For Django users, pytest-django provides several valuable fixtures:
- db: automatically marks a test as needing the database to be configured
- client: provides an instance of django.test.Client
- admin_user: returns an automatically created superuser
- admin_client: returns an instance of the test client, already logged in with the admin_user
- settings: gives you django.conf.settings
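A quick sketch of a couple of these in use (the URLs here are made up for illustration):

def test_homepage(client, db):
    # client is a django.test.Client; db unlocks database access
    response = client.get("/")
    assert response.status_code == 200


def test_admin_dashboard(admin_client):
    # admin_client is already logged in as a superuser
    response = admin_client.get("/admin/")
    assert response.status_code == 200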
To show you another example, our library django-test-plus provides itself as a pytest fixture using the name tp. This allows you to more easily take advantage of the test helper functions it provides in pytest style tests. Here is a quick example:
def test_named_view(tp, admin_user):
    # GET the view named 'my-named-view' handling the reverse for you
    response = tp.get('my-named-view')

    # Ensure the response is a 401 because we need to be logged in
    tp.response_401(response)

    # Login as the admin_user and ensure a 200
    with tp.login(admin_user):
        response = tp.get('my-named-view')
        tp.response_200(response)
This is accomplished with just a couple lines of config in our library and then the fixtures themselves. That means you can also provide fixtures in internal libraries that are pip installed and not confine yourself to only having fixtures in single repository situations.
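If you're curious how that works: a library registers a module with pytest's pytest11 entry point group, and any fixtures defined in that module become available automatically. A minimal sketch with a hypothetical package name (this is not django-test-plus's actual config):

# mylib/pytest_plugin.py
# Registered in the library's packaging metadata under the "pytest11"
# entry point group, e.g. pointing "mylib" at "mylib.pytest_plugin".
import pytest


@pytest.fixture
def mylib_helper():
    # Available in any project that pip installs this library
    return "something useful"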
Automatically using pytest fixtures everywhere
Everything we've shown so far is nice, but you may face a code base that needs TONS of fixtures nearly everywhere, or even just a handful of fixtures that are needed across most of the code base. No worries, pytest has you covered.
You can define a fixture to automatically be run for all of your tests without specifying it as an argument to each test individually.
@pytest.fixture(autouse=True)
def global_thing():
    # Create something in our database we need everywhere or always
    # (Config here is assumed to be one of your project's models)
    Config.objects.create(something=True)


def test_something_lame():
    assert Config.objects.filter(something=True).exists()
Built-in Fixtures
pytest comes with a bunch of built-in fixtures as well. I admit I didn't even know about some of these for a long time and they can come in very handy.
Some examples include:
- tmpdir: returns a temporary directory path object that is unique to each test
- tmp_path: gives you a temporary directory as a pathlib.Path object
- capsys: a fixture for capturing stdout and stderr
- caplog: a fixture that captures logging output
- monkeypatch: quickly and easily monkeypatch objects, dicts, etc.
- cache: a cache object that can persist state between testing sessions
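A quick taste of two of these in action:

def test_writes_a_file(tmp_path):
    # tmp_path is a unique pathlib.Path for this test
    target = tmp_path / "hello.txt"
    target.write_text("hi there")
    assert target.read_text() == "hi there"


def test_prints_greeting(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"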
Further Learning
Learn more details in the pytest fixture docs or look over the full list of built-in fixtures.
Conclusion
The more of our codebase is covered by meaningful tests, the faster we can develop new features and refactor away cruft. It is not just about bug-free code; it's about the speed at which you can develop good code.
The easier we make it to write tests, the faster we can deliver great software.
P.S. If you or your team aren't writing tests or just struggling with them, we can help get you there with our TestStart product. One of our experts will swoop in, get you set up, and write a bunch of your first tests for you to show you how it's done in your own code base!