The Digital Agency for International Development

Overriding django signals in your test suite

By Sarah Bird on 10 October 2013

Timings quoted below are from our CI server running MySQL (locally on my machine they're usually about half that).

Background

On the project I'm currently working on, for the awesome RUFORUM, we had a decently quick test suite: it took around 15 seconds to run ~200 tests.

But it was hitting the database a lot. See the test output below from when I didn't have MySQL running; each E is a test erroring because it couldn't find a database:

calls/tests/views_test.py EEEEEEEEEEEEEEEE
contacts/tests/forms_test.py EE
contacts/tests/group_permissions_test.py E
contacts/tests/models_test.py EEEEEEEEE
contacts/tests/templatetags_test.py E
contacts/tests/validators_test.py E
contacts/tests/views_test.py EEEEEEEEEEEEEEEEEEEEEE..
frontpage/tests/frontpage_test.py EEE
grants/tests/forms_test.py EEE
grants/tests/models_test.py EEEEEEEEEEFFFFE
grants/tests/views_basic_test.py EEEEEEEEEEEEEEE
grants/tests/views_pi_test.py EEEEEEE
grants_reports/tests/forms_test.py ..
grants_reports/tests/mixins_forms_test.py ........
grants_reports/tests/mixins_views_GrantReportMixin_test.py ..EEEE.......E
grants_reports/tests/mixins_views_UpdateWithInlinesAndSaveSubmitView_test.py ...E....................................FE..
grants_reports/tests/models_test.py EEEEEEE.......
grants_reports/tests/views_test.py EEEEEE
main/tests/mixins_test.py EEE
main/tests/password_reset_test.py EE
main/tests/utils_test.py EE
main/tests/widgets_test.py EEE
template_manager/tests/forms_test.py EE
template_manager/tests/models_test.py E
template_manager/tests/populate_templates_test.py EEEE

I know, ideally we wouldn't hit the database at all, but it does still happen, and 15s (or less on my local machine) is liveable with.

Adding signals that create more objects

Then, the curse of superlinear test time growth came and bit us in the ASS.

We added post_save signals to one of our models. Five signals to be precise, each of which does a get_or_create on another model. In production get_or_create only creates the new object once, but in your test suite it happens every time, because django cleans out the test database between tests so that we have a predictable fresh slate. I think this is very useful, and not something I'd want to lose from my test suite, but I do not want to create five extra model instances every time I create the one that's under test.
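To see why this hurts, here's a sketch of the shape of the problem. The Signal class, Grant model, and handler below are stand-ins (plain Python, so it runs outside Django); in the real project these are django's post_save signal, our Grant model, and five reindex_*_report handlers each doing a get_or_create():

```python
# A stand-in for django.db.models.signals.post_save, just to illustrate
# the cost; the real handlers each do a get_or_create() on a report model.
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver, sender=None):
        self._receivers.append((receiver, sender))

    def disconnect(self, receiver, sender=None):
        self._receivers.remove((receiver, sender))

    def send(self, sender, **kwargs):
        for receiver, wanted_sender in list(self._receivers):
            if wanted_sender in (None, sender):
                receiver(sender=sender, **kwargs)


post_save = Signal()
created_reports = []


class Grant:
    """Stand-in for the real Django model."""
    def save(self):
        post_save.send(sender=Grant, instance=self)


def make_reindex_handler(month):
    def reindex_report(sender, instance, **kwargs):
        created_reports.append((month, instance))  # real code: get_or_create()
    return reindex_report


# Five post_save handlers, as in the real project.
for month in (6, 12, 18, 24, 30):
    post_save.connect(make_reindex_handler(month), sender=Grant)

Grant().save()
print(len(created_reports))  # 5 -- one saved Grant creates five more objects
```

In the test suite, every test that saves a Grant pays for those five extra creations, and because the database is wiped between tests the get_or_create never gets to take its cheap "get" path.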

Our test suite was now taking 70s with not a single test added (yes, I am going to write the tests for those signals)! Now that is not liveable with. Our test suite is taking almost five times as long. Bird cries.

Fixing it TWO ways

I found two ways of fixing it.

Option 1 - overriding Factory Boy _generate

Clearly I'm not the first person to have this issue. As shown in this recipe from the factory_boy documentation, I can override the _generate method, disconnect my signals, and then reconnect them afterwards.

This was good and got me a lot of the way, as I use the GrantFactory often in the test suite. But sometimes I was just manually creating grants, e.g. when testing the AddGrantForm's save method. Going through and polluting the test code with signal disconnects every time was going to be tedious and unmaintainable.
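The recipe boils down to wrapping object creation in a disconnect/reconnect pair. Here's a sketch of that shape using a stand-in signal so it runs outside Django; in the real code GrantFactory subclasses factory.django.DjangoModelFactory, the object is built by super()._generate(create, attrs), and the signal is django's post_save:

```python
# Minimal stand-in for django's post_save signal, so the sketch runs
# without Django or factory_boy installed.
class Signal:
    def __init__(self):
        self.receivers = []

    def connect(self, receiver, sender=None):
        self.receivers.append((receiver, sender))

    def disconnect(self, receiver, sender=None):
        self.receivers.remove((receiver, sender))

    def send(self, sender, **kwargs):
        for receiver, wanted_sender in list(self.receivers):
            if wanted_sender in (None, sender):
                receiver(sender=sender, **kwargs)


post_save = Signal()
reports = []


class Grant:
    def save(self):
        post_save.send(sender=Grant, instance=self)


def reindex_report(sender, instance, **kwargs):
    reports.append(instance)  # stands in for get_or_create()


post_save.connect(reindex_report, sender=Grant)


class GrantFactory:
    # Shape of the factory_boy docs recipe: override _generate(),
    # disconnect the handlers, build the object, reconnect.
    @classmethod
    def _generate(cls, create, attrs):
        post_save.disconnect(reindex_report, sender=Grant)
        try:
            grant = Grant()  # real code: super()._generate(create, attrs)
            if create:
                grant.save()
        finally:
            post_save.connect(reindex_report, sender=Grant)
        return grant


GrantFactory._generate(create=True, attrs={})
print(len(reports))  # 0 -- the handler never fired inside the factory

Grant().save()
print(len(reports))  # 1 -- but a manual save still fires it
```

The last two lines are exactly the limitation: anything created outside the factory still pays for the signals, which is what pushed me to option 2.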

Option 2 - pytest fixtures with session scope and autouse

We're using the glorious pytest to run our django test suite.

With pytest we can just define a fixture that has

  • scope="session" (the fixture is only run once for the whole test run), and
  • autouse=True (it runs no matter what, so we don't have to pull it into a test to make it run).

Here's the fixture code, which I put in conftest.py:

import pytest
from django.db.models.signals import post_save

# Grant and the reindex_*_report handlers are imported from our
# project's own modules.

@pytest.fixture(scope="session", autouse=True)
def disconnect_signals():
    post_save.disconnect(reindex_month_6_report, sender=Grant)
    post_save.disconnect(reindex_month_12_report, sender=Grant)
    post_save.disconnect(reindex_month_18_report, sender=Grant)
    post_save.disconnect(reindex_month_24_report, sender=Grant)
    post_save.disconnect(reindex_month_30_report, sender=Grant)

And now my test suite is back down to a lovely 15s again. For the tests where I actually want to check that the signals work, I'll just reconnect them.
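Reconnecting for those tests can be its own small helper. Sketched here as a context manager around a stand-in signal so it runs outside Django; in the real suite it would be a function-scoped pytest fixture that calls post_save.connect(...) for all five handlers before the test and post_save.disconnect(...) again afterwards:

```python
from contextlib import contextmanager

# Stand-ins for django's post_save, the Grant model, and one handler,
# so the sketch is runnable on its own.
handlers_fired = []


class FakeSignal:
    def __init__(self):
        self.receivers = []

    def connect(self, receiver, sender=None):
        self.receivers.append(receiver)

    def disconnect(self, receiver, sender=None):
        self.receivers.remove(receiver)

    def send(self, sender, **kwargs):
        for receiver in list(self.receivers):
            receiver(sender=sender, **kwargs)


post_save = FakeSignal()


def reindex_month_6_report(sender, **kwargs):
    handlers_fired.append("month_6")


class Grant:
    def save(self):
        post_save.send(sender=Grant, instance=self)


@contextmanager
def grant_signals():
    # In the real suite: a function-scoped pytest fixture that
    # reconnects the reindex_* handlers and disconnects them afterwards.
    post_save.connect(reindex_month_6_report, sender=Grant)
    try:
        yield
    finally:
        post_save.disconnect(reindex_month_6_report, sender=Grant)


with grant_signals():
    Grant().save()
print(handlers_fired)  # ['month_6']

Grant().save()  # outside the block the handler stays disconnected
print(handlers_fired)  # still ['month_6']
```

The try/finally matters: if the test fails mid-way, the handler is still disconnected afterwards, so one signal test can't leak slow saves into the rest of the run.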

I am a fan of pytest and this is a good example why. In general my top three reasons are:

  • the assertions are cleaner and easier to read
  • pytest fixtures are more reusable, more flexible, and end up being far less verbose than setup methods
  • test collection, lots of handy command line arguments, marking tests (all fairly minor but add up to a lot)