Posted on Tue 01 October 2013

Have you tested all of your code?

I attended a Coding Dojo here in Tallinn, Estonia on the weekend. There were developers present who were versed in a variety of programming languages including Python, Ruby and Java. We were tasked with pair programming as much of Conway's Game of Life as we could within a 45-minute time limit. At the end of the 45 minutes we'd delete the code and discuss our learnings. We repeated this four times, each with a new partner. We were encouraged to do test-driven development where the tests would be written before the application code itself.

I learnt a lot sitting with developers from different backgrounds and working through testing strategies and the logic of the game. Every developer knew how to write unit tests, but there was one key element of testing that, unfortunately, everyone in the room was missing: coverage reports.

None of the developers in the room I spoke with knew how to generate a coverage report after running their unit tests.

It's true that even with 100% test coverage you can still have problems: the reports don't tell you how exhaustive your tests' parameter permutations are, for example. But coverage reports can at least verify how much of your code has been hit at least once.
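As a contrived illustration (not from our codebase), here's a one-line function where a single test gives 100% line coverage yet still hides a bug, because only one parameter value was ever tried:

# A contrived example: 100% line coverage, still buggy.
import unittest


def is_leap_year(year):
    # Deliberately wrong: it ignores the century rules, so 1900 comes
    # back as a leap year.
    return year % 4 == 0


class LeapYearTest(unittest.TestCase):
    def test_common_case(self):
        # This one assertion executes every line of is_leap_year, so a
        # coverage report shows 100% for this module...
        self.assertTrue(is_leap_year(2004))
        # ...yet is_leap_year(1900) would wrongly return True, and no
        # coverage report can tell you that permutation was never tried.


if __name__ == '__main__':
    unittest.main()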

We use coverage reports here at Stickyworld. This is our Fabric command for running tests on Stickyworld's primary web backend:

from fabric.api import local, task, env
from os.path import abspath, dirname, join


@task
def test():
    """
    Runs unit tests and coverage tool on local codebase
    """
    this_dir = abspath(dirname(__file__))
    local('cd %s && coverage run '
        '--omit="*/migrations/*,*/tests/*,*/.virtualenvs/*" '
        'manage.py test -v2 %s; coverage report; coverage annotate' % (
            join(this_dir, "../src"), ' '.join(env.django_apps)))

Let me break this Fabric command down piece by piece.

Changing into the correct path

The first thing we do is change the current working directory to the root source folder. We know its position relative to this fabfile, and we do our best to make sure the code isn't tied to anyone's home folder. A portable codebase is a happy codebase.

this_dir = abspath(dirname(__file__))
local('cd %s' % join(this_dir, "../src"))

Installing and running coverage

We're using the coverage module to run our tests. You can install it via pip:

(backend)[mark@ubuntu  backend (master)]➫ pip install coverage

coverage then takes a couple of parameters. Here are the ones we're using:

run tells coverage to execute the command that follows and record which lines get hit.

--omit gives a list of path patterns to leave out of the coverage calculations (external modules, the test code itself, and so on).

Finally we give it the command for running our tests: manage.py test -v2 app1 app2 app3.

app1 app2 app3 are the names of the specific Django apps within our project. Without this list, all of our apps and Django itself would be tested. Those tests can be useful, but we're trying to make sure our own tests run quickly and often; the longer they take, the less likely they are to be run and to flag up problems. We can save the more complete test cycle for just before we git push, with something like the task sketched below.
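Here's a rough sketch of what that fuller cycle could look like as a second task in the same fabfile. The test_all name is our own invention for this post, not something Fabric prescribes; it simply drops the app list so Django tests everything it knows about.

@task
def test_all():
    """
    Runs the full test suite, including Django's own apps, with coverage.
    Slower than `fab test`, so we only call it just before pushing.
    """
    # Same imports as the test() task above (task, local, abspath, etc.).
    this_dir = abspath(dirname(__file__))
    # No app list here: manage.py test with no arguments runs every
    # installed app, which gives the slower but more complete cycle.
    local('cd %s && coverage run '
        '--omit="*/migrations/*,*/tests/*,*/.virtualenvs/*" '
        'manage.py test -v2; coverage report; coverage annotate' %
        join(this_dir, "../src"))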

(On a side note, I've seen codebases where a file change monitor runs in the background; when it sees that a file within a single Django application has been saved, it parses the application name out of the file path and runs the tests for just that application.)
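Here's a rough sketch of that idea. It assumes the watchdog package (pip install watchdog) for the file monitoring, the src/<app_name>/... layout shown later in this post, and a plain manage.py test call rather than the full coverage wrapper; none of those details come from the codebase above, so treat it as a starting point only.

import os
import subprocess

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

# Assumed layout: this script sits next to the src/ folder.
SRC_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "src")


class AppTestRunner(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory or not event.src_path.endswith(".py"):
            return
        # e.g. src/utils/decorators.py -> "utils" is the app to test.
        relative = os.path.relpath(event.src_path, SRC_DIR)
        parts = relative.split(os.sep)
        if len(parts) < 2:
            return  # a file at the top of src/, not inside an app
        subprocess.call(["python", "manage.py", "test", "-v2", parts[0]],
                        cwd=SRC_DIR)


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(AppTestRunner(), SRC_DIR, recursive=True)
    observer.start()
    observer.join()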

Back to our fab test task: this is what its coverage command looks like once expanded:

➫ coverage run \
    --omit="*/migrations/*,*/tests/*,*/.virtualenvs/*" \
    manage.py test -v2 \
    app1 app2 app3

Reporting on how much code we've hit

In bash I often use && between commands I'm chaining together. It means the previous command must complete successfully before the next command will run. In the case of our tests, if they fail I still want to get a coverage report so I'll use ; to chain them together instead.

➫ <coverage run command>; coverage report; coverage annotate

Here is an example output of a report:

➫ fab test
...
Name                         Stmts   Miss  Cover
------------------------------------------------
base/__init__                    0      0   100%
base/settings                  121     17    86%
base/urls                        4      0   100%
...
utils/__init__                   0      0   100%
utils/api                       41     30    27%
utils/decorators                12      4    67%
utils/encoding                  58     44    24%
utils/files                     61     48    21%
utils/titlecase                128     89    30%
...
------------------------------------------------
TOTAL                         2099    924    56%

Annotating copies of our code

Finally, coverage annotate creates a sibling .py,cover file for every .py file it has seen. In these siblings, each line of code is prefixed with > if the tests hit that line at least once, or with ! if the line was never hit at all.

If your report shows that less than 100% of a file has been covered, open that file's corresponding .py,cover sibling and see which lines have and haven't been hit.

➫ cat src/utils/decorators.py,cover
> from django.utils.decorators import available_attrs
> from django.http.response import HttpResponse
> from functools import wraps
> try:
>     import json
! except ImportError:
!     from django.utils import simplejson as json


> def return_json(view_func):

>     def wrapped_view(*args, **kwargs):
!         res = view_func(*args, **kwargs)
!         return HttpResponse(
!             json.dumps(res), content_type="application/json")
>     return wraps(view_func, assigned=available_attrs(view_func))(wrapped_view)

Keeping leftovers out of our repository

All these reports leave a few files lying around in your codebase. I've added the following two lines to our .gitignore file to make sure they don't get checked into git.

➫ cat .gitignore
.coverage
*,cover

Did we test everything?

The key thing to remember about coverage reports is that they can only report on code they know about: a module that is never imported during the test run won't appear in the report at all.

Consider these files:

➫ ls src/utils/*.py
src/utils/__init__.py
src/utils/api.py
src/utils/data.py
src/utils/decorators.py
src/utils/encoding.py
src/utils/files.py
src/utils/serializers.py
src/utils/tests.py
src/utils/time.py
src/utils/titlecase.py

If tests.py only imports specific modules or methods, then those are the only ones that will be counted. Instead, import everything from every file so each module at least shows up in the report:

➫ head src/utils/tests.py
from utils.api import *
from utils.data import *
from utils.decorators import *
from utils.encoding import *
from utils.files import *
from utils.serializers import *
from utils.time import *
from utils.titlecase import *
...

Closing thoughts

Test-driven development gives us the confidence to change our code and deliver a better application for our users. If you agree with this statement I'd like to hear from you. We're always on the lookout for test-driven developers. Our development team is based here in Tallinn, Estonia but includes people from different backgrounds and countries including Canada and the UK.
