
Quantifying unit test coverage

Posted 4 July 2009 in python

I've really been working hard to make sure that every piece of code that I commit to listparser is backed by unit tests to ensure that the code does what I expect it to do. But a while back I had the idea that it would be great to have some kind of program that could watch the unit tests run and produce a webpage showing me exactly what lines of code did and did not get tested.

Today the thought occurred to me again, and within minutes I found coverage.py, which does exactly what I had envisioned. It ran my unit tests, produced a highlighted webpage, and showed me that I hadn't tested eight lines of code. Minutes later I was down to five lines of untested code.
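
Here's roughly what that looks like when driven from Python. This is only a sketch: it assumes coverage.py's Coverage class, unittest test discovery, and a tests/ directory, none of which come from the post itself.

import unittest

import coverage

# Measure coverage while the unit tests run.
cov = coverage.Coverage()
cov.start()

# Run whatever tests unittest can discover (the tests/ directory is an assumption).
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner(verbosity=2).run(suite)

cov.stop()
cov.save()

# Write the highlighted HTML report showing which lines ran.
cov.html_report(directory="htmlcov")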

Is a metric like "97% code coverage" a good indicator of the strength of the unit tests? No. Coverage can only measure the lines you actually wrote, and it's trivially easy to produce code that doesn't account for every case:

if everything_is_okay:
    print "sweet"

(In this example, what if not everything_is_okay?!) So, code coverage isn't a guarantee that all your bases are...um, covered. It is, however, another useful tool.
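
To put a finer point on it, here's a contrived sketch (the function and test names are made up): a test that only exercises the happy path still reports 100% line coverage, because the untested behavior lives in a branch that was never written.

def report_status(everything_is_okay):
    # There's no else branch, so the not-okay case silently returns None.
    if everything_is_okay:
        return "sweet"

def test_report_status():
    # Every line above executes, so coverage reads 100%, yet the
    # everything_is_okay=False behavior is never checked.
    assert report_status(True) == "sweet"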
