“Convince me, why should I use Nette Tester, what makes it better than PHPUnit?” I always feel a bit uneasy with these questions because I don't feel the need to convince anyone to use Tester. However, I couldn't do without it. [perex]
Testing itself is somewhat of a cursed topic. For five years, testing has been discussed at every conference, yet in almost no company is code “tested.” I put that in quotes because, in reality, all programmers test every day; what they don't do is write tests in PHPUnit, for many reasons. Truth be told, during the brief period when the Nette Framework was tested with PHPUnit, I too lost my taste for testing. Yet testing is as crucial to the development of a framework as, say, a version control system.
But let's take it step by step. Why do all programmers test, even though they “don't test”?
Imagine you program a function foobar()
function foobar($x, $y) {
// some calculations here
return $val;
}
The first thing every programmer does is to check if it works:
echo foobar(10, 20);
They run it, it prints 30, which is correct, it seems to work, and maybe they try a few other inputs.
In other words, they test the function. So, they are testing!
Then, what happens next is that the test script is deleted. And that's exactly the problem! All the developers' arguments that they don't have time for testing fall apart at this moment, because the reality is that there is time for testing, and tests are even written, but then the programmers delete them. The irony.
A test is not just a class in PHPUnit; a test is also this one-line script. More precisely, the test must also contain information about what the correct return value should be, so it would look like this:
assert(foobar(10, 20) === 30);
assert(foobar(-1, 5) === 4);
...
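Taken together, those one-liners already form a complete, standalone test script. A minimal sketch, assuming foobar() is plain addition, which matches the sample values above (10 + 20 = 30, -1 + 5 = 4):

```php
<?php
// foobar.test.php -- a complete test as a standalone script.
// foobar() is assumed here to be simple addition, consistent with
// the article's examples; the real function could be anything.
function foobar($x, $y)
{
	return $x + $y;
}

assert(foobar(10, 20) === 30);
assert(foobar(-1, 5) === 4);

echo "OK\n"; // reached only if no assertion failed
```

Run it with plain `php foobar.test.php`: silence (plus the final OK) means the function still behaves as documented.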
Sometime in the future, if I change the implementation of the foobar function, or if a colleague modifies it, all I need to do is run this script, and if it throws a warning, I know immediately that the function is broken.
And that's all. This is testing. It's that simple. We all do it, unfortunately, many of you then delete those tests.
Over time, we accumulate a huge number of such test scripts, and the question arises of how to run them all at once. It could be solved with a shell script, but I wrote a PHP script for it. Its significant advantage is that it can run tests in parallel (I typically run 40 threads), which dramatically speeds up testing of the entire suite. It also neatly shows exactly where, and in which file, a failure occurred.
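As a rough sketch of what running such a suite looks like (assuming the test scripts live in a tests/ directory and the tester runner from the nette/tester package is on your PATH; -j sets the number of parallel jobs):

```shell
# Run the whole suite with 40 parallel jobs, as mentioned above.
tester -j 40 tests/
```

The runner collects the results of all scripts and reports which files failed and where.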
Instead of the PHP function assert, I wrote my own functions (the Assert class), which differ mainly in that they clearly and legibly report what the function should have returned and what it actually returned, so I can quickly identify where the problem is.
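An illustrative sketch of that idea, not Nette Tester's actual implementation: a tiny Assert class whose same() method reports both the expected and the actual value on a mismatch. The buggy foobar() below is hypothetical, invented only to produce the kind of failure discussed later.

```php
<?php
// Sketch only -- NOT Nette Tester's real code. Shows the idea behind
// Assert::same(): on mismatch, report expected vs. actual value
// instead of failing with a bare, uninformative assertion.
class Assert
{
	public static function same($expected, $actual)
	{
		if ($expected !== $actual) {
			throw new Exception(sprintf(
				'Failed: %s should be %s',
				var_export($actual, true),
				var_export($expected, true)
			));
		}
	}
}

// Hypothetical buggy function: returns an empty string for negative sums.
function foobar($x, $y)
{
	$val = $x + $y;
	return $val < 0 ? '' : $val;
}

try {
	Assert::same(-1, foobar(0, -1));
} catch (Exception $e) {
	echo $e->getMessage(), "\n"; // prints: Failed: '' should be -1
}
```

The point is readability of the failure: you see at a glance both what came back and what should have come back.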
That launcher, the Assert class, and a few other things make up the aforementioned Nette Tester. It has reliably tested the Nette Framework for four years.
When someone reports a bug in the foobar function, saying it returns an empty string instead of the number -1 for the inputs 0 and -1, I start by verifying it:
Assert::same(-1, foobar(0, -1));
I run it, and indeed, it outputs:
Failed: '' should be -1
So, I wrote a failing test. I didn't do it because the manuals about testing say that a test must fail first, or because I follow TDD, but because I can't think of anything else to do but simply write a short piece of code that checks the bug report, i.e., a failing test.
The bug really exists and needs to be fixed. In the IDE, I start stepping through the code, looking for the issue. (I'll write an article some other time about programmers who code in notepads, whether they're named TextMate or Sublime, instead of a full-fledged IDE, and therefore cannot step through code.) Yes, I could have found the bug without stepping, just by staring at the code and sprinkling in var_dump, echo, or console.log, but it would take much longer. I want to emphasize that stepping and testing are not alternatives but completely different activities that are great to use together.
I find and fix the bug, Assert::same is satisfied, and I commit not only the fix to foobar but also the test file to the repository. Thanks to this, the same mistake can never occur again in the future. And believe me, bugs tend to repeat themselves; the phenomenon even has a name: regression.
All of this might have seemed very obvious to you. And that's good, because it is obvious, and I want to break down the prejudices and fears surrounding testing. But I still haven't answered the initial question: why don't I use PHPUnit? Because it doesn't let me work this straightforwardly.
To test foobar, I would have to write a whole class that inherits from another class whose name I can't remember (well, I would use a template). PHPUnit does not allow tests to be run in parallel, so testing the whole suite takes much longer: in the case of the Nette Framework, about 35 seconds versus 2 minutes, which is a significant difference. Moreover, tests written in PHPUnit can only be run in PHPUnit; they are not standalone scripts, so there's no way to write a failing test and then step through it to hunt down the bug in the way described above.
The simplest solution, therefore, was to write my own trivial testing tool. Over four years, it has gradually evolved into a full-fledged tool, which I no longer develop alone, and thanks to the guys from Oracle, it now has integrated support in NetBeans 8.0. Since it generates output in TAP format, there should be no problem integrating it with other tools either.
I won't convince you to use Nette Tester, but I would like to convince you not to delete the tests you write 🙂