I hate these posts. I hate them because I feel like I am drawing attention to another post which made me cringe a little bit. But I think that reacting to these posts is good, because having a healthy debate about topics is good, especially if you can keep from devolving into mud slinging and attacks. The post I am talking about is this one, which is titled “It’s OK Not to Write Unit Tests”. I was surprised to see that it is actually from March of this year, and maybe I have seen it before, maybe not, but this time I felt the need to respond to it.
One thing that I want to say first is that I want to keep this calm and mellow. I used to respond very calmly to this kind of post, but over time, as I wrote more and more on my blog, I found myself starting to become the “everything is black and white and I’m going to tell you that you’re wrong with my blaring megaphone” kind of writer that I often loathe when I read. Sure, it might not make the best link-bait in the world, but maybe I’ll feel a bit better after I write it. Oh, and the blood pressure might stay down a little bit. Having said all that, here we go…
Bad Experience Does Not Equal Bad Practice
The first thing I want to point out, which at first glance will seem blindingly obvious, is that a bad experience with a particular tool or practice does not mean it is necessarily bad. It could be bad, but correlation is not causation, and context is everything. The experience we had was bad, but it is often hard to put all of the blame on the tool or practice. For example, what if I went and tried to use Scrum on one of my projects and the entire project failed miserably? Does that mean that Scrum is a bad framework for building applications? Of course not; many people find Scrum very, very effective. And if it works for a lot of other people, then sometimes the best thing to say is “let me see how these other people are doing it” before you decide that the tool doesn’t work for you. You might find out that you were wrong, or you might confirm your beliefs. The important thing is to have an open mind.
We have all heard the saying “when all you have is a hammer, everything looks like a nail”. It is one of my favorite sayings because people don’t spend enough time looking outside of their own boxes in order to discover that the world is full of screws and nuts (pun intended). Not every screw can be banged in with a hammer, but at the same time, you can’t wield a screwdriver like a hammer and expect positive results. In order to be successful with any tool or practice, you need to research in order to figure out how the tool works. Or as almost any developer on IRC will tell you, RTFM.
If we are misapplying tools and techniques, then we are going to have bad experiences with them. And because I believe that testing a system at a low level and testing it at a high level are both important, I am going to outline a few steps that I think need to be taken toward your testing enlightenment.
First Step: Admit You’re Not As Smart As You Think You Are
Unit testing, like many of the other practices that the development community has adopted, is here for one simple reason. We, individually and collectively, are not that smart. We like to think we are smart, but as any Douglas Adams fan knows, the dolphins are the ones who really have it figured out. We write unit tests because all of the pieces that are involved in building large and complex applications just can’t fit in our tiny little heads. You’ve heard of the magical number 7, right? Sure, people will argue it is an old wives’ tale, but the point is the same: we can only hold a finite number of items in our heads. And for most of us, that number is not very big.
In order to alleviate the “tiny head” problem, we need to break things down. But breaking things down won’t help us if we don’t validate, as we go along, that each part is working. Think about it: if we were going to build an engine, wouldn’t it be easier to individually design, build, and test the 150 different parts that make up the whole before we assemble them? Surely it is easier to build an alternator and install it into the engine as a subsystem than to bring in every part that makes it up individually.
That doesn’t mean we aren’t going to fire up the engine after we build it to make sure the whole thing works. But it would be a heck of a lot harder to figure out that the fuel injectors weren’t spitting out enough fuel if we hadn’t already tested them independently as part of their design.
The author makes the point of saying:
Like, let’s say I was writing a SHA-1 hash implementation. That’s a lot of code. And I wouldn’t write little tests all the way down. But I would have a few tests validating that it works at a very high level, absolutely.
Okay, so writing an implementation of SHA-1 is not exactly a common occurrence. And quite frankly, a SHA-1 hash algorithm is actually not that much code. It is very complex, but it is not a huge amount of code. To say that you are going to go through that algorithm without writing any tests as you go tells me that you are going to spend a huge amount of time in the debugger. Or you are a genius. Either way, why not formalize all of the intermediate testing that you are going to do anyway, so that when someone finds a bug in your implementation, they don’t have to retrace your steps?
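To make that concrete, here is a minimal sketch of what formalizing those checks might look like. The `my_sha1` function is a hypothetical stand-in for a hand-rolled implementation (backed here by `hashlib` so the example actually runs); the test vectors are the well-known ones from the FIPS 180 standard.

```python
import hashlib
import unittest

def my_sha1(data: bytes) -> str:
    """Hypothetical stand-in for a hand-rolled SHA-1 implementation."""
    return hashlib.sha1(data).hexdigest()

class Sha1Tests(unittest.TestCase):
    def test_known_vectors(self):
        # Known-answer tests from the FIPS 180 standard. Any regression in
        # the implementation fails here, with no debugger session required.
        self.assertEqual(my_sha1(b"abc"),
                         "a9993e364706816aba3e25717850c26c9cd0d89d")
        self.assertEqual(my_sha1(b""),
                         "da39a3ee5e6b4b0d3255bfef95601890afd80709")

if __name__ == "__main__":
    unittest.main()
```

The nice thing about known-answer tests like these is that the next person who touches the code gets your debugging effort for free.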
Second Step: Admit You Need Training Wheels, Or At Least Some Crutches
The author of the article makes the statement that unit tests are akin to training wheels: something that you should use when you aren’t very good, and then get rid of when you become an expert. I can’t wrap my head around this analogy for a few reasons. First, unit tests are not testing your ability as a developer. They are testing the correctness of your code. Writing correct code is about being a domain expert; writing good code is about being a good developer. The code you are writing probably involves significant business logic that you did not create, in a field that you are probably not an expert in, and is mixed with lots of code that you did not write. And, in the future, it will be modified by people whom you will probably never even meet. And those people will probably be idiots (just sayin).
Let’s just make it easier for everyone and assume that you and I are both going to break your code, and put some harnesses in place that will tell us when we do. And no, I don’t want to find out that I broke your code from a test that is 10,000 lines above where I made my modification. I want to know at a much lower level than that, so I don’t waste a ton of time in the debugger stepping through line by line.
Third Step: Address The Problem Not The Symptom
Brittle tests. Ugh. I’ve heard this argument so many times. “Unit tests are brittle, unit tests are brittle.” Brittle tests are a symptom. Go grab a few books on writing effective unit tests; there are many available. All of them have strategies for mitigating this. Most of the time these strategies don’t involve complaining about brittle tests. They usually involve teaching people how to write tests which don’t depend on external resources, and which operate on public interfaces and not internal implementation details… which leads me to the next step.
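As an illustration of that advice, here is a small, hypothetical example of a test that exercises only the public interface, so it survives changes to the internal representation. All of the class and method names are made up for the example.

```python
class ShoppingCart:
    """Tiny illustrative class; names here are hypothetical."""

    def __init__(self):
        self._items = []  # internal detail -- tests should never peek here

    def add(self, name: str, price: int) -> None:
        self._items.append((name, price))

    def total(self) -> int:
        return sum(price for _, price in self._items)

def test_total():
    # Robust: exercises only the public interface (add / total).
    cart = ShoppingCart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12

test_total()
```

A test that instead asserted on `cart._items` would shatter the moment the internal representation changed (say, from a list to a dict), even though the observable behavior stayed exactly the same. That is the brittleness people complain about, and it is self-inflicted.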
Fourth Step: Understand That Encapsulation Is Your Friend
One of the practices that the author complains about is the process of making private methods public in order to test them. Again, we are looking at the symptoms. First, the only tool or entity that I have ever seen advocate this is Microsoft and MSTest. The tool that MSTest provides which exposes private methods via proxies for unit testing was one of the biggest mistakes they made with their testing toolset. Most unit testing experts will tell you that once you start digging into the internal implementations of classes you are, by definition, creating brittle tests. A private method is a big flag that says “DON’T CALL THIS METHOD EXTERNALLY, EVEN USING REFLECTION.” If you decide to do so, you are doing it at your own risk, knowing that you are creating a dependency on something that may change.
Encapsulation is one of those fundamental concepts that you should go out of your way to not violate. The sooner you learn that you should only be testing the inputs, outputs, and public state of your classes (which you should work to minimize), the happier you’ll be. If you see something in your class that you really need to exercise fully, and you can’t get to it because it is wrapped up in a sea of private methods, it sounds like it is time for some refactoring. Pull it out into a class so that you can exercise the complex behavior in isolation. That is the idea, after all.
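A sketch of that refactoring, with entirely hypothetical names: the complex behavior that used to hide in a private method is pulled out into its own small class, which can then be exercised fully through its public interface.

```python
class DiscountPolicy:
    """Extracted from a former private method so it can be tested directly."""

    def discount_for(self, quantity: int) -> float:
        if quantity >= 100:
            return 0.15
        if quantity >= 10:
            return 0.05
        return 0.0

class OrderCalculator:
    def __init__(self, policy: DiscountPolicy):
        self._policy = policy  # the complex behavior is now a collaborator

    def total(self, unit_price: float, quantity: int) -> float:
        discount = self._policy.discount_for(quantity)
        return unit_price * quantity * (1 - discount)

# The tricky logic is now exercised in isolation, via its public interface:
assert DiscountPolicy().discount_for(50) == 0.05
assert abs(OrderCalculator(DiscountPolicy()).total(10.0, 10) - 95.0) < 1e-9
```

No private members were pried open; the design got better and the testing got easier at the same time.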
Fifth Step: Admit That You Don’t Know Statistics (AKA: Combinatorics Are Not Your Friend)
If you are only writing tests at the system level, then you aren’t thoroughly exercising your system. Simple as that. Have you ever sat down and thought about the insane number of interactions within the applications that you are working on? If you haven’t, take a moment and ponder it. Software projects are often giant spider webs of logic and interaction where the deeper you get into the web, the harder it is to understand how things at the top affect it. If you have ever seen a game of Plinko then you’ll know exactly what I am talking about. You may be able to predict where the chip will go for one or two levels down, but past that point there are simply too many possibilities and too much randomness. Testing “in the small” on classes that are at the lower levels of your system will allow you to have more confidence that you have actually exercised these pieces.
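A quick back-of-the-envelope calculation shows why the web gets unmanageable so fast. The numbers below are made up purely for illustration: with a handful of layers that each have a handful of relevant states, the end-to-end paths grow multiplicatively while per-unit checks grow only additively.

```python
# Hypothetical system: a request flows through 4 layers, and each layer
# has 5 internal states that affect behavior.
layers, states = 4, 5

# Covering every combination from the outside means one test per path:
end_to_end_paths = states ** layers   # 5^4 = 625 system-level scenarios

# Covering each layer's states in isolation is additive, not multiplicative:
per_unit_checks = states * layers     # 5 * 4 = 20 unit-level tests

print(end_to_end_paths, per_unit_checks)  # prints: 625 20
```

Add a fifth layer and the path count quintuples to 3,125, while the unit-level count only grows by five. That asymmetry is the whole argument.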
Sixth Step: Realize That Isolation Is The Only Way To Have Control
We all want control, right? Resorting to only doing “whole system” or “acceptance” testing makes it difficult to test anything in isolation. But why would you want to test in isolation? This is one of the most frequent questions I receive when I talk to people about mocking. And the best answer is that it allows you to control the execution environment. How many times have you seen code where there aren’t any tests covering an error case because there is no good way of forcing the error at test time? Or how many times have you not tested something thoroughly because you couldn’t call out to a web service or read from a file during a test? These are common problems. If you don’t have a strategy for testing pieces of your system in isolation, then you can’t thoroughly test your system.
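Here is a minimal sketch of that idea, using a mock to force an error that would be awkward to trigger against a real service. `WeatherReport`, `ServiceError`, and the client API are all hypothetical names invented for the example.

```python
from unittest import mock

class ServiceError(Exception):
    """Hypothetical error raised by a remote weather service client."""

class WeatherReport:
    def __init__(self, client):
        self._client = client  # injected dependency -- easy to substitute

    def summary(self, city: str) -> str:
        try:
            temp = self._client.current_temp(city)
        except ServiceError:
            return "weather unavailable"  # the branch we want to cover
        return f"{city}: {temp} degrees"

# In a test, replace the real client with a mock and make it fail on demand.
# No network, no flakiness, and the error path is exercised every run:
failing_client = mock.Mock()
failing_client.current_temp.side_effect = ServiceError("timeout")
assert WeatherReport(failing_client).summary("Oslo") == "weather unavailable"
```

Without the mock, covering that `except` branch would mean somehow making a real service time out during the test run, which is exactly the kind of thing people give up on testing.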
Seventh Step: Understand That It Isn’t Just Integration Tests or Unit Tests
Many people, when they argue against unit tests, are merely saying that testing at the level of units is often just too small of a piece of code to be worth it. To a certain degree, I agree with this. Writing tons of tests that test itty bitty pieces of logic will probably not get you too much value in the end, and will probably end up being an anchor around your neck. But many of those same people seem to think that if you aren’t writing tests at the unit level, then the only other option is to write tests at the system level. Argh!
The more you work on a system, the better a feel you will get for the level at which most of your testing needs to occur. For me, that level is mostly the class level, and frequently the sub-system level. Sure, you also want whole-system tests, but as I said earlier, you shouldn’t lean on those too heavily.
Don’t Take My Word For It
In the immortal words of LeVar Burton, “you don’t have to take my word for it”. Go out there and grab a few books like xUnit Test Patterns or Pragmatic Unit Testing. Try to write tests the way that many of the experts out there would prescribe, and see if it doesn’t make your life easier. I know that the author of the referenced article said that he didn’t think fine-grained tests caught regressions, but I have never worked on a system with a good suite of lower-level tests that wasn’t constantly catching unexpected regressions. I just don’t know what to think about that. And that is why I say it is so important to try it out for yourself. I have worked on well-tested projects, and in the end they ended up higher quality, better designed, and more maintainable because of the tests, not in spite of them.