Kent Beck told me years ago that if the code does not need to work, there is no need to test it. Then he observed: why bother writing it if it does not need to work? Since hearing that, and discovering how frequently I make coding mistakes, I want thorough tests.
Maybe you are asking yourself, “I’ve got integration and system tests; why do I need unit tests?” The answer is simple: simple math.
The only practical way to know that every line of code is doing what you think it should do is to have unit tests that exercise and check each line. Higher-level tests, because of the number of combinations, cannot practically cover each line as the system grows larger and object interactions increase. For example, how many tests would be needed to thoroughly test these three objects if you test them together?
Did you do the math? It requires multiplication. You would need a thousand tests to thoroughly test this simple network of objects. In most cases, that is not practical. Who would spend the time to write 1000 tests for such a simple system? (Maybe someone in medical electronics, aviation, or space travel.)

Now consider a unit test strategy for the same three objects.

The numbers game works in favor of unit testing. Addition replaces multiplication in the test count: a total of 30 unit tests fully tests each object. Then you’d be smart to write the necessary higher-level tests to check that each interface is being used properly and that the system is meeting its requirements.
It’s not a matter of needing one and not the other. Unit tests and higher-level tests are both needed; they just serve different purposes. Unit tests tell the programmer that the code is doing what they think it should do, and the programmer writes those tests. Higher-level tests (by whatever name you like: BDD, ATDD, acceptance, system, integration, and load tests) cannot be thorough, but they demonstrate that the system is meeting its requirements.
Here is a good question, and my reply, from a recent attendee of my Test-Driven Development for Embedded C training.
As I work more with TDD, one of the concepts I am still struggling to grasp is how to test “leaf” components that touch real hardware. For example, I am trying to write a UART driver. How do I test that using TDD? It seems like to develop/write the tests, I will need to write a fake UART driver that doesn’t touch any hardware. Let’s say I do that. Now I have a really nice TDD test suite for UART drivers. However, I still need to write a real UART driver…and I can’t even run the TDD tests I created for it on the hardware. What value am I getting from taking the TDD approach here?
I feel like for low-level, hardware touching stuff you can’t really apply TDD. I understand if I didn’t have the hardware I could write a Mock, but in my case I have the hardware so why not just write the real driver?
I am really confused about this…and so are my co-workers. Can you offer any words of wisdom to help us see the light?
Seeking the Light
Hi, Seeking the Light.
I am happy to help. Thanks for the good question.
Unit tests and integration tests are different. We focused on unit testing in the class. You test-drove the flash driver Tuesday afternoon; that exercise showed you how to test-drive a device driver from its spec. You mocked out IORead and IOWrite, not the flash driver. You test-drove the flash driver so that when you go to the hardware, you have code that is doing what you think it is supposed to do.
The unit tests you write with mock IO are not meant to run against the real IO device; they run with the fake versions of IORead and IOWrite. You could run the test suite on the real hardware, but the unit tests would still use the mock IO.
I think the flash driver exercise illustrated the value. Pretty much everyone who does the flash driver exercise cannot get the ready loop right without several attempts. Most end up with an infinite loop, or a loop that does not run at all. With the TDD approach, we discover logic mistakes like that during off-target test-driving, where they are easy to identify and fix thanks to the fast feedback TDD provides. Finding the problem on-target, with a lot of other code that can also be wrong, is more difficult and time consuming. If your driver’s ready check resulted in an infinite loop, that can be hard to find; maybe your watchdog timer keeps resetting the board while you hunt for the problem. Bottom line: it is cheaper to find those mistakes with TDD.
TDD can’t find every problem. What if you were wrong about which bit was the ready bit? An integration test could find it. An integration test would use the real UART driver with the real IORead and IOWrite functions. These tests make sure that the driver works with the real hardware. These are different than the unit tests and are worth writing. You could put a loopback connector on your UART connector. Your integration test could send and receive test data over the loopback. If your was looking at the wrong bit for the ready check, you would still have an infinite loop, but that happens only if you mis-read the spec. You’d have to find that mistake via review or integration test.
An integration test may be only partially automated. You don’t need to run these very often, so partial automation should be OK; you would rerun them only when you touch the driver or are preparing a release. (Loopback is probably better in this case, as it can run unattended.) So the test might output a string to a terminal and wait for a string to be entered. Depending on the other signals your driver supports, you may want to break out and control those signals in a physical test harness.
An integration test for the flash driver would exercise the flash device through the driver. You might read and write blocks of values to the real flash device. You might do the flash identification sequence. You might protect a block and try to write to it; your integration test would make sure the modification is prevented and the right error is reported. These tests use the real versions of IORead and IOWrite and run only on the hardware. When integration problems are found, solve them, then go back to the unit tests and make them reflect reality. You will know which tests need to change, because once the integration problems are fixed, the associated unit tests will fail.
Some other words in your question make me want to talk about a fake UART driver. You will want a fake UART driver when you are test-driving code that uses the UART driver. For example, a message processor that waits for a string will be much easier to test if you fake the get_string() function. You can build that fake with a mocking tool or craft it by hand, depending upon your needs.
All that said, in general the tests above the hardware abstraction layer (the layer your UART driver is part of) are the most valuable tests. They should encompass your product’s intelligence and uniqueness. Hardware comes and then it goes, as do the drivers as the components change. Your business logic has, or should have, a long useful life; the business logic of a successful product should outlast any hardware platform. Consequently those tests have a longer useful life too. If I were creating a driver from scratch, I would use TDD because it is the fastest way for me to work, and it results in code that can be safely changed as I discover where my mistakes are.
I hope this helps.