Building an Automated Testing/Quality Assurance System
(Based on my PyCon 2009 presentation.)
What is the value of automating your software testing as much as possible?
At a talk I gave once to a large room of software engineers, I asked for a show of hands: how many write unit tests all the time? And, how many use version control for nearly all their code? Almost every hand went up on both counts. I was proud of them for this, and told them so!
A little later in the same talk, I asked how many had used a code coverage tool on their code in the past week. Only fifteen percent! A tiny fraction. And these were undeniably excellent engineers. What would the ratio have been for a more mediocre group?
Automation is a form of leverage. If each attendee's organization had an automated QA system that simply did a source checkout each night, and executed all tests within a code coverage tool while everyone slept, then everyone would have been able to raise their hands... even if they did not know how to invoke the tool!
This situation demonstrates why an automated testing system is important. Because it silently works for you and your team, every measurable metric is simply there, ready and available. You won't use them all the time. That's fine. They are there when you need them, without distracting you from more important matters.
A great automated quality assurance system will have a number of good ingredients. Let's look at them.
(And by the way, this list is unapologetically opinionated. It's not necessary to agree with each item. But everything is there for a reason. I urge you to at least understand the rationale behind it. Then you are well equipped to decide whether to do it this way or another way. Send questions and/or criticisms to amax@redsymbol.net.)
Prerequisites
Two things must be in place team-wide before a test automation system can work for you. The team must be using source code control, and the team must be writing automated tests.
Without these in place, most of what follows will not be effective.
Intelligent Use of Version Control
Once the system under test and the development team reach a certain size, you start to need at least two kinds of branches in your version control system. First is one or more "mainline" branches. There is one for each particular product; if there are several versions of a product, you will have a mainline branch for each version as well.
You may just have one mainline branch, or you may have many, depending on what your group is doing.
In addition, there will be private developer branches. These are branches each engineer creates and uses on their own while working on particular tasks or tickets.
The QA system will treat these two kinds of branches differently. Every merge into a mainline branch will trigger a battery of tests. At a minimum, all automated tests are executed, and a useful report is generated (see next section).
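What that trigger looks like depends on your version control system. As a minimal sketch, assuming a Git server and a hypothetical run_battery helper that queues the job in the QA system, a post-receive hook might look like this:

    #!/usr/bin/env python3
    # Sketch of a server-side post-receive hook (Git assumed; the idea
    # is the same for any VCS). Git feeds the hook one line per updated
    # ref on stdin: "<old-sha> <new-sha> <refname>".
    import sys

    MAINLINE_BRANCHES = {"refs/heads/main", "refs/heads/release-1.x"}

    def run_battery(branch, revision):
        # Hypothetical helper: enqueue a full test run in the QA system.
        print(f"queued full test battery for {branch} @ {revision}")

    for line in sys.stdin:
        old_rev, new_rev, ref = line.split()
        if ref in MAINLINE_BRANCHES:
            run_battery(ref, new_rev)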
Such a battery is not automatically triggered for each commit to a private developer branch. A developer can, however, request that it be run. In addition, there is some way for a developer to specify that only specific tests are executed... so they do not have to wait long when they know only a few tests are failing.
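The request interface can be as simple as a small command-line client. Here is a sketch, assuming the QA system exposes a hypothetical HTTP endpoint; the URL and flag names are illustrative, not from any particular tool:

    #!/usr/bin/env python3
    # Sketch of a client for requesting an on-demand test run on a
    # private branch. The QA server and its /run endpoint are
    # hypothetical; only the standard library is used.
    import argparse
    import json
    import urllib.request

    QA_SERVER = "http://qa.example.com/run"  # hypothetical endpoint

    parser = argparse.ArgumentParser(description="Request a QA test run")
    parser.add_argument("branch", help="private branch to test")
    parser.add_argument("--test", action="append", default=[],
                        help="run only this test (repeatable)")
    args = parser.parse_args()

    payload = json.dumps({"branch": args.branch, "tests": args.test}).encode()
    request = urllib.request.Request(QA_SERVER, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        print(response.read().decode())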
Reporting
The quality of reporting of results is perhaps the most important aspect of the QA system. The amount of data generated each day can easily become substantial; it is important to put some thought into making the salient information most accessible.
If a unit test fails, and you don't notice, what is its value to you? Zero, of course. That, in a nutshell, is the kind of failure your reporting subsystem must make impossible.
Besides producing good reports that make the relevant information easy to find, the reporting can be active rather than merely passive. When a developer commits to a mainline branch, once all tests have completed, they can receive an email containing a convenient link to the report.
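As a sketch of that active side, the notification step might look something like this, using Python's standard smtplib; the host names, addresses, and report URL layout are made up for illustration:

    # Sketch: mail the committer a link to the finished report.
    # Host names, addresses, and the report URL layout are illustrative.
    import smtplib
    from email.message import EmailMessage

    def notify(committer_email, branch, run_id, passed, failed):
        msg = EmailMessage()
        msg["Subject"] = f"[QA] {branch}: {passed} passed, {failed} failed"
        msg["From"] = "qa-robot@example.com"
        msg["To"] = committer_email
        msg.set_content(
            f"Test run {run_id} on {branch} is complete.\n"
            f"Full report: http://qa.example.com/reports/{run_id}\n")
        with smtplib.SMTP("mail.example.com") as server:
            server.send_message(msg)

Putting the pass/fail counts in the subject line means a developer can often triage a run without even opening the report.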
Build and Installation
An automated QA system works best if build and installation can be fully automated. When I say "build and installation", that is intentionally vague, because what it means specifically will vary with the system under test.
Here is an example of how the process might work with some application that is installed on the end user's computer:
- Source code is checked out from the repository.
- A script executes that builds the product installer, which is a standalone program itself.
- This installer is moved to a test machine.
- The environment of the test machine is reset: any product installed on a previous run is uninstalled, and a final cleanup step handles all the loose ends, including the case where the uninstaller was broken and unable to run.
- The new installer is executed, and the product is installed.
- The tests are run in the new environment.
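A single driver script can walk through this whole sequence. Here is a rough sketch; every command, host name, and path is a placeholder for whatever your product actually requires:

    #!/usr/bin/env python3
    # Sketch of a nightly build-install-test driver. Every command,
    # host name, and path below is a placeholder for your own tooling.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # abort loudly if any step fails

    run(["git", "clone", "https://vcs.example.com/product.git", "work"])
    run(["./work/build_installer.sh"])                  # build the installer
    run(["scp", "work/installer.exe", "testbox:/tmp"])  # move to test machine
    run(["ssh", "testbox", "/opt/qa/reset_env.sh"])     # uninstall + cleanup
    run(["ssh", "testbox", "/tmp/installer.exe", "/quiet"])  # fresh install
    run(["ssh", "testbox", "/opt/qa/run_tests.sh"])     # run the tests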
Notice that at no point is a human required. I don't believe it is always possible to automate everything. But any manual steps act as a bottleneck limiting how well the QA system can serve you. So it pays to minimize them if you can.
Actual Tests
Of course, it would not be much of an automated testing system if it did not run any tests, would it?
Aside from the unit tests, integration tests, and so on that you write, there are many other kinds of measurements you might consider. One important example: code coverage. It is useful to run this at least once a day.
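For Python code, for example, the nightly job might wrap the test run in coverage.py. A minimal sketch, assuming the coverage package is installed and the tests live in a tests/ directory that unittest can discover:

    # Sketch: run the test suite under coverage.py and emit an HTML
    # report. Assumes the coverage package is installed and tests are
    # discoverable under tests/.
    import unittest
    import coverage

    cov = coverage.Coverage()
    cov.start()

    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner().run(suite)

    cov.stop()
    cov.save()
    cov.html_report(directory="coverage_html")  # browsable report
    print("coverage report written to coverage_html/index.html")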
You may want to integrate static source code analysis tools, such as lint - or whatever the equivalent is for each language.
Also in this category are profiling and performance benchmarks.
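These measurements fold into the same harness easily. Here is a sketch that invokes pylint as a subprocess and times one hot code path with the standard timeit module; "ourpackage" and parse_catalog() are hypothetical names standing in for your own code:

    # Sketch: one periodic job that runs a linter and a micro-benchmark.
    # "ourpackage" and parse_catalog() are hypothetical names.
    import subprocess
    import timeit

    # Static analysis: pylint exits nonzero when it finds problems, so
    # capture the output rather than treating that as a fatal error.
    result = subprocess.run(["pylint", "ourpackage"],
                            capture_output=True, text=True)
    print(result.stdout)

    # Performance: time a known-hot code path and record the result.
    total = timeit.timeit("ourpackage.parse_catalog()",
                          setup="import ourpackage", number=100)
    print(f"parse_catalog: {total / 100:.4f}s per call")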
You probably do not want to run all of these every time someone commits to a mainline branch. Running all tests under code coverage, for example, can take an order of magnitude longer than running just the tests normally, and performance tests are going to be resource intensive almost by definition. A good compromise is to run these high-demand tests periodically, such as once a day.
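One simple way to express this compromise is to tag each suite with the triggers that should run it, and have the scheduler filter accordingly. A sketch:

    # Sketch: map each suite to the triggers that should run it, so
    # per-commit runs stay fast and the heavy jobs run nightly.
    SUITES = {
        "unit":        {"commit", "nightly"},
        "integration": {"commit", "nightly"},
        "coverage":    {"nightly"},   # roughly 10x slower than plain tests
        "performance": {"nightly"},   # resource intensive by definition
    }

    def suites_for(trigger):
        return [name for name, when in SUITES.items() if trigger in when]

    print(suites_for("commit"))   # ['unit', 'integration']
    print(suites_for("nightly"))  # all four suites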
Generality
A good QA system will be language agnostic. This means it is capable of running tests for code in any language. The reason this matters is, basically, you don't know what the future will hold.
When we first built the QA system at SnapLogic, the main product consisted almost entirely of Python code. Less than two years later, however, a large fraction of the source was in Java. Fortunately, we had made no assumptions in the beginning about what language the code would be implemented in. As a result, accommodating the Java code was fairly straightforward.
Usually you want the QA system to be able to test on multiple platforms. This may not be necessary: if you deploy on only one platform, and that will always be the case, it doesn't matter. If your software under development does need to execute on several different operating systems or environments, though, you want your QA system to test them all automatically.
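In practice this often means keeping a declarative list of target platforms and fanning the same test job out to each. A sketch, with hypothetical host names and a hypothetical remote test command:

    # Sketch: fan the same test job out to one machine per target
    # platform. Host names and the remote test command are made up.
    import subprocess

    PLATFORMS = {
        "linux-x86_64": "qa-linux01",
        "windows-2019": "qa-win01",
        "macos-arm64":  "qa-mac01",
    }

    for platform, host in PLATFORMS.items():
        print(f"=== {platform} ===")
        subprocess.run(["ssh", host, "/opt/qa/run_tests.sh"], check=False)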
Extensibility
Finally, a good QA system framework is very extensible. As your system under test evolves and grows, you will want to be able to add new kinds of tests and procedures. Some of them will be very specific to your project, possibly even unique. Having a flexible automated QA framework will make accommodating these new needs practical.
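One common way to get that flexibility is a small plugin interface: new kinds of checks register themselves, and the harness simply iterates over whatever has been registered. A minimal sketch:

    # Sketch: a minimal plugin registry, so project-specific checks can
    # be added without touching the core harness.
    CHECKS = []

    def check(func):
        """Decorator: register func as a QA check."""
        CHECKS.append(func)
        return func

    @check
    def schema_migrations_apply_cleanly():
        # A project-specific check would go here; return True on success.
        return True

    def run_all_checks():
        for fn in CHECKS:
            ok = fn()
            print(f"{fn.__name__}: {'PASS' if ok else 'FAIL'}")

    run_all_checks()

With this shape, adding a new kind of test is just writing one function; the core of the QA system never needs to change.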