Jason Deacon Team : Web Development

Automated Testing

Developer: "Okay, I've deployed the site."

Producer: "Er, the homepage is a giant error page."

Developer: "F**K"

Does that scenario sound familiar? Don't be silly, of course it's familiar. Anyone involved in the web industry has had it happen at one point or another, and this blog is going to take a look at why it happens and how automated testing can help reduce the chance of it happening again.

What is automated testing?

The most common form of automated testing is unit/integration testing, which runs tests against the application code. This means we write code which runs other code and checks that the results are what we expect, which tells us the tested code is operating correctly.

Typically you write this code in a dedicated test project which references the application project you want to test. You then write tests which are run by a tool, either from a command line interface or a GUI, and that tool presents you with a list of all the tests which passed and failed.
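To make that concrete, here's a rough sketch of what one such test might look like. The PriceCalculator class and its CalculateTotal method are made up purely for illustration, and the [TestFixture]/[Test] attributes are NUnit's; other frameworks differ, as we'll see later.

using NUnit.Framework;

// A tiny (made-up) piece of application code that would normally live in the
// application project that the test project references.
public class PriceCalculator
{
	public decimal CalculateTotal(decimal price, decimal taxRate)
	{
		return price + (price * taxRate);
	}
}

[TestFixture]
public class PriceCalculatorTests
{
	[Test]
	public void CalculateTotal_AddsTax()
	{
		var calculator = new PriceCalculator();
		
		decimal total = calculator.CalculateTotal(100m, 0.1m);
		
		// 100 + 10% tax should be 110.
		Assert.AreEqual(110m, total);
	}
}

A test runner (command line or GUI) would pick up the [Test] methods in the compiled test project and report each one as passed or failed.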

Unit tests vs integration tests

What's the difference?

Unit test: Tests one specific method that may rely on other methods; however, all of its dependencies are stubbed out (meaning they are fakes, not the actual code that would otherwise execute) so that you can test not only the input/output of the method, but also that it calls the correct classes and operates correctly based on their return values. Mocking/stubbing is an entire blog on its own so I won't cover it in detail here.

Integration test: You call the method you want to test, except everything it depends on is still the actual code that the website uses. You're no longer testing just the specific method, but also all the code down the stack that the method utilises.
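As a rough sketch of the difference (the IUserRepository, GreetingService and repository classes below are hypothetical, and the attributes are NUnit's), a unit test swaps the dependency for a stub, while an integration test lets the real code run:

using NUnit.Framework;

// Hypothetical dependency and class under test, purely for illustration.
public interface IUserRepository { string GetName(int id); }

public class GreetingService
{
	private readonly IUserRepository _users;
	public GreetingService(IUserRepository users) { _users = users; }
	public string Greet(int id) { return "Hello " + _users.GetName(id); }
}

// Hand-rolled stub: fake code standing in for the real repository.
public class StubUserRepository : IUserRepository
{
	public string GetName(int id) { return "Test User"; }
}

// The "real" repository the website would use (simplified here so the sketch compiles;
// in reality it would talk to the database).
public class SqlUserRepository : IUserRepository
{
	public string GetName(int id) { return "Real User"; }
}

[TestFixture]
public class GreetingServiceTests
{
	[Test]
	public void Greet_UnitTest_UsesStubbedRepository()
	{
		// Unit test: the dependency is a stub, so only Greet() itself is exercised.
		var service = new GreetingService(new StubUserRepository());
		Assert.AreEqual("Hello Test User", service.Greet(1));
	}
	
	[Test]
	public void Greet_IntegrationTest_UsesRealRepository()
	{
		// Integration test: the real repository (and everything beneath it) runs too.
		var service = new GreetingService(new SqlUserRepository());
		Assert.AreEqual("Hello Real User", service.Greet(1));
	}
}

In the unit test only Greet() itself is really being exercised; in the integration test a failure could come from anywhere down the stack.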

What's better?

Well, that's a matter of opinion. I typically write integration tests against logic/DAC classes to verify that they perform their tasks correctly, since those tasks are usually multi-step or state-based.

Like most things, however, it depends on the project and the code being tested; it's never a "one size fits all" type of thing.

But wait.. how is that automated?

Yes, you still need to write each test to cover a specific aspect of a specific method, which can be a very laborious task..

Unit/integration tests are only useful when they are actually executed and provide value to the developers working on the project. Having 200 tests which are never run is just as bad as having no tests at all; worse, actually, if you consider the time cost to write and maintain those tests as well.

So the key, then, is to run all the tests automatically. If your project uses a continuous delivery system to automatically build and deploy your project based on source control activity, then that's the perfect place to run the tests.

If your project does not use continuous delivery then you need to rely on process instead of systems, which is a failure in itself but let's not get into that.

The great thing about having your tests run as part of your CI build and deploy pipeline is that a single test failing can be made to halt the build and prevent the deployment of the project, forcing developers to ensure that the code is working by investigating test failures and fixing them sooner rather than later.

It's the old "fail fast" principle: by failing rapidly at the earliest possible moment (when the developer checks the code in), the errors can be fixed before they go anywhere near a live site.

But what's stopping developers from just commenting out broken tests?

Enter Code Coverage

A metric on developer performance! Oh no! Everyone hates metrics on their performance! Whatever will the devs do?!

They'll get better (hopefully).

Code coverage is simple: it's the percentage of application code which is covered by the tests which have been written. 50% code coverage means that 50% of the code in the application is executed by the tests, which means the OTHER 50% could literally be doing ANYTHING when it's executed.

So when Mr Lazydeveloper breaks the build because one of the tests fails, and to "fix" it he just goes in and comments out the test, the code coverage drops from 91% to 89%, which is very transparent and easily followed up on.

That 91% wasn't arbitrary, by the way; a good code coverage minimum is 90%. Some development teams run at a lower minimum code coverage (as low as 65% or so), but the lower that percentage, the more code you have in your application which could cause huge errors with no warning.

But Jason, I wrote a test but the method still failed for a different reason!

Simply put: write better tests. You should evaluate all possible values for any given input and test your methods in various circumstances to ensure that you are capturing all necessary failure paths (and execution paths, in the case of more complex methods).

The quality and effectiveness of test code, just like real code, depend entirely on the effort and thought put into it when it's written. If you write bad tests then you can't expect to have any confidence in what those tests are testing, and therefore you can't have any confidence in the site you're deploying.

A simple example

Let's look at a very simple example.

Here's the method we want to test; in this scenario it's an extension method.

public static int StringToInt(this string foo) 
{
	return int.Parse(foo);
}

Now let's look at all the ways that this could fail:

1) foo could be null
2) foo could be empty
3) foo might contain characters which can't be converted to an integer
4) foo might contain a number which is too large to be represented by an integer
5) foo could have "10" but return "-10" or something else which is not the correct value (due to random mistakes or changes to the method made without understanding what it's meant to do)

Now let's write a test for the first condition. For simplicity's sake, we're going to assume that any invalid input gives 0 as the output.

[Test]
public void StringToInt_Null() 
{
	string input = null;
	
	int output = input.StringToInt();
	
	Assert.AreEqual(0, output);
}

There, that now tests that the StringToInt() method returns 0 when given a null value. You might notice the [Test] attribute above the method. The name of the attribute you need to decorate your tests with will vary between testing frameworks (as will whether your tests need to be decorated at all), so just substitute where necessary.
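For instance, here's roughly what that same test would look like in xUnit, where the attribute is [Fact], the class needs no decoration at all, and Assert.Equal is the equivalent assertion:

using Xunit;

public class StringExtensionsTests
{
	[Fact]
	public void StringToInt_Null()
	{
		string input = null;
		
		int output = input.StringToInt();
		
		Assert.Equal(0, output);
	}
}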

Now we need to go back and modify the actual method to check for this failure condition and behave appropriately. Here's the updated StringToInt() method with that check:

public static int StringToInt(this string foo) 
{
	if (foo == null)
		return 0;
		
	return int.Parse(foo);
}

Now we have a test which verifies that no nasty exceptions are thrown when our StringToInt method receives a null string as the input.

The process continues for each failure condition or execution path through the method you are testing until you reach the required code coverage level discussed earlier.
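For example, continuing with the same NUnit-style attributes, the next couple of tests (covering the empty-string and invalid-character conditions from the list above) might look like this, and they drive the further changes shown below:

[Test]
public void StringToInt_Empty() 
{
	// Failure condition 2: an empty string should also give 0.
	int output = "".StringToInt();
	
	Assert.AreEqual(0, output);
}

[Test]
public void StringToInt_NotANumber() 
{
	// Failure condition 3: characters which can't be converted to an integer.
	int output = "abc".StringToInt();
	
	Assert.AreEqual(0, output);
}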

Just quickly, here's what the StringToInt() method would look like once failure conditions 1-4 are handled:

public static int StringToInt(this string foo) 
{
	if (string.IsNullOrEmpty(foo))
		return 0;
		
	int value;
	return int.TryParse(foo, out value) ? value : 0;
}

Cases 1 & 2 are covered by the string.IsNullOrEmpty call, and cases 3 & 4 are covered by the use of TryParse instead of Parse, which is the safer and preferred way of converting strings to integers since it doesn't throw nasty exceptions when invalid input is encountered.
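To round those out, one option (using NUnit's [TestCase] attribute; other frameworks have similar parameterised test features) is a single data-driven test that covers several of these inputs at once, including case 5's check that valid strings map to the correct values:

[TestCase(null, 0)]
[TestCase("", 0)]
[TestCase("abc", 0)]                   // invalid characters
[TestCase("99999999999999999999", 0)]  // too large for an int
[TestCase("10", 10)]
[TestCase("-10", -10)]
public void StringToInt_ReturnsExpectedValue(string input, int expected) 
{
	Assert.AreEqual(expected, input.StringToInt());
}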

This particular method would end up with 6-8 different tests to ensure total coverage of all failure scenarios and code paths, but from here on out you know with 100% certainty that it will operate correctly..

.. so long as you run those tests!