Cross-browser Testing

For years it's been the bane of every developer's existence; the thorn in the side of every beautiful CSS design. Internet Explorer 6: it's three or four versions old, Microsoft has stopped supporting it, and by all accounts it's nearing extinction. An otherwise flawless design would fall apart in IE6 - sometimes doubling the slicing timeline.
 
It's not dead yet, but fewer and fewer clients are demanding an IE6-compliant site. In most cases, web dev firms have long since stopped recommending it. And yet, while we may dream that one day all browsers will be created equal - we're not there yet. Cross-browser testing remains a reality to ensure your audience enjoys a consistent experience.
 
While it can be a tedious exercise, there are really only three parts to successful cross-browser testing.
 
Define the scope

Which browsers will you support, on which platforms?

Fortunately, the differences between the same browser across platforms are almost non-existent today, but they do still exist (we recently had an issue that only affected Firefox 3.5 on the Mac). W3Schools is a good resource for browser usage stats, and can help you define where to focus your efforts. The cost-benefit of fully supporting Flock 2.0 on Ubuntu might not be worth it.
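One way to make that scoping decision concrete is to set a usage-share cut-off and support every browser/platform combination above it. A minimal sketch of the idea - the browser names and percentages below are invented placeholders; substitute your own site's analytics:

```python
# Invented example usage shares per (browser, platform) pair.
USAGE_SHARE = {
    ("IE6", "Windows"): 4.2,
    ("IE8", "Windows"): 28.0,
    ("Firefox 3.6", "Windows"): 24.5,
    ("Firefox 3.5", "Mac"): 1.8,
    ("Chrome 5", "Windows"): 9.0,
    ("Flock 2.0", "Ubuntu"): 0.1,
}

def supported(share, threshold=1.0):
    """Return browser/platform pairs at or above the threshold, busiest first."""
    return sorted(
        (pair for pair, pct in share.items() if pct >= threshold),
        key=lambda pair: -share[pair],
    )

scope = supported(USAGE_SHARE)
# Flock 2.0 on Ubuntu falls below the 1% cut-off and drops out of scope.
```

The threshold is a judgment call, of course - 1% of a high-traffic site can still be a lot of visitors.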
 
Prepare testing scripts

There are countless online resources to help you prepare your testing scripts. If you're going to be thorough about cross-browser testing, though, every test should be run in parallel across every version of every browser you intend to test, so that you can compare functionality on each test.

If you're in a rush, an old rule of thumb says you can achieve a reasonable level of coverage by testing on the earliest version of IE you're supporting and the most recent version of Firefox (beware: this is by no means a watertight testing method!).
 
Test

This is the most tedious part, and it can be especially difficult when you're running multiple tests across multiple browsers. The best method is to follow each script item across each browser you're testing, so that you can compare functionality on each point as you go. When you document an issue, you can then also indicate whether it is browser-specific or whether it will affect all users. You should also document the steps required to reproduce the problem and, if possible, provide a screenshot. After the issue's been addressed, of course, you need to retest across all browsers to make sure it's a) fixed in the problem browser and b) still working in all the others.
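The fields described above can be captured in a simple issue record. A hedged sketch - the field names here are my own, not from any particular bug tracker:

```python
from dataclasses import dataclass

@dataclass
class QAIssue:
    issue_id: int
    description: str
    browsers_affected: list   # e.g. ["IE6"]; an empty list means it affects all users
    steps_to_reproduce: list
    screenshot: str = ""
    status: str = "open"      # open -> fixed -> retested

    @property
    def browser_specific(self):
        return bool(self.browsers_affected)

# Invented example issue.
issue = QAIssue(
    issue_id=1,
    description="Drop-down menu stays open after mouse-out",
    browsers_affected=["IE6"],
    steps_to_reproduce=["Load home page", "Hover over nav item", "Move mouse away"],
    screenshot="issue-001-ie6.png",
)
```

An issue only moves to "retested" once it passes in the problem browser and everywhere else.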

I find a good basic matrix to work from when documenting QA issues looks something like this (the exact columns will vary, but these cover the essentials):

# | Description | Browser(s) affected | Steps to reproduce | Screenshot | Status
--|-------------|---------------------|--------------------|------------|-------
1 | Drop-down menu stays open after mouse-out | IE6 | Hover over nav item, move mouse away | issue-001.png | Open

One day, all browsers might be fully HTML5-compliant, and free from bugs and quirks, making cross-browser testing somewhat redundant. Until then, it's a necessary evil - don't let a browser-specific bug catch you out!