Robert Beerworth | 21/06/2012
Wiliam builds some fairly large and deep websites these days, many of which are mission critical.
- We launched the NBN (National Broadband Network) online.
- We launched Nine Entertainment Co’s group-buying platform, Cudo.
- Some of our key eCommerce websites turn over millions of dollars a week.
- We’ve recently launched a global and transactional website for a credit card company.
This means we need to be pretty good at making solid, secure and very robust websites. Websites that basically never go down.
Websites that handle well under strain, especially if you have a traffic spike.
A Current Affair (ACA), the Australian television program, is a great benchmark for traffic spikes.
If a story on ACA is sensational – e.g. save up to 90% on brand-name products – then within 10 minutes, 25,000 viewers are on their laptops and iPads looking for the offer. It is remarkably consistent: 25,000 users, almost every time.
We call that a traffic spike, except it isn’t so much a spike as it is a shark’s fin. Traffic shoots up in a straight line, then tapers off over hours or even days. You can see an example below of a traffic spike – extraordinary traffic hitting a server.
See, it does look like a shark’s fin:
A good friend’s ‘how to save money’ website was featured on ACA and his site was down for two days.
The traffic spike used to be known as the ‘Digg effect’, because it was akin to your website making it to the homepage of the then-popular social sharing website, Digg; your website would be exposed to millions of users and bang, down it went.
Part of the issue is that most web developers have no clue how to build a website designed to withstand a traffic spike, let alone what a traffic spike is.
And server hardware is definitely not the first answer to a traffic spike, nor how you make a website robust.
To make matters worse, once the website is down, it’s down and staying down. There is never enough time to intervene, especially if part of what is being choked is your hardware or network itself.
Indeed, I was asked by an airline last year to advise them on a major competition they had launched, but which had tanked and was still down two days into the competition.
They had used a small web design firm (nothing necessarily wrong with that) who had decided to host the website on what is known as ‘shared’ hosting.
They had purchased a hosting package that promised 50GB of throughput, and they had gambled that this was sufficient.
And it might well have been, except that they needed that capacity right at the start of the competition, and that is not how their hosting package worked.
The maximum bandwidth available to the competition website at any moment was 10MB.
The bandwidth was throttled and nothing worked.
Rookie error. And yet, one of dozens and dozens that can be made.
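A back-of-the-envelope calculation shows why a data allowance is not the same thing as bandwidth. The figures below are illustrative assumptions, not details from the incident: I am reading the cap as roughly 10 megabits per second and guessing an average page weight of 1MB, then applying the 25,000-user spike described above.

```python
# Rough sketch: a generous monthly data allowance means nothing if the
# instantaneous bandwidth is throttled. All figures are assumptions.

link_mbps = 10       # assumed cap: 10 megabits per second
page_mb = 1.0        # assumed average page weight, in megabytes
visitors = 25_000    # the ACA-style spike described in the article

# Link time consumed by one page view (megabytes -> megabits, then divide).
seconds_per_page = (page_mb * 8) / link_mbps

# Total link time to push the whole spike through the throttled pipe.
total_seconds = visitors * seconds_per_page

print(f"Link time per page view: {seconds_per_page:.1f} s")
print(f"Time to serve the whole spike: {total_seconds / 3600:.1f} hours")
```

Under these assumptions the pipe needs over five hours of solid transfer to serve everyone – and the visitors all arrive in the first ten minutes, so connections queue, time out and the site appears dead.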
Anyway, the reason for this blog is that I was caught up yesterday in a matter where a website was hammered to the point of timing out.
The website was not ready for it.
I wrote a blog a few years back that described why websites died and the major culprits.
The moral of the issue I dealt with yesterday: make sure your website is ready before driving crazy traffic at it, and never assume it is until your web developer has assured you, explained what measures they have in place and, finally, you have load tested it with dummy traffic.
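Load testing with dummy traffic can be as simple as firing a burst of concurrent requests and watching the error rate and response times. This is only a minimal sketch, not a production tool – real tests use something like ApacheBench or JMeter against a staging copy of the site, never the live one. To stay self-contained, the sketch here targets a throwaway local server standing in for the website under test.

```python
# Minimal load-test sketch: hit a URL with concurrent dummy requests and
# report successes and the slowest response. The local server below is a
# stand-in for a staging site; point `url` elsewhere at your own risk.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Throwaway local server so the sketch runs self-contained.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    """One dummy visitor: fetch the page, return (ok, elapsed_seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()
            return resp.status == 200, time.perf_counter() - start
    except OSError:
        return False, time.perf_counter() - start

REQUESTS, CONCURRENCY = 200, 20  # scale these up for a meaningful test
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(REQUESTS)))
server.shutdown()

ok = sum(1 for success, _ in results if success)
slowest = max(elapsed for _, elapsed in results)
print(f"{ok}/{REQUESTS} succeeded, slowest response {slowest:.3f}s")
```

If the success count drops or the slowest response climbs as you raise `REQUESTS` and `CONCURRENCY`, that is the shark’s fin arriving early, on your terms, while there is still time to fix it.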
Interested in learning more?
Wiliam is a leading supplier of web solutions and can provide expert advice to assist your business or organisation online.