The cost of datacenter downtime has increased more than 40% for many companies over the last three years, according to a recent study by the Ponemon Institute, sponsored by Emerson Network Power. The report analyzed 67 datacenters during 2013 across a variety of industries, ranging in size from 2,500 sq. ft. to over 46,000 sq. ft. Ponemon conducted an identical study in 2010, and the comparison between the two shows that even though outages in 2013 were fewer and slightly shorter, both the cost and the impact of downtime increased significantly.
It should be no surprise that even greater costs were incurred by companies whose revenue models depend on the datacenter's ability to deliver IT and networking services to customers, a category that naturally includes e-commerce, software-as-a-service and similar organizations. Given these findings, it would seem wise for organizations to spend more money preventing outages in the first place.
Today, organizations are increasingly aware of downtime and its impact. Events ranging from the Super Bowl power outage to outages at Amazon and Google have brought downtime to the foreground, reinforcing not only the criticality of availability but also the significant financial cost associated with outages. Uptime is more important than ever, and organizations must increase availability to protect both their money and their reputation.
At Total Uptime, we know that datacenters have outages on a fairly regular basis. We're customers in dozens of datacenters around the world, and while we've enjoyed excellent track records, issues do happen. Fortunately, we've designed for that: thanks to the high level of redundancy built into our architecture and the multi-datacenter footprint we've deployed, outages have almost zero impact on our solutions. Our solutions help organizations deal with downtime by automatically redirecting client and user traffic to alternate datacenters during an incident. Simple solutions like DNS Failover can mitigate an outage in under five minutes, and more advanced solutions like Cloud Load Balancing can bring recovery time down to under a minute.
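The DNS failover approach described above can be sketched in a few lines. This is a minimal illustration, not Total Uptime's actual implementation: the hostnames, IP addresses, and in-memory record store below are all hypothetical, and a real deployment would update records through a DNS provider's API and rely on low TTLs so clients pick up the change quickly.

```python
import urllib.request

# Hypothetical endpoints and record store for illustration only.
PRIMARY_HEALTH_URL = "https://dc1.example.invalid/health"  # primary datacenter
PRIMARY_IP = "203.0.113.10"
SECONDARY_IP = "198.51.100.20"                             # alternate datacenter

dns_records = {"www.example.com": PRIMARY_IP}


def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, refused connection, timeout, etc.
        return False


def failover_check() -> str:
    """Repoint the record at the secondary if the primary fails its check."""
    if not is_healthy(PRIMARY_HEALTH_URL):
        dns_records["www.example.com"] = SECONDARY_IP
    return dns_records["www.example.com"]
```

In practice a check like this runs on a schedule, and the effective recovery time is roughly the check interval plus the record's TTL, which is why low-TTL records are what make failover in a few minutes achievable.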
The study is available at the Emerson website.