StorageCraft Technology Corporation

We know downtime is costly. According to Gartner, downtime can cost as much as $5,600 a minute, which sounds terrifying, but how much does it really cost in your specific situation? There are dozens of downtime calculators online, but they’re typically too simple. Really, they’re marketing tools that illustrate the cost of downtime in general; they’re not business tools for calculating your or your client’s real costs. Every situation is different because of the many variables involved, but if you want the real number, you have to do some math to find it.

Before we begin, it’s worth noting that there are dozens, even hundreds of variables that affect the cost of downtime, and many of them aren’t so easily quantified. We can attempt to arrive at a solid number, but the cost of downtime depends on the type of business, the event that causes downtime, and indirect costs, which we’ll explore later.

Estimated Labor Cost Per Hour

The costs that are easiest to tally relate to labor. Any time employees are not completing work, the company is paying them to perform at a lower level. To get to the estimated labor cost per hour, you need to know:

  • The number of affected employees
  • The average employee wage per hour
  • The average percentage of productivity lost during downtime

With that information, use this formula to find your estimated labor cost per hour:

(Number of employees * Average employee wage per hour) * Average % of lost productivity = Estimated labor cost per hour
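
For example, assuming a hypothetical company with 40 affected employees, an average wage of $30 per hour, and 50 percent lost productivity during an outage:

(40 employees * $30 per hour) * 50% = $600 in estimated labor cost per hour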

Estimated Revenue Loss Per Hour

Next, you’ll need to determine how much revenue you lose per hour of downtime. Understand that this formula gives you an idea of how much revenue a company could lose per hour, but it represents the worst case. If a company suffers downtime, that doesn’t always mean it loses 100 percent of its revenue over the course of the outage. For example, if someone visits a web store for a product but the site is down, they may go to a competitor, but they could also come back later. You may wish to take the total from this section and multiply it by the percentage of revenue you actually expect to lose to get a more accurate picture for your business. In any case, here’s the data you need:

  • Gross annual revenue
  • The number of days per year the business is open
  • The number of hours per day the business is open

With that information, use this formula to find your estimated revenue loss per hour.

(Gross annual revenue / Days per year open) / Hours per day open for business = Estimated revenue loss per hour
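
As a hypothetical example, a business with $5,000,000 in gross annual revenue that is open 250 days per year and 8 hours per day would lose, at most:

($5,000,000 / 250 days) / 8 hours = $2,500 in estimated revenue loss per hour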

Estimated Hourly Downtime Cost

Next, you’ll add the totals from the previous two sections to get the total hourly downtime cost. Use this formula:

Estimated labor cost per hour + Estimated revenue loss per hour = Total downtime cost per hour
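
Continuing with the hypothetical figures from the two examples above:

$600 estimated labor cost per hour + $2,500 estimated revenue loss per hour = $3,100 in total downtime cost per hour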

Duration of Downtime

Now that you know what an hour of downtime can cost, you can multiply that figure by the number of hours the downtime lasts to get the total downtime cost. Downtime events can be brief, but depending on the cause, they can last hours – sometimes days or weeks. As the hours add up, so do the lost revenue and wasted labor. Many businesses are satisfied understanding only the cost of downtime per hour, but there’s one more category worth thinking about: indirect costs.
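
Before moving on to indirect costs, here is a minimal sketch in Python that strings the formulas above together. The function name, the figures, and the pct_revenue_actually_lost scaling factor are illustrative assumptions; substitute your own inputs.

def downtime_cost(employees, avg_wage_per_hour, pct_lost_productivity,
                  gross_annual_revenue, days_open_per_year, hours_open_per_day,
                  hours_of_downtime, pct_revenue_actually_lost=1.0):
    # Estimated labor cost per hour
    labor_per_hour = employees * avg_wage_per_hour * pct_lost_productivity
    # Estimated revenue loss per hour (worst case), optionally scaled down
    revenue_per_hour = (gross_annual_revenue / days_open_per_year
                        / hours_open_per_day) * pct_revenue_actually_lost
    # Total hourly cost multiplied by the duration of the outage
    return (labor_per_hour + revenue_per_hour) * hours_of_downtime

# Hypothetical example: 40 employees at $30/hour with 50% lost productivity,
# $5,000,000 annual revenue, open 250 days x 8 hours, 8-hour outage
print(downtime_cost(40, 30, 0.5, 5_000_000, 250, 8, 8))  # 24800.0

At the hypothetical rates above, an eight-hour outage costs roughly $24,800 before any indirect costs are counted.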

Indirect Costs

Indirect costs are tied to revenue, but aren’t as easy to measure with hard figures. However, they can be more crippling than the loss of revenue itself. Indirect costs add up quickly and can prevent a lot of downstream work from getting done. They can even send your company into a downward spiral. Here are the biggest indirect costs to consider:

Conclusion

It’s easy to see how a few hours of downtime can cause thousands of dollars in loss, and how that cost coupled with indirect costs can put a business in the ground. Backup and disaster recovery solutions that prevent downtime pay for themselves quickly. Investing in these tools is less a costly burden and more a form of insurance. Make sure your business stays hale and hearty for years to come by taking the time to evaluate these costs and how much you can invest in downtime prevention.

Comments

  • Hello,

    I'm just wondering if any of you have actually tested this scenario in the end and come to any conclusion since this article was published.

    Thank you!

    • Hello Octavian,

      Thank you for asking. To be honest, I haven't tested this theory, though it's been on my "to do" list since the question first came up. Have any of our other readers tried storing backup images on a Server 2012 deduplicated volume? I would be interested in at least two aspects of this test: 1) how much storage can be freed using this process (as a percentage of the original data size), and 2) is there any discernible difference in I/O speed compared with a data volume that isn't managed? I'm interested in your comments.

      Cheers!

  • You missed so many important factors. Just don't bother writing an article like this if you don't provide all the information; it's far too dumbed down. You have probably led astray some poor network/system admin who will choose to back up to disk and sacrifice his company's data retention for cost. You don't know what it costs the average company to lose recoverable data.

    • Hi Daniel,

      Thank you for your comments. Yep, there is so much to talk about with this topic. What information would you like to see in more detail? We're always looking to talk about the tech that interests our readers as well as what interests us.

      Cheers!

  • This appears to no longer work on their 6.1 and 6.1.1 versions. I tried FAT32 and NTFS partitions as well.

    It appears they switched to some sort of Linux boot to do this.

    • Hello Greg,

      Yes, there have been some updates to the process since I wrote this article in March of this year. We now have the StorageCraft Recovery Environment Builder for Windows, which does most of the heavy lifting. This means I don't have to come up with creative solutions using unsupported third-party software to create a bootable USB; I can make a bootable USB natively with the Recovery Environment Builder.

      Some of the benefits of using the builder include the ability to add custom drivers to the recovery environment during the build process, faster boot times because each build is language specific, and the ability to leverage the latest Windows PE (currently Windows 8) with the latest Microsoft drivers and security fixes.

      The Recovery Environment Builder creates ISOs using the Windows ADK you have locally installed. These ISO files can be used to boot a virtual machine, or they can be burned to CD/DVD or USB using the Recovery Environment Builder application. StorageCraft also provides an ISO Tool utility, which comes free with StorageCraft ShadowProtect. This tool can rip, burn, author, and mount/dismount ISO files, and it makes a handy addition to your IT toolkit. The ISO Tool can also be used to burn bootable CDs/DVDs using the ISO created by the Recovery Environment Builder.

      Basically, we're trying to make your recovery process as easy and fast as possible, which is why the Recovery Environment Builder now creates customizable ISOs in several "flavors" of the recovery environment (e.g. IT Edition) and burns those ISOs to your available removable media. The builder application is your all-in-one solution for creating a bootable ShadowProtect recovery environment.

      If you want to learn more about the ISO Tool utility, check out this article: http://www.storagecraft.com/blog/the-best-things-in-life-are-free/

      Cheers!

  • I have a question about the following: your use of the word "Host" in between the *stars* (see below)

    5. Regularly check the virtual machines’ event logs for VSS errors as they can indicate problems with the backup. This is good to do because when the *host* machine calls for a backup of the VM, the VM is asked to pause processes while ShadowProtect takes the snapshot

    Don't you mean "Guest"? As per your reasoning in the above statements, the "Host" is only backing up the OS drive. The ShadowProtect Client installed on the VM "Guest" machine calls for the backup itself, not the Hyper-V "Host".

    • You’re correct; we were referring to the guest. But after further review, we noticed that the sentence you pointed out in step five doesn’t quite fit with the remainder of the post, so we’ve removed it. It is, however, still important to check the virtual machines’ event logs for VSS errors; this is just a standard best practice to make sure everything is running smoothly.

  • The price of a microkernelized hypervisor, in the case of Hyper-V, is that it is too large to be loaded fully into RAM. This can have drawbacks if you lose contact with the boot volume. I found an impressive demonstration of this topic on YouTube: http://www.youtube.com/watch?v=E8ZF0ez0iH0
    Given this, it seems VMware still has the better product.

  • Well done to Guy & Casey; it's an excellent eBook, well worth reading and well worth keeping a copy close to hand!

  • I have no bone in this debate. However, I have used both agentless and agent-based backup solutions in my 14-year IT career. I am also a Certified Ethical Hacker and Certified Penetration Tester. That distinction is important to my comments below:

    1- The statement made above, "It’s important to keep in mind that in order to take a true disk image for complete, fast bare metal recovery, something has to be installed on the machine," is false. This can be done agentlessly, with remote capability. I have done this myself.

    2- I have used the security holes claimed above not to exist to break into systems using the usually weak backup passwords. The machine was in fact running ShadowProtect. Yes, the holes exist; yes, it is up to the local IT folks to keep that in mind.

    • Hello David,

      Good points, and we respect your professional opinion. It's true that the perfect system has not been created yet, meaning that every system is imperfect in some way. With this in mind we are attempting to represent the "best" solution based upon the Microsoft Windows architecture and philosophy. Of course, this solution is limited to the underlying OS architecture and any of its inherent weaknesses. You have aptly pointed out one of those weaknesses yourself: that of weak backup passwords. If an administrator chooses not to implement the strongest passwords at their disposal then the administrator presents an opening for unethical and malicious behavior. It should be noted that this is not the fault of the software, but of the human managing the software. The software may be designed perfectly but implemented or secured in a manner which allows for errors or weaknesses.

      With regards to agent-based backups, it is Microsoft's intent that their Windows OS be managed (in this respect, backed up) using agents. They themselves use agents to manage Windows Server backup processes. We understand that it is still possible to create a disk image with an agent-less backup; however, Microsoft's propensity towards agents warrants the use of an agent-based solution. In addition, there are a number of advantages that an agent-based solution offers over an agent-less solution. For example, an agent-based solution (if implemented correctly) can operate at a low level of the OS not available to injected or remote procedure processes. In the case of StorageCraft's ShadowProtect agent, this allows us to directly track changes to the disk and to function as a driver within the Windows OS, resulting in fast and reliable backup images. Other systems which inject agents typically have to traverse the file system looking for changes before they can begin processing a backup, resulting in added overhead and resource consumption.

      As you've pointed out, both solutions can work. And to add to your comments I will point out that the effectiveness of either an agent-based or agent-less solution really depends on the underlying code and how it is implemented. So I guess we come full circle back to the beginning where we both agree that software is only as good as the person designing/using the software. We feel we've built a rock solid agent-based solution founded on Microsoft's platform but designed and implemented by our amazing developers to give our customers fast and reliable backup images which are easy to use and manage. Hopefully this message comes across in our products as well as our literature.

      I would like to personally thank you for taking the time to contribute to our forum. The life of a "white hat" has always intrigued me, as you guys get to use operating systems in ways that many of us can only imagine. We're grateful for your honest commentary.

      Cheers!

  • For a "lover of words", you sure missed this:

    "The brain is so complex that we’re a long way from discovering all of its mysteries, and we might never actually know how much space has."

    Read it slowly...
