How a Poor Backup Plan Cost BYU Millions

Lighting the Y

May 28th, 2012 became a nightmare for Brigham Young University—a nightmare that will likely continue past Christmas into 2013 and beyond. On that fateful Memorial Day, a failed software upgrade on BYU’s primary server system effectively destroyed hundreds of terabytes of data—including payroll, research, and student records. Backup systems failed to recover most of it.

As a BDR (backup and disaster recovery) professional, and as a BYU alumnus, I feel the intense pain administrators, professors, and students are going through as a result: spring graduation was delayed, 30 departments lost significant amounts of research data, graduate students’ careers were put on hold, research grants failed, and employment opportunities were lost. Disk recovery costs alone are running into the hundreds of thousands of dollars. The total outlay in IT and administrator time, opportunity costs, new hardware, and recreating data will be well into eight figures.

Perhaps BYU sociology professor Vaughn Call summed it up best: “My first reaction was disbelief. I’ve never in my long career been in any circumstance like this. It just brought us to a dead halt.”

Whose business—be it scholastic or commercial—could survive a “dead halt”? A dead halt that extends into its seventh month? Could anything have been done to avoid some or all of this impact?

Three recommendations may well have made Memorial Day 2012 a day of rest rather than a day of rage in Provo:

  1. Select a BDR solution that regularly verifies existing backup files. If the backup files remain intact, recovery is straightforward: swap out the damaged hardware (laborious, perhaps, but simple) and restore the backed-up data. A minimal verification sketch follows this list.
  2. Use replication to a remote site, either via a cloud service or directly to a data center. Not only does replication keep a localized natural disaster from taking out data, it also prevents an onsite system failure from propagating into the existing backup files. A complete set of intact image files stored with a cloud service or at another data center keeps a single system loss from cascading; a second sketch after this list shows the idea.
  3. Most importantly, test any server upgrade against backup images first. Testing and simulation with backup image files is ShadowProtect’s unique strength. Administrators can test individual backup images to confirm that a volume backup is intact and that its contents can be opened, viewed, and retrieved. Using VirtualBoot to bring a backed-up server online as a virtual machine is even more valuable: with that live testbed, administrators can run any software upgrade on the VM first. In BYU’s case, that kind of dry run might have immediately exposed an upgrade that corrupted data and destroyed hard drives.
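The first recommendation is easy to automate. What follows is a minimal sketch, not StorageCraft’s own verification engine: a hypothetical script that hashes each backup image and compares it against a manifest recorded when the backups were written, so silent corruption is caught long before anyone needs to restore. The directory and manifest paths are assumptions.

```python
# verify_backups.py -- hypothetical sketch: catch silent corruption in backup images
# by comparing SHA-256 hashes against a manifest recorded at backup time.
import hashlib
import json
import sys
from pathlib import Path

BACKUP_DIR = Path("/backups/images")      # assumed location of the backup image files
MANIFEST = BACKUP_DIR / "manifest.json"   # assumed format: {"filename": "sha256-hex", ...}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large images don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> int:
    expected = json.loads(MANIFEST.read_text())
    failures = []
    for name, recorded_hash in expected.items():
        image = BACKUP_DIR / name
        if not image.exists():
            failures.append(f"MISSING  {name}")
        elif sha256_of(image) != recorded_hash:
            failures.append(f"CORRUPT  {name}")
    if failures:
        print("\n".join(failures))
        return 1  # non-zero exit so cron/monitoring can raise an alert
    print(f"All {len(expected)} backup images verified.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Scheduled nightly, a check like this turns “we think we have backups” into “we verified our backups last night.”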
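The second recommendation can be prototyped just as simply. The sketch below is an illustration under assumed names and paths, not a replacement for a purpose-built replication product: it pushes the verified images to an offsite host over SSH using rsync, so a failure on the primary system cannot touch the remote copies.

```python
# replicate_offsite.py -- hypothetical sketch: mirror backup images to a remote site
# with rsync over SSH so a local failure cannot reach the offsite copies.
import subprocess
import sys

SOURCE = "/backups/images/"  # trailing slash: copy the directory's contents
DESTINATION = "backupuser@offsite.example.edu:/replica/images/"  # placeholder remote target

def replicate() -> int:
    cmd = [
        "rsync",
        "-a",           # archive mode: preserve permissions and timestamps
        "-z",           # compress in transit
        "--checksum",   # compare files by checksum rather than size and mtime
        "-e", "ssh",    # tunnel over SSH
        SOURCE,
        DESTINATION,
    ]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    sys.exit(replicate())
```

In practice the remote end should also keep versioned or read-only copies, so a corrupted local image cannot simply overwrite the last good one.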
Dave Doering

Dave Doering is a fifteen-year veteran of the storage industry and a senior writer at StorageCraft. The author of two textbooks on network administration, Dave is a frequent speaker at conferences and the host of the popular podcast “It’s Never Boring with Dave Doering”.



One Response to How a Poor Backup Plan Cost BYU Millions

  1. Matthew Rayback says:

    I can’t believe stuff like this still happens. How can any organization of this size think so little of its data?
