What can be done in the event of a catastrophic system failure that renders access to company data impossible? In most cases, this becomes the first “real” test of the company’s data backup strategy, and a panicked IT team jumps into action looking for the most recent tape that hasn’t been lost or corrupted.
A company’s data is only as good as the most recent successful backup. Implementing a cost-effective backup strategy can be very challenging for small and large businesses alike. So how does one tackle this daunting problem?
Step 1: Determine what data needs to be backed up!
It has been my experience that many IT managers or company owners understand that backups are important, but choose to address the issue by painting the problem with a broad brush. Why not just take an image of each of the critical servers and be done with it? Here is why not:
- Storage & Bandwidth: Server images are large and storage is expensive. Why pay for the storage space and/or the bandwidth to transfer images offsite?
- Restoration Compatibility: Although improved in recent years, restoring a server image to new hardware often causes headaches with hardware drivers, domain associations, and system reliability.
- Time management: Creating a server image takes far longer than scripting a backup process that captures only the critical data, which in most cases can be restored to any working server. I would argue that building a new Windows server, installing a fresh SQL Server instance, and restoring the SQL backup files takes less time than restoring a server image (a sketch of such a restore follows this list).
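To make that claim concrete, here is a minimal restore sketch using sqlcmd against a freshly built SQL Server. The server name, database name, paths, and logical file names below are all hypothetical placeholders, not values from any particular environment.

```bat
rem Hypothetical restore of a nightly .bak file onto a newly built SQL Server.
rem NEWSQL01, CompanyDB, and the paths below are placeholders; the logical
rem names used with MOVE must match what RESTORE FILELISTONLY reports.
sqlcmd -S NEWSQL01 -E -Q "RESTORE DATABASE [CompanyDB] FROM DISK = N'D:\Restore\CompanyDB.bak' WITH MOVE 'CompanyDB_Data' TO N'E:\SQLData\CompanyDB.mdf', MOVE 'CompanyDB_Log' TO N'F:\SQLLogs\CompanyDB.ldf', RECOVERY, STATS = 10"
```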
Rather than creating an image, you can use inexpensive or even free command-line utilities such as Robocopy and WinZip to reduce both the backup time and the storage space required. (Reduced backup time also translates to reduced restoration time.)
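As a rough illustration, a scheduled batch script along these lines copies only the critical data folders to a backup share. The source path, share name, and log location are assumptions for the example, not a prescription.

```bat
@echo off
rem Copy only the critical data folders to a backup share (paths are examples).
rem /E copy subfolders (including empty), /Z restartable mode,
rem /R:2 /W:5 limit retries, /LOG+ appends to a log file for later review.
robocopy "D:\CompanyData" "\\BACKUP01\Nightly\CompanyData" /E /Z /R:2 /W:5 /NP /LOG+:"D:\BackupLogs\robocopy.log"
```

Note that Robocopy exit codes of 0 through 7 indicate varying degrees of success; 8 or higher means at least one copy failed, which is worth checking in the log.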
Let’s use a Microsoft SQL Server as an example. Company X uses SQL Server to store a large amount of its company data. We have already determined that imaging the SQL server is not our most efficient backup solution.
MS SQL has a wonderful backup tool built in. So the first thing we do is schedule a nightly database backup for all of our critical databases. For argument’s sake, we’ll have these first backups land on a storage partition local to the SQL server.
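A minimal sketch of that first step, assuming a local D:\SQLBackups partition and a database named CompanyDB (both placeholders), run nightly from Windows Task Scheduler or a SQL Server Agent job:

```bat
@echo off
rem Nightly full backup of a critical database to a local partition.
rem CompanyDB and D:\SQLBackups are placeholders for your own environment.
sqlcmd -S localhost -E -Q "BACKUP DATABASE [CompanyDB] TO DISK = N'D:\SQLBackups\CompanyDB_Full.bak' WITH INIT, STATS = 10"

rem One-time registration as an 11 PM nightly task (run from an elevated prompt):
rem schtasks /Create /TN "Nightly SQL Backup" /TR "D:\Scripts\sql_backup.bat" /SC DAILY /ST 23:00 /RU SYSTEM
```

Repeat the BACKUP DATABASE line (or loop over a list) for each critical database rather than backing up everything on the instance.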
Depending on the amount of data and the amount of storage space available to the company, we can now schedule a command-line compression script to compress the SQL database backups to a more manageable size.
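For example, assuming a command-line compression tool is installed (7-Zip is used below purely as a stand-in for WinZip’s command-line add-on), something like this can zip the previous night’s .bak files before they are copied offsite:

```bat
@echo off
rem Compress last night's .bak files into a single zip (paths are examples).
rem Uses 7-Zip here as a stand-in for WinZip's command-line add-on; adjust the
rem archive name (e.g. append a date stamp) to fit your retention scheme.
set BAKDIR=D:\SQLBackups
"C:\Program Files\7-Zip\7z.exe" a -tzip "%BAKDIR%\CompanyDB_nightly.zip" "%BAKDIR%\*.bak"
```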