4

I'm reviewing the backup strategy I set up a couple of years ago... and I'm wondering, how do others perform their backups? Could I be doing something different/better/safer/more economical?


Edit: The main reason for this question is to gather experiences about how others do their backups and how they keep their data redundant and safe.


I store my backups on a USB disk using Backup Exec (we are a small company). How do you do your backups, and does it work?

Dave Cheney
anra
    Worrying about your backup is only a quarter of the answer. You should be worrying about your *restore* procedure. – MikeyB Jun 15 '09 at 18:20
  • I have already completed many successful tests with my restore... One thing that bothers me a little is that it is so hard to do a full restore on a Windows setup. It's much easier on Linux. Why is it so difficult to restore to different hardware? – anra Jun 16 '09 at 05:47
  • Source code visibility for all parts of the Windows boot process would, no doubt, make it easier to do a restore onto unlike hardware, if only because all the little nooks and crannies that cause BOOT_DEVICE_INACCESSIBLE and its ilk could be ferreted out. Linux having a monolithic kernel is, I think, also one of the reasons that it's easier to get it up and running on unlike hardware. I'm with you, though. I've moved Linux boxes from one RAID controller to another, SCSI to SATA, etc, with no major problems. Try that with Windows w/o a lot of headaches. – Evan Anderson Jun 16 '09 at 22:23

12 Answers

5

Lots of suggestions on this question:

Best practices to keep your computer(s) backed up efficiently?

MathewC
  • No, there are plenty of suggestions, but mainly for/from home users. What I'd like to hear about is how you (system/backup admins) do your backups at work. I work at a small company, and sometimes you wonder how the big guys do their jobs. – anra Jun 16 '09 at 05:54
4

I think we need to know a little more about your situation to give you a great answer. Things like the following would help:

  • Size of backup corpus
  • Number of server computers being backed-up
  • Duration of backup window
  • Retention / archival / destruction concerns

I'm very "old fashioned" in my backup strategies, but I've yet to have any failure to restore. Tape and conservative off-site rotation strategies have served us very well.

My smaller business Customers are receiving full daily backups to tape, which are rotated off-site daily. Most have at least two (2) weeks of daily rotation, and may have additional monthly or quarterly rotations. To expedite restores without requiring someone to go off-site to get media, we're usually using a disk-to-disk-to-tape backup strategy. This works well with the Customers who have under 100GB of data. We've used a combination of LTO, VXA, and SDLT tape technologies in single element drives managed by Backup Exec. The cost of the drive and tapes is higher, initially, than other "lower tech" solutions, but we get rock solid backups and restores (and perform periodic test restores just to be sure).

For larger installations, we usually move to single element autoloaders (LTO) and typically perform daily differential and weekly full backups.
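Backup Exec handles the scheduling and catalogs internally; purely as a hedged illustration of the full-plus-differential idea, here is a minimal sketch using GNU tar (paths and schedule are assumptions, not part of my setup):

    # Weekly full backup: --listed-incremental records file state
    # in a snapshot file alongside the archive.
    tar -cpzf /backups/full.tar.gz \
        --listed-incremental=/backups/full.snar /srv/data

    # Daily differential: start from a fresh copy of the full's
    # snapshot each day, so every archive contains all changes
    # since the full (reusing one snapshot file across days would
    # yield incrementals instead).
    cp /backups/full.snar /backups/diff.snar
    tar -cpzf /backups/diff-$(date +%a).tar.gz \
        --listed-incremental=/backups/diff.snar /srv/data

The payoff of differentials over incrementals is at restore time: you need only the last full plus the most recent differential, instead of replaying a whole chain.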

I'll probably be criticized for not being "trendy" and using things like removable hard disk drives, but quality tape technologies have served us very well and have been utterly reliable. LTO, in particular, has been rock solid. We've had a flaky VXA drive now and again, and flaky SDLT tapes, but they've worked well too.

Evan Anderson
  • You say you've had no failure to restore. Have you done a simulation of an "everything is lost, full backup is required" situation? – phuzion Jun 15 '09 at 16:06
  • I've had the unfortunate duty of having to restore the only domain controller in a network from bare metal using Backup Exec (10, at the time), off of tape. When I deployed Backup Exec 9 and four large HP autoloaders to a large automotive Customer a few years ago, I wrote and tested click-for-click (or key-for-key, for the non-GUI boxes) rebuilds for each server computer covered by the backup, starting from bare metal, for all of the 25 servers covered by the backup (including servers that were being backed-up over the network, and servers running non-Windows operating systems). – Evan Anderson Jun 15 '09 at 18:34
  • In short, restoring from bare metal off of tape with Backup Exec (using no special disaster recovery agents) is not only possible, but I've done it multiple times with no ill effects. Being well prepared and knowing the requirements necessary (drivers, RAID volume configuration, partitioning) to restore your server computers is the key to a good restore. – Evan Anderson Jun 15 '09 at 18:36
2
  1. Local backups (network file servers, including source repositories, and the mail server) are taken to another machine via rsync daily, and many snapshots are kept (in case we discover weeks down the line that a file was accidentally edited or deleted), in a manner similar to http://www.mikerubel.org/computers/rsync_snapshots/ (a sketch follows after this list)
  2. Remote backup stage one: most recent copy of local backup sent up to intermediate off-site server via rsync over SSH
  3. Remote backup stage two: copy picked up from intermediate server, via rsync over SSH, by the main off-site backup, which maintains many snapshots like the local backup
  4. Backup testing (file server backups): once a week a script does two "rsync --checksum --dry-run" passes, one between the live filesystems and the intermediate backup server and one between the intermediate and the latest copy on the main off-site backup, and mails me the results (so any major discrepancies will alert me to problems).
  5. Backup testing (mail server): once per day, a few hours after the usual backup runs are due to finish, a VM containing a copy of the mail server, running on the main backup server, restores the latest mail backup to itself. I log into that every now and again (if it is running OK and I can see recent new mail and other changes, I know the mail backup is fine).
  6. Offline backup: a copy of the backups is taken off-site on a USB disk each week (actually, we don't currently do this, but I'll be instigating it once I have time and can grab a company credit card to buy a couple of external drives)
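A minimal sketch of the nightly snapshot in step 1, in the Mike Rubel style via rsync's --link-dest (host names and paths are illustrative, not the actual setup):

    #!/bin/sh
    # Nightly snapshot: unchanged files become hard links into the
    # previous snapshot, so each snapshot browses like a full copy
    # but only changed files consume new disk space.
    SRC="backup@fileserver:/srv/data/"
    DST="/backups/fileserver"
    TODAY=$(date +%Y-%m-%d)

    rsync -a --delete --link-dest="$DST/latest" "$SRC" "$DST/$TODAY"
    ln -sfn "$DST/$TODAY" "$DST/latest"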

Steps 4 and 5 are very important. A backup isn't really a backup unless you have tested it, and the moment you find you need to restore files (or everything) is not a good first time to test! Testing is something people often leave until it is too late to do anything about a backup that hasn't been working.
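Step 4 in sketch form (host, paths, and the admin address are assumptions): thanks to -n (dry run) nothing is transferred; rsync simply lists every file whose checksum differs between the two copies, and the output is mailed.

    #!/bin/sh
    # Weekly verification: dry-run checksum comparison between the
    # live data and the latest backup snapshot; any file listed is
    # a discrepancy worth investigating.
    rsync -anvc --delete live:/srv/data/ /backups/fileserver/latest/ \
        | mail -s "weekly backup verification" admin@example.com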

Steps 2 and 3 may seem like overkill, but they add a little security. As the local servers cannot talk directly to the backup servers or vice-versa, someone who manages to hack into one cannot easily get from there to the other; they reach only the intermediate machine, which the live and backup machines can authenticate against but which cannot itself authenticate against either of them. This avoids the risk of the kind of attack that hit WHT recently (see http://ask.slashdot.org/story/09/03/25/0036211/How-To-Prevent-Being-Hacked-Via-Backups for discussion).
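One hedged way to make that restriction concrete on the intermediate host (rrsync ships in rsync's support scripts; paths and key are illustrative): the live server's key is allowed to run rsync against a single staging directory and nothing else, so even a compromised live box cannot roam the intermediate machine over SSH.

    # ~backup/.ssh/authorized_keys on the intermediate host:
    # this key may only rsync into /srv/staging, with no shell,
    # no port forwarding and no agent forwarding.
    command="/usr/bin/rrsync /srv/staging",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... backup@liveserver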

David Spillett
2

Like Evan, I'm old-fashioned with regard to media - tape has been tried and trusted for decades, and is a great way of providing ultra-cheap and ultra-reliable off-line storage. The current prices of LTO4 are so low that the dollar/pound/euro-per-gig ratio just cannot be beaten.

Lesson 1 with backups is always to keep them as simple as possible; as soon as you start implementing fancy things, you're courting disaster. Simple and boring is the way to go.

Lesson 2 is to keep a daily manual element in the jobs. Automation is fine, but with backups it pays dividends to get people into the habit of at least popping out old tapes daily. This way, there's less risk of them forgetting that once-a-month additional manual task.

Lesson 3 is to back up everything. You can try to be clever and only back up your data partition, but you'll only get away with that on the most basic of file servers. In any other type of restore job, getting the application server's configuration back the way it was is where the hardest work is. For sure there are cases where it's easier, but I prefer not to take chances.

Maximus Minimus
1

I've used Backup Exec before (v7 through 9), but more recently have started using hot-swappable SATA cages for the small-scale servers I work with in conjunction with Acronis True Image.

As some have emphasized, the backup process doesn't mean much unless you've gone through and tried a restore, preferably a full-blown "bare metal" restore from your backup media to ensure you can get everything back up and running as quickly as necessary for your setup.

Horror story: one site I worked with had been using USB drives in conjunction with Backup Exec's backup-to-disk option. They wanted a full-time IT service provider rather than a single contractor, but in the process of handing over the documentation, some important details about the backups got lost in the shuffle. Months passed, and a drive in their one file server's RAID array failed, which was exacerbated by the server vendor's tech coming in apparently high on something and supposedly removing and reinserting the drives in something other than their previous order.

Anyway, they ended up losing about two weeks' worth of data, because the last couple of full backups had failed and no one was keeping an eye on the logs.

Darth Continent
0

Our workstations (and other servers) back up to a designated server with a ton of HD space. That server then keeps a local backup (one week's worth) on an external (FireWire) hard drive, and three times a week (Mon, Wed, Fri) that data is uploaded off-site, where three months of data are retained.

Before the data is moved to the external drive it is encrypted with a local salt, which is known only to 2 people (it's also in the "doomsday" book), and a "salt" that is provided by the user to whom the data belongs. When the data is moved off-site it is encrypted again (management requires it to be encrypted at this stage too, ffs) with a simple Blowfish scheme provided by the off-site service provider, using our own key.
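As a hedged sketch of the off-site stage only (the real implementation is the provider's; file names and key file are assumptions), Blowfish encryption with your own key can look like this with openssl:

    # Encrypt the outgoing archive with Blowfish-CBC using our own
    # key material before it leaves for the off-site provider.
    openssl enc -bf-cbc -salt -in backup.tar \
        -out backup.tar.bf -pass file:/root/offsite.key

    # Decrypt on restore:
    openssl enc -d -bf-cbc -in backup.tar.bf \
        -out backup.tar -pass file:/root/offsite.key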

I've been using this system for about 3 years (the local software I wrote myself) with various off-site providers (I've yet to find one I really like) and it has never once failed me.

Some rules change from place to place; for example, at a financial company I worked with, we backed up from the workstations hourly and the server sent data off-site every 3 hours.

This system is nice because it is cheap and (so far) scales seamlessly.

Unkwntech
0

There are many, many types of backup schemes. Their use really depends on your site and on your available technology.

Fulls, incrementals, and differentials: a differential captures everything changed since the last full, while an incremental captures only what changed since the previous backup of any kind.

Usually daily differentials or incrementals with weekly fulls, trying to keep 2-4 weeks of complete backups.

I really need to ask you one thing: you disconnect the USB drive after the backups are completed, right? If you don't, I really wouldn't call it a backup.

Technically it IS backed up, but you are just asking for trouble if your disk gets corrupted or you get hacked.

Backups should be kept for the worst-case scenario.

Joseph Kern
0

For my own situation, we use Backup Exec to take a daily full backup of everything on the following schedule:

  • Monday, Tuesday, Wednesday and Friday: kept for 7 days
  • 1st, 2nd, 4th and 5th Thursday of the month: kept for 1 month
  • 3rd Thursday of the month, on a 3-month cycle: kept for 3 months
  • Once a year: kept forever

Currently we are using LTO3 tapes, which are sent off-site for one day, so at any given time the last successful backup is off-site.

Tape backups are only used for full system restores of file servers; individual files can be restored with Previous Versions, which is available to administrators and users alike.

SQL Server, the IIS metabase, and MySQL are each dumped to a file nightly and kept for one week; this is on top of the hot backup that Backup Exec takes.
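For the MySQL part, a minimal sketch of such a nightly dump with a built-in one-week rotation (paths are assumptions; SQL Server and the IIS metabase each have their own dump tooling):

    #!/bin/sh
    # Nightly dump; %a expands to the weekday name (Mon, Tue, ...),
    # so each file overwrites itself a week later, keeping exactly
    # seven days of dumps.
    mysqldump --all-databases --single-transaction | gzip \
        > /backups/sql/mysql-$(date +%a).sql.gz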

Richard Slater
0

Also see the discussion here: Setting up a new backup scheme

David Mackintosh
0

There is already a lot of good information here, so I'm going to add two critical points:

  1. Redundancy. You say you're backing up to "a" USB hard drive. Make sure you're backing up to more than one disk on different days. You need to account for the possibility that one or more of your backup drives might go bad. Also, if you've got all your USB hard drives in one box, consider the possibility that you might have a really bad day and drop that box.
  2. Off-site backups. Buildings burn down, flood, are burglarized, etc. Make sure you take your backup media off-site and leave multiple copies of your media off-site at all times.

Carl C
  • We do both. We have many USB disks, we do take them off-site, and yes, I have tested my backups... – anra Jun 16 '09 at 06:14
0

I use duplicity on our production servers. I chose it mainly because it can do differential backups to an FTP server and the backups are encrypted with a GnuPG key.

The differential backups run every night, and a full backup is performed once every 2 weeks.
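A minimal sketch of that schedule (target URL, path, and key ID are placeholders, not the author's): duplicity backs up against the last full each run, and --full-if-older-than forces a fresh full once the previous one is two weeks old.

    # Nightly cron job: routine backup, or a new full every 2 weeks,
    # encrypted to our GnuPG key before leaving the machine.
    duplicity --full-if-older-than 2W --encrypt-key ABCD1234 \
        /srv/data ftp://backupuser@offsite.example.com/backups/data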

Works great!

Alakdae
0

We have our servers running on Hyper-V and take a nightly export of the virtual machines, which we copy to USB HDDs the next day; those disks are stored off-site.

So far we have had no problems restoring: just fetch or set up any machine with Hyper-V, copy the VM files onto it, import them, and off we go.

Sam