
I am working in a small organization with about eight independent Linux servers. While we currently do remote backups among the machines, I was told to make an "emergency backup" of the machines to an external drive.

Here is my current plan:

  • Mount the drive on one of the servers
  • tar and compress the contents of the root directory
  • Use rsync and ssh to transfer the tarball.
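The three steps might be sketched like this. Every path here is hypothetical, and the demo uses throwaway directories in place of / and the mounted drive so the commands are safe to try:

```shell
# Hypothetical paths: on a real server, MNT would be the external
# drive's mount point and the tar source would be / (with excludes).
SRC=$(mktemp -d)                          # stands in for the data
MNT=$(mktemp -d)                          # stands in for the drive
TARBALL="$SRC-emergency-$(date +%F).tar.gz"

echo "dummy config" > "$SRC/fstab"

# Step 2: tar and compress
tar -czpf "$TARBALL" -C "$SRC" .

# Step 3: transfer the tarball with rsync; add '-e ssh' and a
# user@host:/path target to go over the network instead
rsync -a --partial "$TARBALL" "$MNT/"
```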

The backup need not be "conservative." It is only about 500GB of data on a 2TB drive.

Does this constitute an adequate backup plan given the conditions?

jaynp
  • Depends on your application. In many cases, no. Some files may contain inconsistent data making the backup not suitable for data recovery, but it should be fine for restoring the system itself. – jordanm Mar 09 '13 at 03:35
  • I would suggest dar over tar since you're not actually backing up to a Tape ARchive. – remmy Mar 09 '13 at 20:45

3 Answers


I would suggest not. Assuming you are going to tar the entire contents of /, you are going to pull in all sorts of stuff you don't want.

For instance, the /proc directory. And the place where you have mounted the drive: you will end up trying to copy the backup into itself, and then it's turtles all the way down.
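A sketch of the fix: tar's --exclude flags (or --one-file-system) keep the pseudo-filesystems and the backup mount point out of the archive. The tree below is a throwaway stand-in for /, so this runs without touching the real system:

```shell
# Throwaway tree standing in for /; on a real system you would run
# tar from / itself with the same --exclude flags.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc" "$ROOT/proc" "$ROOT/mnt/backup"
echo "real data"   > "$ROOT/etc/fstab"
echo "kernel junk" > "$ROOT/proc/fake"

# Exclude pseudo-filesystems and the backup mount point itself,
# so the tarball does not try to swallow its own output.
tar -czpf "$ROOT/mnt/backup/root.tar.gz" \
    --exclude='./proc' --exclude='./sys' --exclude='./dev' \
    --exclude='./run' --exclude='./mnt/backup' \
    -C "$ROOT" .
```

Listing the result with `tar -tzf` shows etc/fstab present, while proc/fake and the tarball itself are absent.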

rahmu
mjs

One problem with simply performing a full copy of files is the possibility of capturing inconsistent data.

Here's an example of a file inconsistency. If a collection of files, file00001 through fileNNNNN, depend on each other, then an inconsistency is introduced when one of the files changes mid-copy:

  1. copying file00001
  2. copying file00002
  3. copying file00003
  4. file00002 changes
  5. copying file00004
  6. etc...

In the above example, since file00002 changes while the rest are being copied, the entire dataset is no longer consistent. This spells disaster for things like MySQL databases, where tables must stay consistent with their indexes, which are stored as separate files.

Usually what you want is to use rsync to perform a full sync or two of the filesystem (minus things you don't want, such as /dev, /proc, /sys, /tmp). Then temporarily take the system offline (to the end users, that is) and do one more rsync pass to capture the final state of the filesystem. Since you've already made a very recent sync, this pass will be much, much faster, and since the system is offline - and therefore there are no writes - there's no chance of inconsistent data.
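The two-pass approach might look like this. The directories are local stand-ins for the live filesystem and the backup target; on a real system the source would be / and the excludes mentioned above would be added:

```shell
# Local stand-ins for the live system and the backup target.
LIVE=$(mktemp -d)
COPY=$(mktemp -d)
dd if=/dev/zero of="$LIVE/big.img" bs=1024 count=512 2>/dev/null

# Pass 1: bulk sync while services are still running
rsync -aH --delete "$LIVE/" "$COPY/"

# ... stop the services here; a file written during pass 1
# stands in for a late write by a still-running service ...
echo "written during pass 1" > "$LIVE/late-write.txt"

# Pass 2: with the system quiescent, only the delta is transferred
rsync -aH --delete "$LIVE/" "$COPY/"
```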

madumlao

The most relevant pieces are /root (root account settings) and /etc (where most configuration resides). You should also save a list of the installed packages, and keep an installation/rescue CD/DVD at hand for emergency use.

If you need a way of getting the full setup back, the filesystems to save are /boot and / (and /usr if it is separate). Be careful: check whether you have, e.g., databases, websites, or VCS repositories somewhere under /var. Also, the logs under /var/log might have to be kept safe for legal or auditing purposes, or just for later curiosity.
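Capturing the package list might look like this. The output directory is a stand-in for the backup drive, and which command applies depends on the distribution:

```shell
OUT=$(mktemp -d)   # on the real system, a directory on the backup drive

# Record the installed packages so the system can be rebuilt
if command -v dpkg >/dev/null 2>&1; then
    dpkg --get-selections > "$OUT/packages.txt"      # Debian/Ubuntu
elif command -v rpm >/dev/null 2>&1; then
    rpm -qa | sort > "$OUT/packages.txt"             # RHEL/Fedora
fi

# Save the high-value directories mentioned above (errors from
# unreadable files are ignored here for the sake of the demo)
tar -czpf "$OUT/config.tar.gz" -C / etc root 2>/dev/null || true
```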

vonbrand