8

I normally use machine A, and I make backups of A onto a fileserver B. Sooner or later, I will lose machine A for one reason or another. Its hard drive wears out, or it gets hit by lightning, or some salesperson convinces me that it's an embarrassing piece of obsolete junk, or an overclocking experiment goes horribly wrong, or it suffers a "glitter-related event", etc.

Let's assume that computer C is totally different from computer A -- different mass storage interface, processor from a different company, different screen resolution, etc.

Is there an easy way to make a list of all the software currently installed on A before disaster strikes, in a way that makes it easy to install the same software on the blank hard drives of computer C? Or better yet, makes it easy to install the latest versions of each piece of software, and the specific sub-version optimized for this particular machine C?

If I have plenty of space on B, it seems easiest to copy everything from A to B. If I do that, what is a good way of separating the files I want to copy from B to C from the files I don't? I don't want to copy binary files that I can easily re-download (and possibly re-compile) as needed and that probably wouldn't work on machine C anyway. Or is it better in the long run to try to avoid backing up such easily-obtained machine-specific binary files onto B in the first place? Is there a better way to reduce the chances that viruses and trojans get passed on to C and re-activated?

When I customize software or write fresh new software, what is a good way to make sure the tweaks I have made get backed up, transferred to the new machine, and installed? What about things like cron and anacron tasks?

What can I do to make my transition to some new computer C safe and smooth?

(This question expands on a sub-question of "Incremental system backup and restore w/ rsync or rdiff-backup issues" that I thought was particularly important).

  • Are you talking about a specific distro, or is this just hypothetical? – phunehehe Feb 27 '11 at 05:31
  • I currently have a few bits of software running on a Fedora 7 i686 box, and I want to move "the good stuff" to a Fedora 14 x86_64 box. Most software I use claims it works with most Linux distributions, so I hoped that, with a bit of careful planning, the same process with only minor changes would also work for switching from any distro to any other distro. – David Cary Feb 28 '11 at 23:00
  • 1
    I'm looking more for "a summary checklist of the entire process so I can be confident I haven't forgotten some important step", and less "what command do I use for step 3?" I've posted three very different approaches as "answers" -- alas, each of them somewhat flawed. Which approach is the best? Or is there yet another superior approach, and if so, what is the entire process of that approach (in summary checklist form, please)? – David Cary Mar 01 '11 at 17:59

6 Answers

5

$HOME in version control

  • Periodically commit everything in the /home directory to the version control repositories. (Except always do "make superclean" just before committing the programmer's $HOME directory, so he never commits binary executables or other easily machine-generated files.) A rough sketch of this workflow appears after this list.
  • For each user that "owns" unique data on my work computer, make sure there is some sort of version control repository on my file server that contains that user's entire $HOME directory ("$HOME in subversion"). Even though I'm practically the only human that touches this keyboard, I have a separate user for: the untrusted web browser who likes to install lots of potentially malware-infected games; the C programmer who often writes software with horrific bugs, so we want to keep him isolated in a sandpile where he can't accidentally delete my favorite web bookmarks; the robot user that runs the wiki; the root user; etc.
  • keep all "my" files in my $HOME directory, which is a version control working directory.
    • text files, photographs, web browser bookmark file, etc. -- all in the home directory.
    • if I write a batch script that "needs" to go in some other subdirectory, keep the master copy somewhere in my $HOME directory, and make a soft link from that other subdirectory to the master copy.
    • If I write compiled software that "needs" to go in some other subdirectory, keep the master source code and Makefile in some subdirectory of my $HOME directory, and set up the Makefile so "make install" automagically installs the binary executable in that other directory.
    • If I fix bugs in some software, pass the bug fixes upstream.
    • keep a list in some text file in my $HOME directory of "apps I like that weren't installed by default" and "apps that are normally installed by default but I don't like". See How do you track which packages were installed on Ubuntu (Linux)? or How do you track which packages were installed on Fedora (Linux)?
  • if I purchase software on CD or DVD, back up the ISO image on my file server, and install from that image onto my work computer. (Because my work computer doesn't have an optical drive).
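
A rough sketch of this workflow, assuming git (the "subversion" mentioned above works the same way in outline) and a bare repository that has already been created on the file server; the paths and remote name are placeholders:

    # one-time setup: turn $HOME into a working copy
    cd "$HOME"
    git init
    printf '*.o\n*.pyc\n*.so\n' > .gitignore          # skip easily-regenerated binaries
    git remote add fileserver ssh://davidcary@my_local_file_server/srv/git/home.git

    # periodic commit, after cleaning machine-generated files
    make -C ~/src superclean                          # the "superclean" target mentioned above
    rpm -qa --qf '%{NAME}\n' | sort > installed-packages.txt   # raw package list, a starting point for the "apps I like" file
    git add -A
    git commit -m "periodic home snapshot"
    git push fileserver master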

Later, when machine A is lost,

  • install the latest version of whatever distro is my favorite this week, including all the default software, on machine C.
  • For each user, do a version control checkout of the latest version (HEAD), except somehow (?) skipping over all binary executables. This blocks some kinds of viruses and trojans from spreading to C.
  • set up other stuff outside my home directory:
    • check my list of apps, uninstall stuff I don't want.
    • check my list of apps, install the latest version of stuff I want. Hopefully the latest version includes the bug fixes I've passed back. (See links above for ways to automate this process)
    • do "make superclean" and "make install" with each of the compiled programs I've written.
    • somehow (?) remember where the batch scripts "need" to go, and create a soft link from that location to the master source in my /home/ directory. (Is there a way to automate this?)
    • somehow (?) remember all the stuff I have running as cron and anacron jobs, and enter them back in again. (A rough sketch of one way to handle this follows this list.)
    • install software that was purchased on CD from the ISO image on the file server.
  • ... is there anything else I'm missing?
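
For the cron/anacron item above, one hedged approach is to keep the crontab itself as a file inside $HOME (so it rides along in version control) and re-install it on machine C; filenames are examples:

    # on machine A, whenever the crontab changes
    crontab -l > "$HOME/config/my-crontab.txt"

    # on machine C, after checking out $HOME
    crontab "$HOME/config/my-crontab.txt"

    # system-wide anacron jobs live outside $HOME; keep copies there too
    cp /etc/cron.daily/my-backup-job "$HOME/config/cron.daily/"          # hypothetical job
    # later, on C: sudo cp "$HOME/config/cron.daily/my-backup-job" /etc/cron.daily/
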
3

First go and read previous threads on this topic: Moving linux install to a new computer (about the same-architecture case), and How do I migrate configuration between computers with different hardware?. Here I'm going to address a few minor points that weren't covered before.

If you're moving to a computer with the same architecture, and your disk hasn't died, just move the disk into the new machine. This can be done completely independently of moving the data to a larger disk. Note that “same architecture” here means the processor architecture type, of which there are only two in current PCs: x86-32 (a.k.a. i386, ix86, IA-32, …) and x86-64 (a.k.a. amd64, Intel 64, …). Things like specific chipset or processor variant, video devices, storage interfaces, etc, don't matter here. (If the storage interface is incompatible¹, or if one of the computers is a laptop, you'll have to find an adapter or copy across the network.)

For backing up in case your drive fails (it's one of the most fragile components), you have two choices:

  • Make a bit-for-bit copy of the whole disk or partition. Then you can restore directly, or even run from the backup in an emergency. If that's your strategy, you'll still want a file-level tool for incremental updates.
  • Back up your files. To restore, do a fresh install, then restore the files. (A minimal rsync sketch follows.)
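
For the second option, a minimal sketch using rsync over ssh, with --link-dest so that files unchanged since the previous snapshot are hard-linked instead of copied again (hostname and paths are placeholders; in practice the snapshot directories would be dated and rotated):

    # incremental file-level backup of machine A to the file server
    rsync -aAXH --delete \
        --link-dest=/backups/A/previous \
        /home /etc \
        davidcary@fileserver-B:/backups/A/current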

Your default should be to copy everything; there are very few files that need changing when you move to a new computer. You will have to reinstall the OS (with most current unices) if you move from a 32-bit PC to a 64-bit PC and you want to use the new PC with a 64-bit OS, but otherwise any bad experience you may have had with Windows does not carry over to Linux or other unices.

To make it easier to ensure your data is on every computer you use (the old one and the new one, the family desktop PC and your personal laptop, etc.), make sure you customize things in your own home directory rather than at the system level. In Ubuntu or other “user-friendly” terms, this means a customization method where you don't have to enter a password. But do perform the customization at the system level if it's strongly hardware-dependent (e.g. screen resolution).

¹ This is largely hypothetical. Most current desktop PCs still have IDE interfaces and are compatible with all general-public internal hard disks since the late 1980s. Surely you've already upgraded all your earlier PCs.

3

Assuming you are using a Debian-like Linux:

  • periodically on machine A run:

    dpkg --get-selections > /mnt/backup/backup.pkg.lst
    

and keep the backup.pkg.lst file in a safe place.

  • When disaster happens, do a minimal install on machine C (or A), even without a GUI, and run as root:

    dpkg --set-selections < /mnt/backup/backup.pkg.lst
    apt-get update
    apt-get dselect-upgrade
    apt-get dist-upgrade
    apt-get upgrade
    

and restore your /home directory from a backup
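
How the /home restore looks depends on how the backup was made; assuming it is a plain directory tree on the file server reachable over ssh, something like:

    # pull /home back from the file server (hostname and path are placeholders)
    rsync -aAXH davidcary@fileserver:/backups/home/ /home/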

jet
2

All this depends on what package management system your distro uses.

If you're a debianish user, you can use dpkg to get a list of installed packages.

Redhatesque users can use yum to get a list.

For FreeBSD, you can look in /var/db/pkg for a list of installed packages.
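
For concreteness, the commands hinted at above might look roughly like this (a sketch; exact flags vary between versions):

    # Debian/Ubuntu
    dpkg --get-selections > packages.txt

    # Red Hat / Fedora / CentOS
    yum list installed > packages.txt        # or: rpm -qa > packages.txt

    # FreeBSD: each installed package has a directory here
    ls /var/db/pkg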

Majenko
1

bit-for-bit backup

  • Periodically, use "dd" to make a complete bit-for-bit copy of the /home partition on my work computer (or perhaps each and every partition) to a backup file (or files) on my server. (Is there some way to update last month's backup using something like rsync so I don't have to start from scratch every time, speeding this up? Is there some way to do all or most of this in the background, while I'm using my computer?) A rough script for this step appears after this list.
    • Put a liveCD in the working computer, and reboot
    • sudo dd if=/dev/hda | gzip -c | ssh -v -c blowfish davidcary@my_local_file_server "dd of=backup_2011_my_working_computer.gz"
  • keep all "my" files in my $HOME directory.
    • text files, photographs, web browser bookmark file, etc. -- all in the home directory.
    • if I write a batch script that "needs" to go in some other subdirectory, keep the master copy somewhere in my $HOME directory, and make a soft link from that other subdirectory to the master copy.
    • If I write compiled software that "needs" to go in some other subdirectory, keep the master source code and Makefile in some subdirectory of my $HOME directory, and set up the Makefile so "make install" automagically installs the binary executable in that other directory.
    • If I fix bugs in some software, pass the bug fixes upstream.
    • keep a list in some text file in my $HOME directory of "apps I like that weren't installed by default" and "apps that are normally installed by default but I don't like". See How do you track which packages were installed on Ubuntu (Linux)? or How do you track which packages were installed on Fedora (Linux)?
  • if I purchase software on CD or DVD, back up the ISO image on my file server, and install from that image onto my work computer. (Because my work computer doesn't have an optical drive).
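
A rough way to script the periodic image copy from the first bullet above (device names, hostname, and paths are placeholders; run it from the liveCD so the partitions are not mounted read-write):

    #!/bin/sh
    # image each partition of the work computer to the file server, with a dated name
    DATE=$(date +%Y_%m_%d)
    for PART in /dev/sda1 /dev/sda2; do
        NAME=$(basename "$PART")
        sudo dd if="$PART" bs=1M | gzip -c | \
            ssh davidcary@my_local_file_server "cat > backups/${NAME}_${DATE}.gz"
    done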

Later, when machine A is lost,

  • install the latest version of whatever distro is my favorite this week, including all the default software, on machine C.
  • On the file server B, use "mount" with the "loop device" to allow read-only access to the individual files stored inside that backup file (rough sketch below). (For more information on creating and mounting a read-only compressed disk image, see https://superuser.com/questions/254261/compressed-disk-image-on-linux )

Alas, the user number of "davidcary" on my work computer is different from the user number of "davidcary" on my file server -- so it appears that all these files are owned by some other user. Is there a way to fix this, or to prevent it in the first place?
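
A hedged sketch of that loop mount, assuming the image covers a single partition (a whole-disk image additionally needs -o offset=... pointing at the start of the partition); filenames are placeholders:

    # decompress the image (needs enough free space for the raw copy)
    zcat backup_2011_home_partition.gz > home_partition.img

    # mount it read-only on a loop device
    sudo mkdir -p /mnt/old_home
    sudo mount -o loop,ro home_partition.img /mnt/old_home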

  • copy my /home/ directory from that backup file to the /home/ directory of my new work machine, somehow (?) skipping over all binary executables (see the sketch after this list). This blocks some kinds of viruses and trojans from spreading to C.
  • set up other stuff outside my home directory:
    • check my list of apps, uninstall stuff I don't want.
    • check my list of apps, install the latest version of stuff I want. Hopefully the latest version includes the bug fixes I've passed back. (See links above for ways to automate this process)
    • do "make superclean" and "make install" with each of the compiled programs I've written.
    • somehow (?) remember where the batch scripts "need" to go, and create a soft link from that location to the master source in my /home/ directory. (Is there a way to automate this?)
    • somehow (?) remember all the stuff I have running as cron and anacron jobs, and enter them back in again.
    • install software that was purchased on CD from the ISO image on the file server.
  • ... is there anything else I'm missing?
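
One hedged way to do the "skipping over all binary executables" copy from the first bullet above, using file(1) to spot ELF binaries (paths are placeholders; it copies regular files only and is not a complete malware filter, since scripts pass straight through):

    # copy everything from the mounted backup except ELF executables
    cd /mnt/old_home
    find . -type f | while IFS= read -r F; do
        if file -b "$F" | grep -q 'ELF'; then
            echo "skipping binary: $F"
        else
            cp --parents -p "$F" /home/davidcary/    # --parents recreates the directory structure
        fi
    done
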
1

life in a virtual machine

  • Set up a virtual machine on my work computer. Do all my real work inside that virtual machine.
  • Periodically pause the virtual machine, and back up the virtualized disk and virtual system state to the file server. (Is there some way to do all or most of this in the background, while I'm using my computer, so I only need to pause long enough to back up the last few things?) A rough sketch of this step follows.
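
A rough sketch of that pause-and-copy step, assuming a libvirt/QEMU setup (domain name, disk path, and destination are placeholders; other hypervisors have their own equivalents):

    # save the guest's RAM and device state to a file and stop it
    virsh save work-vm /var/backups/work-vm.state

    # copy the saved state and the virtual disk to the file server
    rsync -a /var/backups/work-vm.state /var/lib/libvirt/images/work-vm.qcow2 \
        davidcary@my_local_file_server:backups/

    # resume the guest from the saved state
    virsh restore /var/backups/work-vm.state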

Later, when machine A is lost,

  • install some convenient host operating system onto the new work machine C.
  • install virtual machine player onto work machine C.
  • Copy the virtualized disk file and virtual system state file from the file server to machine C.
  • Run the virtual machine player to un-pause that virtual machine.

Alas, now C is running all the viruses and trojans that A has collected -- is there a way to block at least some of them?