
I have a remote server running Linux. I would like to be able to reinstall the OS remotely in case the installation gets corrupted (this has already happened twice while I was experimenting with the OS).

So far, the only way I have is to physically go to the machine's location and plug in a USB disk with the OS image so that the BIOS can see it and boot from it.

Is there any way to connect to the machine via ssh, attach this image, and have it act like a virtual drive (as Daemon Tools does on Windows, for example), so that it persists across a reboot and lets me install the OS remotely?

I was looking for solutions on Google, but all I found was something mentioning PXE boot, which sounds complicated, since you need a server and such; it is not as simple as mounting an image and being done with it.

Beyond that, I found nothing useful, so I am quite short on options. Does anyone know how to accomplish this?

  • Some remote management cards (such as Dell iDRAC) support mounting an ISO remotely. Does your server have a remote management card? – jordanm Jun 29 '15 at 00:26
  • I believe not; I got the server without any documentation, so I am not sure. I have it installed at the office; I will have to check. – rataplan Jun 29 '15 at 00:29
  • Yeah, you just do mount ./btrfsimage; btrfs dev replace "$(losetup -j ./btrfsimage)" /dev/install-disk start if you can accept a btrfs source/target fs. It will just start doing the job immediately in the background, and you can do whatever you want in the meanwhile. It would also work with a booted live-disc as the source (which is obtained above with losetup). – mikeserv Jun 29 '15 at 00:32
  • Thanks Mike; could you please elaborate on what these commands do? I was reading about btrfsimage, and it seems that you need to copy the image onto the remote machine when running this? Sorry, but this is the first time I have done something like this. – rataplan Jun 29 '15 at 02:08
  • I am having trouble understanding what you want to do exactly. You want to install the OS remotely, yes? You got a suggestion for Dell iDRAC in the comments already, but it seems your server doesn't have that. You mentioned network booting (PXE) in your question, but you have trouble with that because you would need a server. But of course you are going to need a server of some kind or other, because if you don't physically walk over to the machine to install it, and it does not itself have a functioning operating system, then it certainly needs to boot off something else. How else could it work? – Celada Jun 29 '15 at 02:14
  • Related: http://serverfault.com/questions/208128/how-to-remotely-install-linux-via-ssh. Also take a look at FOG: https://fogproject.org/wiki/index.php/FOGUserGuide. DRBL too: http://drbl.org/. – slm Jun 29 '15 at 03:15
  • @Celada: Yes, I would like to mount an image, so when I reboot the machine, it will load from the image and allow me to install the OS. I have ssh access to the machine from my admin machine; that is what I use to interact with the remote server, and it would be the "server" where the OS image lives. The remote server has its own OS running (that's how I ssh into it). I am looking for a way to mount a sort of virtual drive on the server, which reads the image on my admin machine and boots from it, so I can run the installer. Hope this clarifies things. Thanks – rataplan Jun 29 '15 at 03:57
  • Well, you can mount something, but obviously when you reboot the server there is no OS running anymore and whatever you mounted won't be mounted anymore. Are you looking for something like kexec? – Celada Jun 29 '15 at 04:35
  • @Celada: Yes, that's my main issue; I need something that would persist across a reboot, so an image can be loaded as if a CD were connected to the server. Would kexec fit my needs? – rataplan Jun 29 '15 at 05:43
  • btrfsimage isn't a thing - it's just a hypothetical filename. Do man btrfs-replace to get a better idea. – mikeserv Jun 29 '15 at 05:45
  • kexec doesn't persist across a reboot - the only thing that does is firmware. All of the offered ideas - DRBL, DRAC - are based around server solutions: you run a server that some preconfigured pre-boot environment knows how to talk to, and have a chat. iPXE would be my recommendation along the same lines. Otherwise you'll need to interface the firmware directly (and iPXE kind of does), which isn't so hard in some cases but is entirely off topic here. Read about the DHCP bootp and tftp protocols. Check out PLoP Linux. The virtual drive you describe is why I suggested btrfs-replace, though. – mikeserv Jun 29 '15 at 05:51
  • Do you want to remotely install just the kernel, or the entire filesystem that comes with the OS, including /usr/bin etc.? – Mark Plotnick Jun 29 '15 at 06:02
  • Thanks Mike, so the whole thing is much more complex than simply mounting a drive remotely. I am used to VMs, so the real-world scenario kind of eludes me. I will check the suggestions and see which one is the easiest to implement. – rataplan Jun 29 '15 at 07:31
  • @MarkPlotnick: ideally I need to mount the image and boot from it, but the local FS on the remote machine is not necessary; I mount the drive and re-partition it at every install. Not sure if this answers your question. – rataplan Jun 29 '15 at 07:34
  • You keep talking about the virtual drive, and that's why I suggested btrfs-replace. btrfs can add devices to a storage array on the fly. It doesn't care if the array is made up of loop mounts either. So if you add two devices to an array - a loop-mounted file with your image and a real disk which you want to target - you can btrfs-replace the loop device in that array with the target disk, and it's all done by the filesystem automatically. So provision an image, mount it, then replace it: that's all you have to do. You can do it every boot. – mikeserv Jun 29 '15 at 07:44
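
For the curious, mikeserv's suggestion boils down to something like the following sketch. The image path and target disk name are hypothetical placeholders; the btrfs replace syntax itself is real:

    # attach the image to a loop device and mount the btrfs filesystem on it
    LOOP=$(sudo losetup --show -f ./btrfsimage)
    sudo mount "$LOOP" /mnt
    # stream the filesystem onto the real target disk; runs in the background
    sudo btrfs replace start "$LOOP" /dev/target-disk /mnt
    sudo btrfs replace status /mnt    # check progress at any time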

2 Answers


Here's a hypothetical situation that I consider plausible:

  1. The targeted machine is EFI.
  2. grub is either never installed on the target or has been utterly wiped from the system.
    • it can only ever interfere and offers nothing of value otherwise.

So what we might do in the above case is configure a boot option for a small installation/rescue image we keep on our /esp or EFI system partition.

If anything were ever to go wrong with our current installation then, for so long as we can still access the EFI system partition by some means, we can interface with our firmware and set the machine to boot to our recovery image on the next reboot. In that case, all we would have to do is change a text file or two, cross our fingers, and run reboot now.

Here is a basic set of commands for a minimally configured Arch Linux (because it's what I use) system which could still do as I describe.

  • First, we'll make a work directory and download some files.

    • I use aria2c here. I recommend it, but use whatever works.
    • I unzip rEFInd with 7za but the same tool preference is yours in all cases here.
    • If you're not reading this within a few hours/days of my posting it, then there is a very good chance that the links used below are not current.

      mkdir /tmp/work && cd /tmp/work || exit
      aria2c  'magnet:?xt=urn:btih:331c7fac2e13c251d77521d2dc61976b6fc4a033&dn=archlinux-2015.06.01-dual.iso&tr=udp://tracker.archlinux.org:6969&tr=http://tracker.archlinux.org:6969/announce' \
              'http://iweb.dl.sourceforge.net/project/refind/0.8.7/refind-cd-0.8.7.zip'
      7za x ref*.zip; rm ref*zip
      
  • Next I'll make an image disk.

    • I use a file here with loop devices, but you may want to use an actual disk if you want to boot to this from firmware.
    • In the case of an actual device, the fallocate and losetup steps can be skipped, and the actual device names will far more likely correspond to /dev/sda[12] than to /dev/loop0p[12].

      fallocate -l4G img
      
  • Now I'll partition that disk with the gdisk utility and assign it to a loop device.

    • This is a scripted shortcut for the options you'd feed the program interactively. It will create a GUID partition table, a partition of type EFI System spanning the first available 750MiB of the target disk, and another Linux default partition spanning the rest of the disk.
      • These partitions will be /dev/sda1 and /dev/sda2 respectively if you're using a real disk, which will be /dev/sda rather than ./img. It is usually desirable to add more than one partition for a linux root, which is assumed to be the purpose of /dev/sda2.
    • printf script or no, the gdisk program is easy to use - and so you might do better to go at it interactively instead. The target disk should not be mounted when it is run, and you'll probably need root rights to write the changes.
    • As a general rule you can do pretty much whatever you want in that program without any effect until you write - so be sure when you do.
    • I'll be keeping my target in the shell variable $TGT. Except for its definition here, which you may want to tailor as necessary, wherever I use it, so can you.

      printf %s\\n o y n 1 '' +750M ef00 \
                       n 2 '' '' '' '' w y |
      gdisk ./img     >/dev/null
      TGT=$(sudo losetup --show -Pf img)p
      
  • We'll need a filesystem on the esp, too. It must be FAT.

    • I give mine the fs label VESP. You should call yours whatever you want.
    • We'll use the label later in /etc/fstab and another config file - so definitely make it something.
    • In my opinion you should always label all disks.
    • If you install an OS to ${TGT}2 now you will of course need a filesystem for it as well.

      sudo mkfs.vfat -nVESP "$TGT"1
      
  • And we'll make some mount directories and start extracting the relevant files.

    # pair each mount point with its source: dir1 src1 dir2 src2 ...
    set     ref     ref*iso         \
            arch    arch*iso        \
            efi     arch/EFI/archiso/efiboot.img
    while   [ "$#" -gt 0 ]
    do      mkdir "$1" || exit      # create the mount point
            sudo mount "$2" "$1"    # loop-mount the image on it
            shift 2
    done;   mkdir esp               # the esp gets mounted in the next step
    
  • Install rEFInd...

    • rEFInd is a boot manager - which mostly just offers and populates boot menus.
    • rEFInd will put its config files on the esp, and these can be edited at any time and any way you like.

      sudo ref/install.sh --usedefault "$TGT"1 &&
      sudo umount ref  && rm -rf ref*
      
  • Now we'll mount our esp and get the needed files off of the Arch install disk to get our own live bootable rescue disk.

    • Most live disks implement a sort of ugly hack to make the flat, unpartitioned iso filesystem look like an acceptable boot device to a UEFI system while still maintaining backwards compatibility w/ BIOS systems.
    • Arch Linux is no exception.
    • This ugly hack is the efiboot.img currently mounted on ./efi. It's where we'll find our kernel and initramfs image file. The other ones on the disk (in ./arch/arch/boot) will not work for EFI systems.

      sudo sh -ec    <<CONF '
           mount    "$1" esp                   # mount the esp (passed in as $1)
           cp -ar    efi/EFI/archiso esp/EFI   # the EFI-bootable live files
           cp -ar    arch/arch/*x86* esp/EFI/archiso
           mkdir     esp/EFI/archiso/cow       # persistent copy-on-write dir
           # xargs flattens the CONF heredoc below into one line of boot options
           xargs   > esp/EFI/archiso/refind_linux.conf
           umount    efi arch
           rm -rf    efi arch*' -- "$TGT"1
      \"arch_iso\" \"archisobasedir=EFI/archiso    \
                     archisolabel=VESP             \
                     copytoram                     \
                     cow_label=VESP                \
                     cow_directory=/EFI/archiso/cow\
                     cow_persistence=P             \
                     cow_spacesize=384M            \
                     initrd=EFI/archiso/archiso.img\"
      CONF
      

You have essentially just installed - from the ground up - a pre-boot rescue environment with a persistent copy-on-write save file (so you might, for example, systemctl enable sshd.socket now and the setting would persist across the live system's next boot). The Arch Linux live install media now resides on your system's boot partition and can be called from the boot menu at any time. Of course, you also installed the boot menu manager.

  • A couple of things about the above should stand out to you:
    • I use *x86* because I have a 64-bit machine and that glob gets what I need. For a 32-bit installation (but why?) use *686* instead.
      • What I need, by the way, is a total of only 7 files and approximately 300M.
      • The live-system's rootfs is the squashed image in esp/EFI/archiso/x86_64/airootfs.sfs.
    • I specify the disk by label. There are no hints or other such nonsense - the disk is named and so it is easily found. You'll need to substitute in whatever you chose for an esp label instead of VESP.
    • The copytoram kernel parameter instructs the Arch Linux live init system to copy its rootfs image into a tmpfs before loopmounting it - which frees you actually to access the esp when working in that environment. Most live install systems offer similarly arranged constructs.

Where EFI shines is in its ability to handle a filesystem. On modern computers there is absolutely no need to pack some raw binary and wedge it in between your disk partitions. It astounds me that people still do, when, instead, they could manage and configure their boot environment with simple text files arranged in a regular, everyday directory tree. Above I put the kernel and initramfs in their own named folder in a central tree structure. The EFI - which will take its cues from rEFInd in this case for convenience - will invoke that at boot by pathname because it mounts the esp.

Now all that is left to do is to ensure you understand how to select the system which will actually boot when you need to. Understand - you can boot this right now. You can do it in a virtual machine w/ qemu (you'll need OVMF -pflash firmware) or you can reboot your computer and rEFInd will detect the kernel and pass its pathname to the firmware which will load and execute the Arch Linux live system. When you install a more permanent system on the disk - or several (which you can do right now if you so choose by rebooting to the live disk and performing the installation) - you'll want to keep its kernel and initramfs in the same structure. This is very easily arranged.
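
For instance, a test boot in qemu might look something like the sketch below; the OVMF firmware path varies by distribution, so treat it as an assumption:

      # OVMF wants a writable pflash image, so work on a copy
      cp /usr/share/ovmf/x64/OVMF.fd /tmp/ovmf.fd
      qemu-system-x86_64 -m 2G -enable-kvm \
              -pflash /tmp/ovmf.fd \
              -drive format=raw,file=./img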

  • If, for example, you were to install a system on a root partition named, for lack of an imagination, root, you'd want to set it up something like this:

    • mount --bind its particular boot folder over the root /boot path in /etc/fstab.
    • You'll need two lines in /etc/fstab and to create a mount point in /esp to handle this.

      sudo sh -c          <<\FSTAB     '
           [ -d /esp ]    || mkdir /esp
           findmnt   /esp || mount -L VESP /esp
           mkdir -p  /esp/EFI/root
           # placeholder names - substitute your real kernel and initramfs
           cp        /boot/kernel-binary   \
                     /boot/initramfs.img   \
                     /esp/EFI/root
           mount -B  /esp/EFI/root /boot
           cat   >>  /etc/fstab            # appends the two lines below
           echo "$1">/boot/refind_linux.conf
      ' -- '"new_menu_item" "root=LABEL=root"'
      LABEL=VESP      /esp    vfat    defaults        0 2
      /esp/EFI/root   /boot   none    bind,defaults   0 0
      FSTAB
      

You only ever have to do anything like that once per installation - and that is assuming you didn't set it up that way in the first place - which is easier because the kernel and initramfs will already be where they belong. Once you've got those lines in /etc/fstab and a minimal config file in /boot/refind_linux.conf you're set for good. You can support as many installations as you like on the same system with the same /esp device and centralize all bootable binaries in the same tree just like that. Different systems will do things a little differently - Windows takes a little more cajoling to get it to conform, for example - but they will all work.

  • Ok, the last thing you need to know, as I said before, is how to choose the next booting installation from the filesystem. This is configured in the file /esp/EFI/BOOT/refind.conf.

    • You should read this file - it's probably 99% comment and will tell you all about what you might do with it.
    • Of course, you don't really have to do anything - by default rEFInd will boot the most recently updated kernel in its scan tree.
    • But I usually wind up setting the following options:

      <<\DEF sudo tee \
             /esp/EFI/BOOT/refind.conf.def
      ### refind.conf.def
      ### when renamed to refind.conf this file
      ### will cause refind to select by default
      ### the menu item called "new_menu_item"
      ### in its /boot/refind_linux.conf
      default_selection new_menu_item
      ### this file will also set the menu timeout
      ### to only 5 seconds at every boot
      timeout 5
      ### END
      DEF
      
    • And the rescue file...

      <<\RES sudo tee \
             /esp/EFI/BOOT/refind.conf.res
      ### refind.conf.res
      ### this one will default to selecting
      ### the entry named "arch_iso" with a
      ### 10 second timeout
      default_selection arch_iso
      timeout 10
      ### END
      RES
      
      • And so now you can just move them around.
      • For example, to make the rescue environment definitely boot after you do reboot now...

      sudo cp /esp/EFI/BOOT/refind.conf.res \
              /esp/EFI/BOOT/refind.conf
      
      • And substitute .def for the .res used above, of course, to go back to default root.
mikeserv

Let me restate your question for clarity:

You want to install a Linux distribution but you want to avoid needing to physically access the server. You cannot use alternatives such as the following:

  • Dell's iDRAC or an equivalent from another vendor. These solutions provide out-of-band server management that works even when the server is not running any operating system, and one of the features they provide is that you can attach virtual installation media such as a virtual USB stick.
    • But in order to use this you must own a server that has such a feature.
  • Network booting, using PXE. PXE is a feature most servers have these days that allows the server to boot over the network using DHCP and TFTP (see the sketch after this list).
    • But you must be able to provision a DHCP server and a TFTP server in order to use PXE and this may not be possible in your environment (for example, there is no nearby server to provide those services).
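
If a nearby box could host those services, a single dnsmasq instance can provide both DHCP and TFTP. A minimal sketch, with hypothetical interface, address range, and paths:

    # /etc/dnsmasq.conf - bare-bones DHCP + TFTP setup for PXE booting
    interface=eth0
    dhcp-range=192.168.1.100,192.168.1.200,12h
    dhcp-boot=pxelinux.0        # the boot file clients fetch over TFTP
    enable-tftp
    tftp-root=/srv/tftp         # must contain pxelinux.0 and its config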

The idea you present in your question is to mount an installation image over the network using the server's existing operating system, and somehow install from that.

At first glance, this of course cannot possibly work. In order to install you have to reboot, especially if you want to install over top of the existing operating system: that would overwrite the current operating system, and if you didn't reboot you would be trying to overwrite it while it is still in use. Yet if you reboot, the existing operating system shuts down, taking the network-mounted installation image with it.

I can think of very few ways around this. Here are two candidates. Both are very advanced procedures, and I don't recommend that you try them unless you understand what they are doing and how they work. Also, practice first using a locally-accessible server.

  • Use kexec to boot another operating system directly from the existing operating system. kexec is the only facility I know about that lets you provide a boot image for a new operating system that replaces the current one. kexec can only be used (effectively) if the current operating system is Linux and the target operating system is also Linux.

    kexec requires that you give it a kernel and an initrd to load. You can't give it anything else such as a root filesystem image. Luckily, most Linux installers use a self-contained kernel & initrd pair, so you can use that. For example you can get the necessary kernel and initrd for the Debian installer from the netboot download page. Extract them from netboot.tar.gz. (Yes, you should be able to use the same kernel & initrd as you would for netbooting.)

    I have not tested this idea and I would consider it an expert procedure, so you may have some work to do to make it happen which is beyond the scope of this answer.
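
    A sketch of how that might go, assuming the current Debian stable netboot layout (the URL and paths inside the tarball may change over time):

      # fetch the installer kernel and initrd, then kexec into them
      wget http://ftp.debian.org/debian/dists/stable/main/installer-amd64/current/images/netboot/netboot.tar.gz
      tar xzf netboot.tar.gz
      sudo kexec -l debian-installer/amd64/linux \
           --initrd=debian-installer/amd64/initrd.gz \
           --command-line="console=tty0"
      sudo kexec -e    # jumps into the installer at once, with no clean shutdown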

  • Install the new operating system from inside the current operating system, alongside it. Then switch to the new one. You can install to a separate partition or, if using LVM (recommended in this case), to a different LV. This of course requires that you have enough space to temporarily store both operating systems.

    You will need to use a tool such as debootstrap to install the new operating system instead of the regular operating system installer. Using debootstrap is much less user-friendly than using the regular installer because you must do several steps by yourself which are normally taken care of by the installer (for example, install the kernel and bootloader, edit base system configuration files). That makes this an expert procedure as well.
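
    For illustration, a debootstrap run into a spare partition might look like the following; the device, mount point, and mirror are assumptions:

      # make a filesystem on the spare partition and bootstrap into it
      sudo mkfs.ext4 /dev/sda3
      sudo mount /dev/sda3 /mnt/newos
      sudo debootstrap stable /mnt/newos http://deb.debian.org/debian
      # the steps an installer would normally handle start here: chroot in
      # and set up a kernel, bootloader, fstab, networking, and users
      sudo chroot /mnt/newos /bin/bash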

In all cases, you'll want to have remote console access to the server. That can be achieved for example with IPMI.
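
For example, if the server's BMC speaks IPMI, a hypothetical invocation like this one gives you a serial-over-LAN console (host and credentials are placeholders):

    ipmitool -I lanplus -H bmc.example.com -U admin -P secret sol activate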

Celada