
I am trying to write a script that creates an image of an entire partition, restores the image to another partition, and boots from the new partition.

I am having problems with the last part, i.e. making the changes required to boot from the new partition.

For this I install Ubuntu/Debian using automatic partitioning, configuring the HDD like this:

  • /dev/sda
    • /dev/sda1 - /boot/efi
    • /dev/sda2 - / (Ubuntu/debian)
    • /dev/sda3 - SWAP
    • /dev/sda4 - Not mounted - Target partition to copy/restore the image of sda2

So what I want to do is create an image of /dev/sda2, restore it to /dev/sda4, and then boot from /dev/sda4.

The reason for this is to be able to supply complete images of a Unix installation and "update" some IoT devices that have no internet connection. Every time we supply a new image, the image gets restored to one of the partitions and that partition becomes the boot partition; in other words, the boot partition switches with every new image. If something goes wrong while applying/installing the new image, the boot partition should not change, and the device should instead boot from the "old working" partition.

So far I have succeeded in creating the image using dump and restoring it to the target partition.

I am having problems with the changes needed to tell GRUB to boot from the other partition, where the dump was restored.

I have tried various things, like grub-install, update-grub, and chrooting into the restored installation and running those commands there, but I never got it working.

Could someone explain what needs to be done to achieve this?

R. Gomez
  • I'd honestly just not use partitions, but use LVM logical volumes. Ubuntu (and all other large distros) can work just fine with a / on LVM, and all you need to do is rename the logical volumes – no fiddling with updating the boot loader involved. Also, the use case you describe is a pretty classical one! That's good, because other people have already solved the "make updates roll-back'able if they fail" on the OS image level. See for example rpm-ostree, as used in Fedora IoT, where you roll out system image updates as whole. – Marcus Müller Dec 18 '22 at 13:12
  • (here, "OS image" is not the same as "disk partition image", but means "specific state of a system with all the software installed"; it's in the end functionally the same as updating a disk image, but much less data volume. Since modern file systems are pretty stable, copying a thousand files (and keeping a hardlink to these 10000 files that did not change) is less likely to introduce random errors (as caused by main memory bit flips and corrupt storage media) than dealing with full partition images. It's also faster. Generally, I'd recommend you avoid updating the operating system core; – Marcus Müller Dec 18 '22 at 13:20
  • you probably never actually have to, as your updates probably mostly contain bugfixes and enhancement to the application side of things. So, containerizing your application logic and exchanging these containers seems to me the more promising approach than to image full disk partitions.) – Marcus Müller Dec 18 '22 at 13:22
  • (I think Ubuntu has something similar as Fedora has with Fedora IoT, but it's as far as I can tell based on snapd, and every time I have to work with that, it's a headache, and I really would not want to do it automatedly in the field. rpm-ostree is kinda low-maintenance, not-everything-needs-to-be-or-should-be-done-in-a-container approach.) – Marcus Müller Dec 18 '22 at 13:31

1 Answer


Modern Linux distributions tend to configure GRUB to identify the filesystem to boot from using filesystem UUIDs (or equivalents). When you clone sda2 to sda4, you'll now have two filesystems with the same UUID. It is likely that GRUB will either boot the first one it finds with a matching UUID, or just stop if it detects multiple matching UUIDs.

So, the first thing you'll need to do after cloning the filesystem is to assign a new UUID to the new clone on sda4, to make the UUIDs unique again. Here's a question that includes answers specifying how to change the UUIDs of many filesystem types. My guess would be that you have missed this step.
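Assuming the clone is ext4 (the default for an automatic Ubuntu/Debian install), a minimal sketch of this step could look like the following. The helper function name is hypothetical; it accepts either a block device or a plain image file, since tune2fs works on both, which makes it easy to try out safely:

```shell
# Sketch (assumes e2fsprogs; root is required for real block devices).
renew_fs_uuid() {
  dev=$1
  e2fsck -fp "$dev" >/dev/null     # recent tune2fs insists on a clean fs
  tune2fs -U random "$dev" >/dev/null
  blkid -s UUID -o value "$dev"    # print the new UUID for the later steps
}

# On the real system:
#   NEW_UUID=$(renew_fs_uuid /dev/sda4)
```

Keeping the new UUID in a variable is convenient, because the same value is needed again in the two configuration edits below.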

The second step would be to update the mini-grub.cfg on sda1 located at /boot/efi/EFI/<name of distribution>/grub.cfg. It contains the search.fs_uuid command that will look for the filesystem that contains /boot/grub and the main GRUB configuration file. Once you have updated the UUID there, GRUB will be looking for its configuration in the new clone instead of the original filesystem.
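A sketch of that edit, again as a hypothetical helper so the rewriting logic is separate from the device-specific values. On a default Ubuntu install the mini-grub.cfg is a three-line file whose first line has the form `search.fs_uuid <uuid> root <device hint>`; this sketch assumes that layout:

```shell
# Hypothetical helper: point the mini-grub.cfg on the ESP at the clone.
update_efi_grub_cfg() {
  cfg=$1; uuid=$2; hint=$3
  sed -i "s|^search\.fs_uuid .*|search.fs_uuid ${uuid} root ${hint}|" "$cfg"
}

# On the real system (root required; Debian uses EFI/debian/ instead):
#   update_efi_grub_cfg /boot/efi/EFI/ubuntu/grub.cfg \
#       "$(blkid -s UUID -o value /dev/sda4)" hd0,gpt4
```

Note that the device hint (`hd0,gpt4` here) should be updated along with the UUID, so that GRUB's first guess already lands on the clone's partition.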

And finally, you'll need to update both the /boot/grub/grub.cfg and /etc/fstab of the new clone on sda4 to actually use the new UUID of sda4.
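Since both files reference the root filesystem by UUID in several places, a plain global substitution is usually enough. A sketch, with the mount point and variable names as assumptions:

```shell
# Hypothetical helper: swap the original root UUID for the clone's new
# UUID everywhere in a config file (fstab or grub.cfg).
replace_uuid() {
  file=$1; old=$2; new=$3
  sed -i "s/${old}/${new}/g" "$file"
}

# On the real system, with the clone mounted at /mnt/clone:
#   OLD=$(blkid -s UUID -o value /dev/sda2)
#   NEW=$(blkid -s UUID -o value /dev/sda4)
#   replace_uuid /mnt/clone/etc/fstab          "$OLD" "$NEW"
#   replace_uuid /mnt/clone/boot/grub/grub.cfg "$OLD" "$NEW"
```

Doing the substitution inside the clone (rather than regenerating grub.cfg with update-grub in a chroot) keeps the clone byte-for-byte predictable, which suits the question's image-based update scheme.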

(And even if you chose to use partition/device names instead of UUIDs, you would have to make changes to all those same places.)

telcoM
    nice answer; I'll stress that it's still desirable to just put / on a LVM volume, and /boot on a separate partition; and rename the root-carrying volume rather than having to update partition name or UUID entries. The /boot and /boot/EFI would stay immutable under OP's plan, anyways. – Marcus Müller Dec 18 '22 at 14:20
  • @MarcusMüller Since the OP mentioned the target is an IoT device, they may have reasons to restrict the total number of partitions and might be restricted to e.g. minimal busybox utilities only. But I agree with you: if the appropriate tools and resources are available, your idea would be good. – telcoM Dec 18 '22 at 16:48
  • minimalism and using Ubuntu as base system really do not go well together ;) Yeah, I really think that your answer is valuable, but hopefully there's a less complicated way. – Marcus Müller Dec 18 '22 at 17:40