I have an mdadm-managed RAID1 with a single EXT4 partition on it (holding both the OS files and GRUB's files in /boot/grub), and I'd like to know which modules and which configuration GRUB needs to be able to boot from it.
1 Answer
The minimal required grub.cfg is the following:
set timeout=1
set root='mduuid/ce16c757e4752e4fa9a2fd4935df1aef'
menuentry 'Arch Linux' {
linux /boot/vmlinuz-linux root=UUID=05dddf23-1d9f-417e-b3f8-2281a328dc0b rw
initrd /boot/initramfs-linux.img
}
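In case you want to regenerate this config from a script (e.g. when rebuilding the test VM described further down), it's trivial to template from the two UUIDs. A minimal sketch; the two values below are the example UUIDs from this answer, substitute your own:

```shell
# Sketch: template the minimal grub.cfg from the two UUIDs.
# MD_UUID = RAID array UUID (from mdadm --examine, colons stripped)
# FS_UUID = EXT4 filesystem UUID (from lsblk)
# Both values are the example UUIDs from this answer.
MD_UUID=ce16c757e4752e4fa9a2fd4935df1aef
FS_UUID=05dddf23-1d9f-417e-b3f8-2281a328dc0b

cat > grub.cfg <<EOF
set timeout=1
set root='mduuid/${MD_UUID}'
menuentry 'Arch Linux' {
    linux /boot/vmlinuz-linux root=UUID=${FS_UUID} rw
    initrd /boot/initramfs-linux.img
}
EOF
```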
Yes, quite a bit different from the long stream of garbage that grub-mkconfig generates, and as you can see, no modules seem to be necessary for this setup.
My particular setup is a single partition on an MBR-formatted disk, a RAID1 array assembled from such partitions, and an EXT4 filesystem on that array.
The root variable is set to mduuid/xxx with the UUID of the RAID array, which you can get by running mdadm --examine /dev/sdX on a disk or partition that is part of the array. This is not the UUID of the EXT4 filesystem on top of the RAID, and do not use the UUID reported by lsblk either, as that will only give you the UUID of the partition, which won't work here.
You can also set the root variable to md/... with the label of your RAID array, specified when creating it (mdadm ... -N "some label" ...). For other ways of specifying the root device, check out the GRUB documentation.
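Note that mdadm --examine prints the array UUID colon-separated (e.g. ce16c757:e4752e4f:a9a2fd49:35df1aef, the example value from this answer), while GRUB's mduuid/ form wants the same hex digits as one unbroken string. A quick sketch of the conversion:

```shell
# The array UUID as mdadm --examine prints it (example value from this answer):
ARRAY_UUID='ce16c757:e4752e4f:a9a2fd49:35df1aef'

# GRUB's mduuid/ syntax wants the same hex digits without the colons:
printf 'mduuid/%s\n' "$(printf '%s' "$ARRAY_UUID" | tr -d ':')"
# prints mduuid/ce16c757e4752e4fa9a2fd4935df1aef
```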
The UUID on the kernel parameter line is the UUID of the filesystem on top of the RAID array; this one can be obtained by running lsblk -o NAME,UUID:
NAME          UUID
loop0
└─loop0p2     ce16c757-e475-2e4f-a9a2-fd4935df1aef
  └─md127     05dddf23-1d9f-417e-b3f8-2281a328dc0b
loop1
└─loop1p2     ce16c757-e475-2e4f-a9a2-fd4935df1aef
  └─md127     05dddf23-1d9f-417e-b3f8-2281a328dc0b
It's the one corresponding to the mdXXX device node, 05dddf23-1d9f-417e-b3f8-2281a328dc0b in my case. Do not use the UUID of the underlying partition (loopXp2 in this example).
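If you want to grab the right UUID in a script, one way is to keep only the lines whose NAME column contains an md device node. A sketch, fed with the sample output from above (in a real script you'd pipe lsblk -o NAME,UUID instead of the here-string):

```shell
# Sample `lsblk -o NAME,UUID` output from this answer; in a real script,
# replace the variable with the actual lsblk invocation.
lsblk_sample='NAME          UUID
loop0
└─loop0p2     ce16c757-e475-2e4f-a9a2-fd4935df1aef
  └─md127     05dddf23-1d9f-417e-b3f8-2281a328dc0b'

# Keep only lines naming an mdXXX node and print their UUID column.
printf '%s\n' "$lsblk_sample" | awk '/md[0-9]+/ { print $2 }' | sort -u
# prints 05dddf23-1d9f-417e-b3f8-2281a328dc0b
```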
As a bonus, here are some horrible shell scripts that kinda work if you want to experiment with GRUB in a QEMU/KVM virtual machine.
To create the disks (they have to be raw images to be mountable on the host; you can't use qcow2 or vmdk):
qemu-img create -f raw driveX.img XXXG # name and size
To start a VM with two disks, an ISO (in this case it's Arch Linux, but it can be anything) and basic network access:
/path/to/qemu-system-x86_64 -m 512 -cpu host -smp 2,cores=2,sockets=1 \
    -machine q35,accel=kvm -balloon none \
    -device ahci,id=ahci \
    -drive if=none,file=drive1.img,format=raw,cache=none,aio=native,id=hdd1 \
    -device ide-hd,bus=ahci.0,drive=hdd1 \
    -drive if=none,file=drive2.img,format=raw,cache=none,aio=native,id=hdd2 \
    -device ide-hd,bus=ahci.1,drive=hdd2 \
    -drive if=none,file=arch.iso,format=raw,cache=none,aio=native,id=iso,snapshot=on \
    -device ide-cd,bus=ahci.2,drive=iso \
    -device pci-ohci,id=ohci -device usb-kbd,bus=ohci.0 -device usb-mouse,bus=ohci.0 \
    -netdev user,id=net0 -device e1000-82545em,autonegotiation=on,netdev=net0 \
    -realtime mlock=on
Script to mount the VM's partitions on the host; make sure QEMU is stopped before running it, to avoid corrupting the partitions:
losetup -P -f drive1.img # create loopback device nodes for the virtual disks
losetup -P -f drive2.img
mdadm --assemble /dev/md/root /dev/loop0p1 /dev/loop1p1 # assemble the RAID
# some distributions will auto-detect and assemble them so it sometimes fails
# running this script a second time usually succeeds in mounting it anyway
# if it failed the first time - I don't have time to fix the real issue
mount "/dev/md/root" rootfs # mount the root FS of the VM in this directory
Script to unmount; run this (multiple times, just to be safe) before starting the VM back up:
umount rootfs # umount the FS
mdadm --stop /dev/md127 # sometimes the array appears under these names
mdadm --stop /dev/md126 # so we stop them as well just to be safe
mdadm --stop /dev/md/root # the correct name
losetup -D # detach all loop device nodes