9

Currently, most OS installers seem to insist on putting /boot on a non-RAID partition (or on the kind of RAID1 partition that "looks like" a non-RAID partition), even the installers that support RAID5 and GRUB2.

I'm guessing this limitation is a historical relic left over from Grub1. My understanding is that Grub1 doesn't know anything about RAID and so can't boot off any kind of RAID array -- except for RAID arrays that "look like" a non-RAID array.

Is this a limitation of Grub2, or of the OS installers?

I've heard rumors that Grub2 is "able to support /boot on RAID-0, RAID-1 or RAID-5, metadata 0.90, 1.0, 1.1 or 1.2".

Does Grub2 really support putting /boot on a software RAID1 partition with 1.2 metadata?

Does Grub2 really support putting /boot on a software RAID5 partition?

An ideal answer would link to a tutorial that explains how to move a /boot partition (on a non-RAID partition) to a RAID5 partition.

By "looks like" a non-RAID partition, I mean either

  • when Grub1 reads only one hard drive of a software RAID1 array with an ext3 or ext4 filesystem and ignores the 0.90 or 1.0 RAID metadata at the end of the partition, it looks just like a non-RAID ext2 filesystem that Grub1 can handle. Or
  • not a software or fake-RAID, but a full hardware RAID that presents itself to the OS as a normal non-RAID disk.

2 Answers

12

Yes, grub2 is fully raid (and LVM) aware. In fact, you do not need a separate /boot partition at all; you can just put everything on the raid5.

Ideally you would not install with a /boot partition at all, but removing one after the fact simply means copying all of its files to the root partition and reinstalling grub, like this:

umount /boot                    # unmount the separate boot partition
mount /dev/[bootpart] /mnt      # mount it again somewhere else
cp -ax /mnt/* /boot             # copy its contents into /boot on the root fs
grub-install /dev/sda           # reinstall grub so it finds the new /boot

Of course you then need to remove the /boot line from /etc/fstab, and you still have the old partition lying around, just unused.
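A minimal sketch of that fstab edit (the sample fstab contents here are made up, and the sed command is run against a sample file for safety -- on a real system you would back up and edit /etc/fstab itself). Commenting the line out rather than deleting it makes the change easy to revert:

```shell
# Create a sample fstab (illustrative entries only).
printf '%s\n' \
  '/dev/md0   /      ext4  errors=remount-ro  0 1' \
  '/dev/sda1  /boot  ext4  defaults           0 2' > fstab.sample

# Comment out any uncommented line whose mount point is /boot.
sed -i 's|^[^#].*[[:space:]]/boot[[:space:]]|#&|' fstab.sample
cat fstab.sample
```

After this, the /boot line starts with `#` and the root line is untouched.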

Note you can also grub-install to all of the drives in the raid5 so that you can boot from any of them. The Ubuntu grub-pc package will prompt you (run dpkg-reconfigure grub-pc to make it ask again) to check off all of the drives you want it installed on, and will install it for you.
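On non-Debian systems you can do the same thing by hand with a small loop. The drive names below are examples for a three-disk raid5; the leading `echo` makes this a dry run -- remove it (and run as root) to actually install:

```shell
# Install grub on every member of the array so the machine can still boot
# if one drive dies. /dev/sda.. /dev/sdc are assumed example names.
for d in /dev/sda /dev/sdb /dev/sdc; do
    echo grub-install "$d"    # drop the "echo" to really install
done
```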

psusi
  • 17,303
  • So I don't have to use metadata=0.9? – CMCDragonkai Jul 02 '14 at 08:52
  • @CMCDragonkai, no, nor should you. – psusi Jul 03 '14 at 01:22
  • what if one disk fails? The raid needs a working operating system to rebuild and grub needs a working raid to boot.. there is a deadlock then. Creating and mirroring a boot partition would be better, right? – El Hocko Feb 07 '16 at 00:25
  • 1
    @cIph3r, no: while grub can't rebuild the degraded array it can still boot from it just fine. – psusi Feb 07 '16 at 00:31
  • nice, and when installing grub the system asks to write grub to the mbr, what to do then, install on (say /dev/sda) and dd the mbr it to the other 3? – El Hocko Feb 07 '16 at 13:38
  • @cIph3r, no, you just choose to install grub to all three drives. – psusi Feb 07 '16 at 19:34
  • @psusi Manually? The installation procedure just asks for one! – EnzoR Sep 11 '16 at 05:46
  • @Enzo, yes manually. You just run grub-install once on each drive. Or if you are running a debian based distro, use dpkg-reconfigure grub-pc and you can select multiple drives for it to be installed to ( and automatically reinstalled to when upgrading ). – psusi Oct 07 '16 at 17:43
  • I have successfully done what you said. I have installed ubuntu server on a RAID 5 with 3 drives with one entry point "/" . Then, I removed one drive, I rebooted and it worked. Then I added a new drive to the RAID array, I did a grub-install /dev/sdx and it booted from that particular drive again. – Nicolas Guérinet Aug 29 '18 at 08:46
  • 1
    Historically, Grub could only read part of a RAID; I remember having to put the /boot partition on a RAID 1 (mirrored). Now Grub2 can read all MDRAIDs (the standard Linux software raid, which is super stable, very very good, and best of all hardware independent: it will boot on any system you hook the drives up to, no hardware controller necessary). – Markus Bawidamann Aug 24 '21 at 21:59
0

This is a frightful mess in linux. The default superblock version used by mdadm is 1.2. Once you go above 0.90 for booting you are in uncharted territory; certainly lilo shows no interest above 1.0. Your best bet is to create the raid arrays (tip: pass --metadata=0.90 to mdadm --create) before you run the installation procedure. Then you can install on the RAID array and use your favourite bootloader.
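A sketch of that mdadm invocation (device names are assumed examples; the leading `echo` makes it a harmless dry run, since the real command destroys the data on those partitions and needs root):

```shell
# Create a 3-disk RAID5 with old 0.90 metadata, as this answer recommends
# for older bootloaders. Remove the leading "echo" to actually create it.
echo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
     --metadata=0.90 /dev/sda1 /dev/sdb1 /dev/sdc1
```

Note that 0.90 metadata lives at the end of the device, which is exactly what lets old bootloaders read the filesystem as if it were not on RAID at all.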

Paul L
  • 9