11

Is there a way to mount multiple hard drives to a single mount point? Let's say I run out of space on /home and decide to add an extra hard drive to the computer. How do I scale the space on a mount point? If I use RAID, can I add drives on the fly to increase space as I run out of them? Is there an alternative to using RAID if I am not interested in maintaining a high level of redundancy?

Michael Mrozek
Lord Loh.

5 Answers

9

You may be interested in UnionFS. It may be simpler to set up on an existing system than LVM.

From the UnionFS page, http://www.filesystems.org/project-unionfs.html :

This project builds a stackable unification file system, which can appear to merge the contents of several directories (branches), while keeping their physical content separate. Unionfs is useful for unified source tree management, merged contents of split CD-ROM, merged separate software package directories, data grids, and more.
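As a rough illustration, the userspace unionfs-fuse tool can merge two data directories under one mount point. This is a hedged sketch: the directory names are hypothetical, and the exact binary name and options may vary between distributions.

    # merge two disks' directories into a single view at /mnt/union
    # (writes go to the first =RW branch; "cow" enables copy-on-write for read-only branches)
    unionfs-fuse -o cow /mnt/disk1=RW:/mnt/disk2=RO /mnt/union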

I hope you find this helpful.

Mat
dubkat
8

You can use LVM for this. It was designed to separate physical drives from logical volumes.

With LVM, you can:

  1. Add a fresh new physical drive to a pool (called a Volume Group in LVM terminology)

    pvcreate /dev/sdb
    vgextend my_vg /dev/sdb

  2. Extend space of a logical volume

    lvextend ...

  3. And finish with an online resize of your filesystem (resize2fs takes the logical volume device rather than a mount path)

    resize2fs /dev/my_vg/my_lv

But beware, it's not a magic bullet: it's far harder to shrink a filesystem, even with LVM.
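Put together, here is a hedged end-to-end sketch, assuming the new disk shows up as /dev/sdb and using hypothetical names my_vg and my_lv for the existing volume group and the logical volume mounted on /home:

    pvcreate /dev/sdb                        # initialize the new disk for LVM
    vgextend my_vg /dev/sdb                  # add it to the existing volume group
    lvextend -l +100%FREE /dev/my_vg/my_lv   # grow the logical volume into the new space
    resize2fs /dev/my_vg/my_lv               # grow the ext4 filesystem while it stays mounted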

Coren
3

The LVM answer is a good one, but RAID can do this as well, so I'm adding another answer.
Linux software RAID (mdadm) does allow adding disks to an array that has already been created. When you do, it will re-balance the data onto the new drive.
If you're not interested in redundancy, you can use RAID 0, which simply stripes data evenly across all disks.
However, RAID 5 offers at least some redundancy without losing much storage (you sacrifice one disk's worth).

But with that said, RAID works best when all the drives are the same size. If they aren't the same size, portions of the larger drives will go unused, as the array only uses as much of each drive as the smallest one. If I recall correctly, LVM striping does not have this issue: if the drives aren't the same size, the extra space just won't be striped.
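As a hedged sketch with mdadm, assuming an existing three-disk array /dev/md0 and a new partition /dev/sdd1 (hypothetical device names):

    mdadm --add /dev/md0 /dev/sdd1            # add the new disk as a spare
    mdadm --grow /dev/md0 --raid-devices=4    # reshape the array to use it
    # once the reshape finishes, grow the filesystem on top
    resize2fs /dev/md0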

phemmer
1

You can use mhddfs instead of LVM; that way, if one disk fails, only the data on that disk is lost. It runs in user space using FUSE modules, but I'm using it for a very heavily loaded infrastructure.

[root@storagenode1 ~]# df -hl
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdl2             259G  2.1G  244G   1% /
tmpfs                  48G     0   48G   0% /dev/shm
/dev/sdl1             485M   30M  430M   7% /boot
/dev/sda1              24T   23T  1.4T  95% /mnt/disk01
/dev/sdg1              24T   22T  2.6T  90% /mnt/disk02
/dev/sdf1              24T   22T  2.6T  90% /mnt/disk03
/dev/sdb1              24T   20T  4.5T  82% /mnt/disk04
/dev/sde1              39T   30T  8.3T  79% /mnt/disk07
/dev/sdh1              28T  6.6T   21T  24% /mnt/disk08
/dev/sdj1              39T   32T  6.5T  84% /mnt/disk09
/dev/sdi1              20T  792G   19T   5% /mnt/disk10
/mnt/disk01;/mnt/disk02;/mnt/disk03;/mnt/disk04;/mnt/disk07;/mnt/disk08;/mnt/disk09;/mnt/disk10
                      218T  153T   65T  71% /mnt/disk99

Packages are available from the main developer's website; on CentOS 6, which is what I'm running, the setup is:

  1. yum install mhddfs
  2. create mount points for the local disks as normal and mount them
  3. create another directory to hold all your disks (in my case called disk99)
  4. mount all the disks into disk99 using mhddfs
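Before making it permanent, you can test the union mount manually. A hedged sketch using the mount points from the df output above (option names may vary between mhddfs versions):

    # merge several data disks into one big filesystem at /mnt/disk99
    mhddfs /mnt/disk01,/mnt/disk02,/mnt/disk03 /mnt/disk99 -o allow_other,mlimit=10%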

Edit /etc/fstab (vim /etc/fstab) and add this line:

mhddfs#/mnt/disk01,/mnt/disk02,/mnt/disk03,/mnt/disk04,/mnt/disk07,/mnt/disk08,/mnt/disk09,/mnt/disk10          /mnt/disk99     fuse    defaults,allow_other,mlimit=10%,nonempty,logfile=/dev/null,loglevel=2   0       0

Warning: in my case, under heavy load, mhddfs produced huge log files and crashed the server many times, so I log to /dev/null. I couldn't get logrotate to work with mhddfs because you need to remount whenever you change the log file.

0

Is there a way to mount multiple hard drives to a single mount point?

Probably not in the way that you want. You can do all kinds of funny things, but that alone doesn't solve the problem you're trying to address.

Let's say I run out of space on /home and decide to add an extra hard drive to the computer. How do I scale the space on a mount point?

One approach would be the following steps:

  1. add one or more new drives to your computer

  2. create an LVM volume group and logical volume using the new drive(s)

  3. format the new logical volume (probably as ext4 or XFS)

  4. mount it at a temporary location (e.g. /mnt/tmp)

  5. copy the contents of /home to the new logical volume (/mnt/tmp) with rsync or cp

  6. do a "swaperoo" (unmount the old drive at /home, unmount the new volume at /mnt/tmp, and mount the new volume at /home)

  7. optionally extend the new volume group using the old drive

  8. update fstab so that the mount is persistent across reboots

The above steps should probably be done after booting into an OS from a live USB/CD such as GParted Live, so that /home is not in use while it is being copied.
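A hedged sketch of that migration, assuming the new drive shows up as /dev/sdb and using hypothetical names my_vg/my_home for the volume group and logical volume:

    pvcreate /dev/sdb                        # prepare the new disk for LVM
    vgcreate my_vg /dev/sdb                  # create a volume group on it
    lvcreate -l 100%FREE -n my_home my_vg    # one logical volume using all the space
    mkfs.ext4 /dev/my_vg/my_home             # format it
    mkdir -p /mnt/tmp
    mount /dev/my_vg/my_home /mnt/tmp        # temporary mount point
    rsync -aHAX /home/ /mnt/tmp/             # copy data, preserving hard links/ACLs/xattrs
    umount /home
    umount /mnt/tmp
    mount /dev/my_vg/my_home /home           # the new volume takes over /home
    # finally, add an entry for /dev/my_vg/my_home to /etc/fstab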

If I use RAID, can I add drives on the fly to increase space as I run out of them?

Yes, you can. Exactly how is hardware-dependent (assuming you would be using hardware RAID).

Is there an alternative to using RAID if I am not interested in maintaining a high level of redundancy?

"High levels of redundancy" is subjective, and not all RAID provides redundancy. Here's the high level breakdown of RAID options:

  • RAID 1 (mirroring) is fully redundant; you probably don't want this

  • RAID 6 (2 parity blocks) can handle 2 disk failures without data loss; minimum 4 drives; you probably don't want this

  • RAID 5 (1 parity block) is tolerant to a single drive failure; minimum 3 disks; might fit your needs

  • RAID 0 (striping) has no redundancy and you will lose all data if a single drive fails (though it provides great performance); might fit your needs

Alternatively, you could use ZFS, which is what I would do. From Wikipedia: "ZFS is scalable, and includes extensive protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones". In my experience, ZFS is very flexible and can be configured (and tuned) to do almost anything you want. It handles RAID, volume management, copy-on-write, fast compression, etc. It's a bit more involved to administer than ext4 or XFS, but if you have to mess with RAID/LVM anyway, it's really not that different.
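A hedged sketch of the ZFS route, assuming spare disks /dev/sdb and /dev/sdc and a hypothetical pool name tank:

    zpool create tank /dev/sdb        # single-disk pool, mounted at /tank by default
    zpool add tank /dev/sdc           # later: grow the pool with another disk (no redundancy)
    zfs set mountpoint=/home tank     # serve the pool's filesystem as /home
    zfs set compression=lz4 tank      # optional: cheap transparent compression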