31

I saw in this question that it is possible to place both an SSD and a standard SATA hard drive into a single LVM volume group (VG).

How does this affect performance?

Is there a way to force the OS to be on the SSD while the data is on the SATA drive within a single volume group?

Are there any good whitepapers on using LVM with different types of drives?

Would it be beneficial to create a VG for each drive type and/or speed? I was thinking of creating one VG for SSDs and one for SATA (and for each drive type I may add in the future as it comes).

Nick
  • related - http://unix.stackexchange.com/questions/7122/does-lvm-impact-performance – Graeme Mar 18 '14 at 17:37
  • My intuition is that it would be a really bad idea to put both an SSD and a conventional hard disk in the same volume group. – samiam Mar 18 '14 at 17:41
  • @samiam that was my initial thought. I wasn't sure if there were ways to tell the LVM to always place data going to and from such-and-such directory to sda and always place data going to another directory on sdb. – Nick Mar 18 '14 at 17:50
  • @Graeme that talks a lot about performance, but I didn't see anything related to spanning different disk types, which is what I'm mainly concerned with. If I missed something, please point it out. – Nick Mar 18 '14 at 17:50
  • Nick: I can't answer about LVM off the top of my head, but, yes, it's possible to set up /etc/fstab so that / is on an SSD but anything below /home is on a conventional hard disk. This is usually an option while installing any modern Linux system (/home would be a "mount point" when choosing some form of "advanced options") – samiam Mar 18 '14 at 17:52
  • @samiam Yep, using a traditional partition setup was my initial plan. The benefits of LVM really stood out to me for my home lab (easily expanded storage, snapshots, etc), so I wanted to know more about it before I made a decision about using LVM or a traditional partition setup. – Nick Mar 18 '14 at 17:56
  • I think it would be best for you to use LVM. The more technology you use, the more learning opportunities and resume bullet points you have. – samiam Mar 18 '14 at 18:01
  • @Nick I just thought the link would be useful to you and/or anyone else looking at this Q. Likely the LVM performance will be highly variable, as where things are placed will be mostly filesystem-dependent. If you want to homogenise the performance, RAID 1 is probably a better choice. – Graeme Mar 18 '14 at 19:03
  • @derobert Sorry for the hairsplitting and for the late reaction. The kernel subsystem is not LVM; the kernel subsystem is the device mapper, which is essentially a framework for chaining mapped block devices. LVM only uses the dm. – peterh Oct 16 '18 at 04:13
  • @peterh true. I'll delete my not fully correct and also obsolete comment. – derobert Oct 16 '18 at 05:19

2 Answers

14

What you can do in recent-ish LVM versions is create one “origin” LV on the HDD and one “cache pool” LV on the SSD, and then combine them into a single “cache” LV. It has the same size as the “origin” LV (i.e., you only get as much space as is on the HDD), but frequently used blocks and metadata are cached on the SSD to improve performance.

The gist of it is, assuming you already have a VG spanning both drives:

# create the origin LV on the HDD, using all of its space
lvcreate -l 100%PVS -n your_name YourVG /dev/YourHDD
# create the cache pool LV on the SSD
lvcreate --type cache-pool -l 100%PVS -n your_name_cache YourVG /dev/YourSSD
# attach the cache pool to the origin LV, producing a single cached LV
lvconvert --type cache --cachepool YourVG/your_name_cache YourVG/your_name

After that, you will have a your_name LV that you can use like any other LV, and several internal LVs that you can see with lvs -a YourVG.
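
If you later want to take the SSD out again, the cache can be detached from the origin LV; a minimal sketch, assuming the names from the commands above and a reasonably recent lvm2 that supports --uncache (this flushes cached blocks back to the HDD and then removes the cache pool):

lvconvert --uncache YourVG/your_name

Afterwards your_name is an ordinary LV living entirely on the HDD again.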

For example, I set up an encrypted root filesystem across an SSD partition (/dev/sda3) and an HDD partition (/dev/sdb1) with the following commands:

pvcreate /dev/sda3 /dev/sdb1
vgcreate RootVG /dev/sda3 /dev/sdb1
lvcreate -l 100%PVS -n cryptroot RootVG /dev/sdb1
lvcreate --type cache-pool -l 100%PVS -n cryptroot_cache RootVG /dev/sda3
lvconvert --type cache --cachepool RootVG/cryptroot_cache RootVG/cryptroot
cryptsetup luksFormat --type luks2 /dev/RootVG/cryptroot
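
To check that the cache is actually attached, the internal LVs and the devices backing them can be listed; a minimal sketch using standard lvs report fields and the names from my example (the exact field selection is just one reasonable choice):

lvs -a -o lv_name,lv_attr,origin,pool_lv,devices RootVG

cryptroot should be reported with cryptroot_cache as its pool, and the hidden internal LVs (the cached origin and the cache pool's data and metadata volumes) should show up on the expected partitions.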

You can find more details in this blog post or this one. (The first is what I used as a reference and is also cited in the LVM Wikipedia article; the second is by me, describing how I used it in practice. Decide for yourself which one you want to trust.)

10

LVM does not distinguish between a fast and a slow disk, so it does not seem to be a good idea to put those disks into one LVM volume group.

Besides this, it is generally good to mount your /tmp directory on an SSD, which provides a big speedup, especially for applications that use it heavily, such as compiling.
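
For example (a sketch only; the LV name /dev/YourVG/tmp, the filesystem, and the mount options are placeholders, not something from the question), a dedicated SSD-backed LV could be mounted at /tmp with an /etc/fstab entry along these lines:

/dev/YourVG/tmp  /tmp  ext4  defaults,noatime  0  2

As the comments below note, putting /tmp on tmpfs is a common alternative that avoids writing temporary files to the SSD at all.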

  • 6
    Put /tmp on tmpfs. More performance, less wear on the SSD (or on the hard disk for that matter). SSD's very fast reads make it mostly useful for data that is read more often than it's written. – Gilles 'SO- stop being evil' Mar 18 '14 at 23:35
  • this was discovered as a vulnerability and is no longer provided by many distributions. –  Mar 19 '14 at 00:34
  • 1
    What was discovered as a vulnerability? – Gilles 'SO- stop being evil' Mar 19 '14 at 00:43
  • 1
    http://rwmj.wordpress.com/2012/09/12/tmpfs-considered-harmful/ –  Mar 19 '14 at 00:45
  • 7
    Meh. I generally want files in /tmp to be cleaned on reboot — if they're meant to stay, that's what /var/tmp is for. I've used tmpfs for /tmp for years on many machines and have never come close to running out of swap space, and I don't have atypically small amounts of data in /tmp, so that argument is bogus. In any case, it isn't a vulnerability — that word implies a security problem. – Gilles 'SO- stop being evil' Mar 19 '14 at 01:09
  • 1
  • it seems you don't have any bad users to serve. If you don't want to call it a vulnerability, then call it harmful; in any case it's not recommended unless you know what you are doing. –  Mar 19 '14 at 14:29
  • 1
    Even with malicious users, whether /tmp is on tmpfs or on disk is irrelevant. If they fill up the space, it hurts the same either way. – Gilles 'SO- stop being evil' Mar 19 '14 at 15:12
  • agree, except man mkfs.ext3 -m reserved-blocks-percentage –  Mar 19 '14 at 15:14
  • http://meyerweb.com/eric/comment/chech.html – Jonathan Baldwin Sep 03 '14 at 03:13