
Background: My Windows Home Server bit the dust - some kind of OS hang. The drives and hardware were still good, so after making sure I had all my data backed up, I decided to investigate Ubuntu and LVM (along with btsync and Samba, but that's another story).

I had the following drives available to me: 1TB, 1.5TB, and 2TB.

I was trying to figure out how to partition them and raid them together. I ran around in several circles. What I ended up doing was:

  • 1 partition on each disk for the whole disk
  • pvcreate /dev/sd[b,c,d]
  • vgcreate all of the above together into a single vg "vg1"
  • lvcreate -m1 on that vg1.
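
In command form, the steps above look roughly like this (a sketch only; the partition names /dev/sdb1, /dev/sdc1, /dev/sdd1 are assumptions based on the lsblk output below, and the size is illustrative):

```shell
# Sketch -- adjust device names and the size to match your system.
sudo pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1      # turn each partition into a PV
sudo vgcreate vg1 /dev/sdb1 /dev/sdc1 /dev/sdd1  # pool all three PVs into one VG
sudo lvcreate -m1 -L 1.8T -n lv1 vg1             # mirrored (-m1) LV on that VG
```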

I played with the sizes. Sure enough, it would let me lvcreate and extend up to 1.8 TB, but no more.

Question #1: Is it really mirrored? I think it is; however, I didn't follow the usual guideline of "pair this drive with that drive because they are the same size" (which is what pretty much every LVM example I've found does).

Output of lsblk, excluding the O/S drive:

sdb                            8:16   0   1.4T  0 disk
└─sdb1                         8:17   0   1.4T  0 part
  └─vg1-lv1_mimage_0 (dm-1)  252:1    0   1.8T  0 lvm
    └─vg1-lv1 (dm-3)         252:3    0   1.8T  0 lvm  /srv/samba/share
sdc                            8:32   0   1.8T  0 disk
└─sdc1                         8:33   0   1.8T  0 part
  └─vg1-lv1_mimage_1 (dm-2)  252:2    0   1.8T  0 lvm
    └─vg1-lv1 (dm-3)         252:3    0   1.8T  0 lvm  /srv/samba/share
sdd                            8:48   0 931.5G  0 disk
└─sdd1                         8:49   0 931.5G  0 part
  ├─vg1-lv1_mlog (dm-0)      252:0    0     4M  0 lvm
  │ └─vg1-lv1 (dm-3)         252:3    0   1.8T  0 lvm  /srv/samba/share
  └─vg1-lv1_mimage_0 (dm-1)  252:1    0   1.8T  0 lvm
    └─vg1-lv1 (dm-3)         252:3    0   1.8T  0 lvm  /srv/samba/share

Question #2: When is the !#$!$% copy going to be done? It's been running for about 24 hours so far. Is it really this slow? Or is it falling behind because I'm also currently btsync'ing my data back onto the array? Or is there something wrong that I can go tweak and fix?
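
For what it's worth, a mirror's sync progress can be polled rather than checked by hand; Copy% reaches 100.00 when the initial sync is done (a sketch, using the vg1/lv1 names from above):

```shell
# Re-run lvs every 10 seconds and watch Copy% climb toward 100.00.
watch -n 10 'sudo lvs -a -o lv_name,copy_percent vg1'
```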

Output of lvs -a:

  LV             VG        Attr      LSize   Pool Origin Data%  Move Log      Copy%  
  root           ubuntu-vg -wi-ao--- 697.39g
  swap_1         ubuntu-vg -wi-ao---   1.00g
  lv1            vg1       mwi-aom--   1.80t                         lv1_mlog  43.83
  [lv1_mimage_0] vg1       Iwi-aom--   1.80t
  [lv1_mimage_1] vg1       Iwi-aom--   1.80t
  [lv1_mlog]     vg1       lwi-aom--   4.00m

Thanks for any feedback and direction in advance.

Other details, if it's at all relevant:

  • EX485, a former HP MediaSmart Server
  • Ubuntu Desktop 32-bit, 14.something as of 3 days ago (<2 GB of RAM)
  • Installed by mounting the boot drive elsewhere, installing, adding sshd, and then moving it over, so maybe it's a low-level disk driver problem

Update #1: I caught some of the fine print here (emphasis mine):

When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the nosync argument to indicate that an initial synchronization from the first device is not required.

This might indicate that the "long copy time" is expected behavior.

2 Answers


If you are creating an LVM mirror volume from scratch that is several TB in size, you can use the --nosync option.

From man lvcreate:

    Specifying the optional argument --nosync will cause the creation of
    the mirror to skip the initial resynchronization. Any data written
    afterwards will be mirrored, but the original contents will not be
    copied. This is useful for skipping a potentially long and resource
    intensive initial sync of an empty device.
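
So for an empty, freshly created volume, something like this would skip the wait (a sketch; the size and names mirror the question's setup):

```shell
# --nosync: skip the initial mirror sync. Safe ONLY for a brand-new,
# empty LV -- any pre-existing contents would NOT be copied.
sudo lvcreate -m1 --nosync -L 1.8T -n lv1 vg1
```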
c4f4t0r
  • On a running mirror, I/O very often slows down if anything big is being created/copied. Do you believe there is some way to make the mirroring less intensive? – Aquarius Power Aug 31 '16 at 00:05
  • @AquariusPower You can set a limit on IO by putting the kernel thread and the lvconvert mirror process under the cgroup IO controller. – c4f4t0r Aug 31 '16 at 13:28
  • Sounds quite interesting. I am trying to determine the specific command to use, following this: http://serverfault.com/questions/563129/i-o-priority-per-lvm-volume-cgroups. I also created this question: http://unix.stackexchange.com/questions/306827/lvm-mirror-creates-high-load-from-time-to-time-how-to-lower-these-effects, and I am looking for a GUI frontend to do it :) – Aquarius Power Sep 01 '16 at 01:40
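
As a rough illustration of the cgroup approach mentioned above (a sketch for the legacy cgroup-v1 blkio controller; the cgroup name, device numbers, PID variable, and bandwidth limit are all hypothetical):

```shell
# Hypothetical sketch: cap writes to block device 8:16 (check lsblk for
# your real major:minor numbers) at 10 MB/s for one process.
sudo mkdir -p /sys/fs/cgroup/blkio/lvm_throttle
echo "8:16 10485760" | sudo tee /sys/fs/cgroup/blkio/lvm_throttle/blkio.throttle.write_bps_device
echo "$SYNC_PID" | sudo tee /sys/fs/cgroup/blkio/lvm_throttle/cgroup.procs
```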

To answer your first question, yes, it is being mirrored. The output of the lvs command shows that information in the Copy% column:

LV             VG        Attr      LSize   Pool Origin Data%  Move Log      Copy%
lv1            vg1       mwi-aom--   1.80t                         lv1_mlog  43.83

As we can see, in your case you are at 43.83%, which means that at the time the mirror was still only partially synced (you could not yet recover from the mirror).

To get more information and see which disks are used, you need to list the devices alongside your logical volumes. This is done using the following command:

sudo lvs -a -o +devices

Then you'd see which device is used by which image. On my computer (which has two drives of the same size) it looks like this:

LV                      VG      Attr       LSize    Pool Origin Data%  Meta%  Move Log         Cpy%Sync Convert Devices
root                    tristan mwi-aom--- <250.00g                                [root_mlog] 100.00           root_mimage_0(0),root_mimage_1(0)
[root_mimage_0]         tristan iwi-aom--- <250.00g                                                             /dev/sda5(0)
[root_mimage_1]         tristan iwi-aom--- <250.00g                                                             /dev/sdb5(0)
[root_mlog]             tristan mwn-aom---    4.00m                                            100.00           root_mlog_mimage_0(0),root_mlog_mimage_1(0)
[root_mlog_mimage_0]    tristan iwi-aom---    4.00m                                                             /dev/sdb5(472781)
[root_mlog_mimage_1]    tristan iwi-aom---    4.00m                                                             /dev/sdb5(63999)
swap_1                  tristan -wi-ao----  <15.97g                                                             /dev/sda5(472748)
tristan-home            tristan rwi-aor---   <1.56t                                            2.72             tristan-home_rimage_0(0),tristan-home_rimage_1(0)

In your case, it will have used the 1 TB and 1.5 TB drives for one copy and the 2 TB drive for the other copy. The result is a volume of about 1.8 TiB (the drive says 2 TB, but that's really 2,000,000,000,000 bytes, i.e. 2×10¹² and not 2⁴¹). The LVM software is smart enough to select multiple physical devices when you ask for a RAID1 mirror. I do not know whether there is a way to force a certain physical device without creating a specific group for your RAID.
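
The arithmetic behind the ~1.8T figure, as a quick check:

```shell
# 2 TB as marketed = 2,000,000,000,000 bytes; lsblk reports binary units.
# Divide 2 * 10^12 bytes by 2^30 bytes per GiB:
echo $(( 2000000000000 / 1073741824 ))   # prints 1862 (GiB), i.e. ~1.82 TiB
```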

As for your second question (copy this slow?), I suppose you meant "write this slow?", and yes, writes can end up being slow. For best RAID1 performance you need N drives which have the exact same characteristics and have all blocks at the same locations on all drives.

The worst case for slow writes is when each drive has a different speed. For example, you may have a 5400 rpm and a 7200 rpm drive, in which case you will be bound by the slower drive, plus the overhead of keeping the two in sync.

Your hardware may also be a limiting factor: the DMA channels used to send data to the drives need to be properly shared, and some older desktop computers may just not be well suited for RAID1.

Alexis Wilke