29

Is it possible to move a logical volume from one volume group to another in whole?

It is possible to create a (more or less) matching lv and copy the data over, but is there any way to do this with LVM tools alone?

If not, is there a theoretical reason or a technical limitation (extent sizes)?

XTL

6 Answers

21

A volume group consists of whole physical volumes. A physical volume consists of many extents (an extent is typically 4MB); each extent may belong to a different logical volume. To transfer a logical volume to a different group, you cannot simply transfer extents, because that might split the physical volume between the source VG and the target VG.
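
Because extents are the allocation unit, an LV's size is always rounded up to a whole number of extents. A quick arithmetic sketch (assuming the default 4 MiB extent size; your VG may differ):

```shell
# LV sizes are allocated in whole extents, so a request is rounded up.
# With 4 MiB extents, a 10 MiB request occupies 3 extents (12 MiB).
extent_mib=4
requested_mib=10
extents=$(( (requested_mib + extent_mib - 1) / extent_mib ))
echo "${extents} extents = $(( extents * extent_mib )) MiB"
```

The actual extent size of a VG can be checked with vgdisplay or vgs -o +vg_extent_size.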

What you can do is transfer one or more PVs from the source VG to the target VG, with the vgsplit command. You can specify which PVs you want to transfer, or which LV (but only one at a time). If you specify an LV, it and the other LVs in the source VG must be on separate PVs. The destination VG will be created if no VG exists with the specified name.

vgsplit -n source_group/volume_to_copy source_group target_group
vgsplit source_group target_group /dev/sdy99 /dev/sdz99

You may need to use pvmove first to arrange for the logical volumes you want to move to be on separate PVs.
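
For example (hypothetical LV name, reusing the PV names from above), you could gather the LV's extents onto a single PV first:

```shell
# Hypothetical example: collect all extents belonging to lv_to_move onto
# /dev/sdz99, so that vgsplit can later transfer that PV cleanly.
pvmove -n lv_to_move /dev/sdy99 /dev/sdz99
```

This needs to run as root against a live LVM setup, so it is shown here as an untested sketch.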

If instead you want to keep the physical volumes in their original groups and move only the data, there's no single built-in command, but you can create a mirror of the LV and then remove the original.

17

As of the LVM version in Debian stretch (9.0), namely 2.02.168-2, it is possible to copy a logical volume across volume groups using a combination of vgmerge, lvconvert, and vgsplit. Since a move is a copy followed by a delete, this also works for a move.

Alternatively, you can use pvmove to move the physical extents instead. To quote U&L: Purpose of Physical Extents:

A single physical extent is the smallest unit of disk space that can be individually managed by LVM.

A complete self-contained example session using loop devices and lvconvert follows.

Summary: we create volume group vg1 with logical volume lv1, and vg2 with lv2, and make a copy of lv1 in vg2.

Create files.

truncate pv1 --size 100MB
truncate pv2 --size 100MB

Set up loop devices on files.

losetup /dev/loop1 pv1
losetup /dev/loop2 pv2

Create physical volumes on loop devices (initialize loop devices for use by LVM).

pvcreate /dev/loop1 /dev/loop2

Create volume groups vg1 and vg2 on /dev/loop1 and /dev/loop2 respectively.

vgcreate vg1 /dev/loop1
vgcreate vg2 /dev/loop2

Create logical volumes lv1 and lv2 on vg1 and vg2 respectively.

lvcreate -L 10M -n lv1 vg1
lvcreate -L 10M -n lv2 vg2

Create ext4 filesystems on lv1 and lv2.

mkfs.ext4 -j /dev/vg1/lv1
mkfs.ext4 -j /dev/vg2/lv2

Optionally, write something on lv1 so you can later check that the copy was created correctly.

Make vg1 inactive.

vgchange -a n vg1

Run the merge command in test mode first. This merges vg1 into vg2.

# vgmerge -A y -l -t -v <<destination-vg>> <<source-vg>>
vgmerge -A y -l -t -v vg2 vg1

And then for real.

vgmerge -A y -l -v vg2 vg1

Then create a RAID 1 mirror pair from lv1 using lvconvert. The dest-pv argument tells lvconvert to make the mirror copy on /dev/loop2.

# lvconvert --type raid1 --mirrors 1 <<source-lv>> <<dest-pv>>
lvconvert --type raid1 --mirrors 1 /dev/vg2/lv1 /dev/loop2

Then split the mirror. The new LV is now lv1_copy.

# lvconvert --splitmirrors 1 --name <<source-lv-copy>> <<source-lv>>
lvconvert --splitmirrors 1 --name lv1_copy /dev/vg2/lv1

Make vg2/lv1 inactive.

lvchange -a n /dev/vg2/lv1

Then (testing mode)

# vgsplit -t -v <<source-vg>> <<destination-vg>> <<moved-to-pv>>
vgsplit -t -v /dev/vg2 /dev/vg1 /dev/loop1

For real

vgsplit -v /dev/vg2 /dev/vg1 /dev/loop1

Resulting output:

lvs
[...]
lv1        vg1       -wi-a-----  12.00m
lv1_copy   vg2       -wi-a-----  12.00m
lv2        vg2       -wi-a-----  12.00m
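
If you wrote a marker file on lv1 earlier, you can now verify the copy. A hedged sketch (the mount points and file name are hypothetical, and this needs root):

```shell
# Activate both volumes, mount them, and compare the marker file.
lvchange -a y /dev/vg1/lv1
lvchange -a y /dev/vg2/lv1_copy
mkdir -p /mnt/lv1 /mnt/lv1_copy
mount /dev/vg1/lv1 /mnt/lv1
mount /dev/vg2/lv1_copy /mnt/lv1_copy
diff /mnt/lv1/marker.txt /mnt/lv1_copy/marker.txt && echo "copy matches"
umount /mnt/lv1 /mnt/lv1_copy
```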

NOTES:

  1. Most of these commands need to be run as root. The # in front of some of the commands marks a comment showing the general form of the command; it does not represent the root prompt.

  2. If there is any duplication of the names of the logical volumes in the two volume groups, vgmerge will refuse to proceed.

  3. On merge, the logical volumes in vg1 must be inactive. And on split, the logical volumes moving to vg1 must be inactive. In this example, that is lv1.

Faheem Mitha
  • 1
    Nice runthrough. Ended up using it to move LVs to a different set of disks with new UUIDs, where I normally dd and deal with duplicates. – Yarek T Nov 11 '21 at 08:25
8

I will offer my own:

umount /somedir/

lvdisplay /dev/vgsource/lv0 --units b

lvcreate -L 12345b -n lv0 vgtarget
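
The 12345b above stands for the source LV's exact size reported by lvdisplay; that step can be scripted. A sketch (the lvs output is simulated here, since the real command needs a live LVM setup as root):

```shell
# In real use: size=$(lvs --noheadings --units b -o lv_size vgsource/lv0)
sample="  12582912B"                       # simulated `lvs` output
size_bytes=$(echo "$sample" | tr -d ' B')  # strip padding and the unit suffix
echo "lvcreate -L ${size_bytes}b -n lv0 vgtarget"
```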

dd if=/dev/vgsource/lv0 of=/dev/vgtarget/lv0 bs=1024K conv=noerror,sync status=progress

mount /dev/vgtarget/lv0 /somedir/

If everything is good, remove the source.

lvremove vgsource/lv0
conan
  • 1
    This is pretty much the opposite of the question. The point is to move the volume instead of copying the data to a new one. – XTL Jun 17 '18 at 07:55
  • 2
    I think this is actually a good answer. There is no 'move' in this context, just a copy and delete. This answer describes pretty much exactly what will happen if you 'move' the lv.

    The difference here is that this is more transparent to the user and thus safer.

    – user62469 Mar 21 '20 at 15:09
  • I considered the complicated solution, and just keep coming back to the simplest method. Make a new lv, and make a bit for bit copy. lvm is not zfs, I like it that way, and I prefer the dumb solution. – user128063 Sep 23 '20 at 04:41
  • This is essentially a poor man's version of pvmove without any of the safety offered by that command. Being "more transparent" doesn't make it safer in the slightest. The dd part cannot be done online (you have to stop using the source lv for the entire duration of the copy), a single typo in the dd command can destroy data. pvmove will never destroy any of your data, no matter how hard you screw up or how badly stuff fails. Worst case, you'll be left with your original LV and an incomplete mirror. – TooTea Sep 26 '20 at 21:00
  • I didn't do dd between two SSDs because I'm not sure how it handles trim operations – Thorsten Oct 05 '20 at 10:48
  • The only problem with pvmove is the start, pv. When you want to move a logical volume, you do not want to mess with physical volumes or even mess up a volume group with random temporary guest PVs. ZFS handles this more transparently by allowing you to export/import a volume through send/receive. – Gabor Garami Nov 28 '20 at 16:22
  • @TooTea keep in mind that pvmove had serious bugs for quite a few years in the far past and would definitely lose data (at least from a lvm pov), i.e. if it OOM'ed. – Florian Heigl Apr 14 '22 at 01:21
6

The precise answer to this question is: "No, it is not possible to (logically) move a Logical Volume (LV) from one Volume Group (VG1) to another (VG2). The data must be physically copied."

Reason: a Logical Volume's data is physically stored on the block devices (disks, partitions) assigned to its Volume Group. Moving a Logical Volume from VG1, consisting of /dev/sda and /dev/sdb, to VG2, consisting of /dev/sdc, would require moving data from /dev/sda and/or /dev/sdb to /dev/sdc, which is a physical copy operation between at least two block devices (or partitions).

P.S. If all the LV's data were stored on a single Physical Volume that could be removed from VG1 entirely, then that Physical Volume could be reassigned to VG2. But that would be moving a Physical Volume from one Volume Group to another, not moving a Logical Volume.

Krzysztof
  • 2
    Great answer; if you have any references, please add them; they would make your answer even greater! – mattia.b89 Apr 05 '20 at 07:53
3

Let's say you have a volume group named s0

$ pvs -o+pv_used

PV         VG Fmt  Attr PSize    PFree   Used
/dev/sda2  cl lvm2 a--  <118.24g      0  <118.24g
/dev/sdb   s0 lvm2 a--  <223.57g      0  <223.57g
/dev/sdc1  s0 lvm2 a--  <465.76g      0  <465.76g
/dev/sdd1     lvm2 ---   931.51g 931.51g       0 

I want to move /dev/sdb and /dev/sdc1 to a new physical disk, /dev/sdd1

Create a physical volume on sdd1

$ pvcreate /dev/sdd1

You can now extend your Volume Group s0 with the new disk

$ vgextend s0 /dev/sdd1

You can now start moving data

$ pvmove /dev/sdb /dev/sdd1

Wait for it to finish

/dev/sdb: Moved: 10.0%
...
/dev/sdb: Moved: 50.0%
...
/dev/sdb: Moved: 100.0%

Check

$ pvs -o+pv_used

PV         VG Fmt  Attr PSize    PFree   Used    
/dev/sda2  cl lvm2 a--  <118.24g      0  <118.24g
/dev/sdb      lvm2 ---   223.57g 223.57g       0 
/dev/sdc1  s0 lvm2 a--  <465.76g      0  <465.76g
/dev/sdd1  s0 lvm2 a--  <931.51g 707.94g <223.57g

Now you can remove /dev/sdb from the s0 group

$ vgreduce s0 /dev/sdb

Follow the same steps for /dev/sdc1
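
Spelled out, those same steps for /dev/sdc1 would look like this (untested sketch; needs root):

```shell
pvmove /dev/sdc1 /dev/sdd1    # migrate all extents off sdc1
vgreduce s0 /dev/sdc1         # remove the now-empty PV from the group
pvremove /dev/sdc1            # optionally wipe the LVM label from the disk
```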

  • This also is not an answer to the question as the LV is still in the same volume group. Only PVs have been changed. – XTL Nov 15 '22 at 13:22
-1

When I want to copy lv1 from vg1 to vg2 I create /dev/vg2/lv1 (same size as /dev/vg1/lv1) and use:

blocksync.py -f /dev/vg1/lv1 localhost /dev/vg2/lv1

It also works across servers.

When you use it for making regular copies, it will skip sending identical blocks. For a move, just drop /dev/vg1/lv1 afterwards.

AdminBee