3

I'm trying to save some disk space across multiple systems by re-using the same swap partition (which seems to be fine).

I'm now getting into multiple virtual machines and am thinking of passing through a partition on another drive to be used as Linux swap.

It would be possible (I assume) to point both systems at the same swap partition, but would it cause problems? Since the applications are memory-mapped and don't conflict, are OS identifiers also mapped, or are they part of the application mapping?

The scenario I'm concerned about is multiple VMs running (possibly paused, so not actively running concurrently).

To be explicit:

  1. Start VM 1
  2. Pause VM 1
  3. Start VM 2

Both share the same swap file. No hibernation is involved. I have seen this: FreeBSD and Linux shared partitions, but the answer states it is unconfirmed. Also, I need to know if it breaks down for different OSes/kernel versions.
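For reference, under qemu/kvm with libvirt the sequence above would look like this (the domain names `vm1` and `vm2` are placeholders for this sketch; `virsh suspend` pauses the guest in RAM without flushing anything to disk):

```shell
virsh start vm1      # boot the first guest
virsh suspend vm1    # pause it; its memory stays resident, nothing is flushed to swap
virsh start vm2      # boot the second guest against the same backing swap device
```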

DarkSheep
  • That one implies that both systems are not running at the same time. Depending on timing, mine wouldn't be running concurrently either, but they're not hibernating. For example, pausing a VM isn't a hibernate, since it doesn't flush memory to disk. – DarkSheep Jun 13 '13 at 01:12
  • This would seem to say you can do it, at least on VirtualBox – slm Jun 13 '13 at 01:28
  • I should have mentioned that I am using qemu/kvm with virt-manager. I am unsure how that changes things, but I am mentioning it. – DarkSheep Jun 13 '13 at 01:37
  • I use KVM as well and have swap allocated to each VM. Realize that each one is using their own .img or .qcow file for this however. So you're adding potential disk I/O with this. But I don't see the difference in doing it that way vs. a single swap they all share. – slm Jun 13 '13 at 01:53

2 Answers

3

I think what you're going to find is that it's possible but probably not advisable. I looked at 3 virtualization technologies:

  • VirtualBox
  • VMWare
  • KVM

It would appear to be possible in the first 2 (VirtualBox and VMWare).

For KVM it seems possible as well, but with several caveats. For one, newer versions of RHEL ship a technology called KSM (Kernel Samepage Merging), which allows identical pages across VM guests to be shared; however, those pages are "pinned" in memory and cannot be swapped out.

excerpt - Kernel Virtual Machine (KVM) Best practices for KVM

The system cannot swap memory pages that are shared by KSM because they are pinned.

So it becomes unclear what happens if the VM has its own swap and one of these guest pages needs to be swapped out by the VM. It would presumably do so, but the physical memory wouldn't be freed (that would be my guess), thus wasting disk within the VM's swap while still consuming the physical RAM.
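You can check whether KSM is active on the host, and how many pages it is currently merging (and therefore pinning), via its standard sysfs interface; the files are simply absent if the kernel was built without KSM:

```shell
# Report KSM state; each file may be missing if the kernel lacks KSM support.
for f in run pages_shared pages_sharing; do
    p="/sys/kernel/mm/ksm/$f"
    if [ -r "$p" ]; then
        printf '%s=%s\n' "$f" "$(cat "$p")"
    else
        printf '%s: not available\n' "$f"
    fi
done
```

`run=1` means KSM is scanning; `pages_shared` counts the physical pages backing merged copies, and `pages_sharing` counts the guest pages deduplicated onto them.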

Additionally, it seems dangerous to have all the VMs using the same swap for this reason: when the swap becomes full, all the VMs are at risk, since the resource has essentially been exhausted.

excerpt Overcommitting with KVM

As KVM virtual machines are Linux processes, memory used by virtualized guests can be put into swap if the guest is idle or not in heavy use. Memory can be committed over the total size of the swap and physical RAM. This can cause issues if virtualized guests use their total RAM. Without sufficient swap space for the virtual machine processes to be swapped to, pdflush, the cleanup process, starts. pdflush kills processes to free memory so the system does not crash. pdflush may destroy virtualized guests or other system processes which may cause file system errors and may leave virtualized guests unbootable.

Anatomy of a swap out on KVM

There is an excellent write up on the KVM website that discusses how a VM provisions memory as well as how it eventually will make use of swap. It was written for qemu-kvm v0.12, so I don't know how much has changed since that version.

excerpt from above page

Swap-out path

Now, let's say the host is under memory pressure. The page from above has gone through the Linux LRU and has found itself on the inactive list. The kernel decides that it wants the page back:

  1. The host kernel uses rmap structures to find out in which VMA (vm_area_struct) the page is mapped.

  2. The host kernel looks up the mm_struct associated with that VMA, and walks down the Linux page tables to find the host hardware page table entry (pte_t) for the page.

  3. The host kernel swaps out the page and clears out the pte_t (let's assume that this page was only used in a single place). But, before freeing the page:

  4. The host kernel calls the mmu_notifier invalidate_page(). This looks up the page's entry in the NPT/EPT structures and removes it.

  5. Now, any subsequent access to the page will trap into the host ((2) in the fault-in path above)

So what the above is trying to say is that when a page of the guest VM's memory needs to be swapped out, it's done so by the host. But realize this happens when the entire host has exhausted its RAM, not when a guest VM has.
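Because each KVM guest is just a host process, you can see how much of a given guest the host has actually swapped out from the `VmSwap` line of its status file (the `qemu-system` process name is an assumption; adjust the pattern for your setup):

```shell
# Print swapped-out memory per qemu process; prints nothing if no guests run.
for pid in $(pgrep -f qemu-system 2>/dev/null); do
    awk -v p="$pid" '/^VmSwap:/ {print "pid " p ": " $2 " " $3}' "/proc/$pid/status"
done
```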

Should you use swap in KVM?

I do, as long as you understand that if you have a lot of over-provisioned VMs on a system, they may all be hitting the disk at once, creating a ton of I/O. See this ServerFault question for more details.
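To see whether that swap I/O is actually happening, you can sample the host-wide counters in `/proc/vmstat` (`pswpin`/`pswpout` count pages swapped in/out since boot):

```shell
# Take two samples a second apart; if the numbers grow between samples,
# guests are actively pushing pages through the shared swap device.
grep -E '^pswp(in|out) ' /proc/vmstat
sleep 1
grep -E '^pswp(in|out) ' /proc/vmstat
```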

slm
1

If the OSes are running concurrently in VMs with one shared swap partition you'll wreak havoc.

Don't go there, buy an extra disk.

tink
  • Hey can you read this post and tell me how you interpret it? It would seem to say that you can share swap and /tmp (at least in VirtualBox). – slm Jun 13 '13 at 01:25
  • @slm: Hummm ... that would suggest it IS possible. Not sure how that's supposed to work, though, short of Vbox creating temporary files for the immutable VDI to grow into as required, a separate one for each VM (which would kind of defeat the idea of having them to preserve space). – tink Jun 13 '13 at 01:38
  • I still agree with your assessment. – slm Jun 13 '13 at 01:44