13

My btrfs metadata is getting full. (I'm creating hourly snapshots using btrbk.)

How do I increase / extend the space allocated to the metadata of a btrfs filesystem?

Or is it automatically expanded?

Tom Hale
  • 30,455
  • are you using a partition only for backups? – Rui F Ribeiro Feb 25 '17 at 08:22
  • 1
    It's subvolume @home mounted at /home, with btrbk backups in subvolume btrbk-snap. For a filesystem that supports up to 2^64 snapshots, I'd expect it to have a way to increase metadata size... – Tom Hale Feb 25 '17 at 09:21
  • Take a look at the btrfs balance command: https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-balance – Emmanuel Rosa Feb 27 '17 at 23:21
  • @EmmanuelRosa I have. Which part do you consider useful? – Tom Hale Mar 01 '17 at 05:12
  • There's a section that discusses re-balancing the metadata. Perhaps with the right filter you can grant more space for metadata chunks. – Emmanuel Rosa Mar 02 '17 at 18:55
  • Rebalancing can help while the metadata is not yet full. My question relates to when rebalancing is no longer effective. – Tom Hale Mar 06 '17 at 11:16
  • @TomHale If you state that rebalancing is no longer effective, does that imply that you are low on or out of allocatable disk space? Because as long as there is allocatable space on the devices, there should be no problem with the metadata expanding automatically – humanityANDpeace May 26 '17 at 10:33

3 Answers

12

TL;DR The metadata allocation (provided the btrfs is not suffering a general low-space condition) will grow automatically. In cases where no unallocated free space exists, this automatic growth is hampered. If, however, the data part of the btrfs has been allocated more space than it needs, that space can be redistributed. This is called balancing in btrfs.

Assuming that there is enough unallocated space on the backing block device(s) of the btrfs, the metadata part of the filesystem automatically allocates more space to increase/expand the metadata, just as the OP assumed.

Therefore, the answer is: yes (provided there is no low-space condition in the btrfs), the metadata allocation will be increased automatically, as follows:

(1) We have a look at the initial allocation of the btrfs (on a 40GiB device):

$> btrfs filesystem df /
Data, single: total=25.00GiB, used=24.49GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.55GiB, used=1.33GiB
GlobalReserve, single: total=85.41MiB, used=0.00B

(2) As can be seen, the space allocated in the filesystem to store metadata is 1.55GiB, of which 1.33GiB, hence almost all, is used (this might be the situation occurring in the OP's case).

(3) We now provoke an increase in metadata. To do so, we copy the /home folder using the --reflink=always option of the cp command:

$> cp -r --reflink=always /home /home.copy

(4) Assuming there were lots of files in /home, a lot of new data has been added to the filesystem. Because we used --reflink, however, little to no additional space is needed for the actual file data: the copy shares it via the copy-on-write mechanism. In short, mostly metadata was added to the filesystem. We can hence have another look:

$> btrfs filesystem df /
Data, single: total=25.00GiB, used=24.65GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=2.78GiB, used=2.45GiB
GlobalReserve, single: total=85.41MiB, used=0.00B

As can be seen, the space allocated for metadata in this btrfs has automatically increased.

Since this happens automatically, it normally goes unnoticed by the user. There are some cases, however, mostly those where the whole filesystem is already pretty much filled up, in which btrfs may begin to "stutter" and fail to automatically increase the space allocated for the metadata. The reason is, for example, that all the space has already been allocated to the parts (Data, System, Metadata, GlobalReserve). Confusingly, there may still appear to be free space. An example would be this output:

$> btrfs filesystem df /
Data, single: total=38.12GiB, used=25.01GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.55GiB, used=1.45GiB
GlobalReserve, single: total=85.41MiB, used=0.00B

As can be seen, all 40GiB have been allocated, yet the allocation is somewhat out of balance: while there is still space for new files' data, the metadata (as in the OP's case) is running low. Automatically allocating more space from the devices backing the btrfs filesystem is no longer possible (simply add up the allocation totals: 38.12G + 1.55G + ... ~= 40GiB).

Since there is, however, excess free space allocated to the data part of the filesystem, it can be useful, or even necessary, to balance the btrfs. To balance means to redistribute the already allocated space.
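
To verify that the device space really is fully allocated before balancing, newer versions of btrfs-progs offer a summary view. A minimal check (the mount point / is just an example; output abridged and values illustrative):

$> sudo btrfs filesystem usage /
Overall:
    Device size:          40.00GiB
    Device allocated:     40.00GiB
    Device unallocated:    1.00MiB   <--(nothing left for automatic expansion)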

In the case of the OP, it may be assumed that, for some reason, an imbalance between the different parts of the btrfs allocation has occurred.

Unfortunately, the simple command sudo btrfs balance start -dusage=0 /, which in principle searches for completely empty blocks (allocated for data) and puts them to better use (here: the almost depleted metadata space), may fail because no completely empty data blocks can be found.

The btrfs developers hence recommend successively increasing the usage limit that determines "when data blocks should be rearranged to reclaim space".

Hence, if the result of

$> sudo btrfs balance start -dusage=0 /
Done, had to relocate 0 out of 170 chunks 

shows no relocation, one should step up the usage limit:

$> sudo btrfs balance start -dusage=5 /
Done, had to relocate 0 out of 170 chunks  <--(again fail)
$> sudo btrfs balance start -dusage=10 /
Done, had to relocate 0 out of 170 chunks  <--(again fail)
$> sudo btrfs balance start -dusage=15 /
Done, had to relocate 2 out of 170 chunks  <--(success)
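
This stepping can also be scripted. A minimal sketch, assuming the filesystem is mounted at / (adjust the path and the upper limit as needed); each pass relocates the data chunks whose usage is at or below the given percentage:

$> for u in 0 5 10 15 20 25 30; do sudo btrfs balance start -dusage=$u /; done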

The other answer hinted at the influence of the btrfs nodesize, which somewhat influences how quickly the metadata grows. The nodesize is (as mentioned in the other answer) set only once, at mkfs.btrfs filesystem creation time. In theory, one could reduce the size of the metadata by changing to a lower nodesize, if that were possible (it is not!).

The nodesize, however, cannot help to expand or increase the allocated metadata space in any way; at best it might have helped to conserve space in the first place. A smaller nodesize is, moreover, not guaranteed to reduce the metadata size: in some cases a larger nodesize reduces the tree-traversal length of btrfs, since each node can contain more "links".
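
For completeness: the nodesize can only be chosen when the filesystem is created, e.g. as follows (the device name is a placeholder, and mkfs of course destroys any existing data on it):

$> sudo mkfs.btrfs -n 32768 /dev/sdX1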

1

According to the FAQ on the btrfs wiki, this isn't possible and isn't likely to be implemented.

Can I change metadata block size without recreating the filesystem?

No, the value passed to mkfs.btrfs -n SIZE cannot be changed once the filesystem is created. A backup/restore is needed. Note, that this will likely never be implemented because it would require major updates to the core functionality.

You could then migrate the data to a new btrfs filesystem created with a larger -n SIZE. Once the data has been copied over, you may even be able to wipe the old device, add it to the new filesystem with btrfs device add, and run a balance.
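
A rough sketch of such a migration (all device names and mount points below are placeholders; take a verified backup first):

$> sudo mkfs.btrfs -n 32768 /dev/sdY1           # new filesystem with a larger nodesize
$> sudo mount /dev/sdY1 /mnt/new
$> sudo cp -a /mnt/old/. /mnt/new/              # or btrfs send/receive to preserve snapshots
$> sudo umount /mnt/old
$> sudo btrfs device add -f /dev/sdX1 /mnt/new  # reuse the old device
$> sudo btrfs balance start /mnt/new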

See also the section about nearly full drives.

etskinner
  • 605
  • 2
    There is a difference between the nodesize (which, as you state, can only be set once at filesystem creation) and the metadata size, which imho is what the btrfs filesystem df / command's output line Metadata, single: total=xxGiB, used=xxx represents. – humanityANDpeace May 26 '17 at 10:29
  • What's your point? Sure there's a difference, but it's not a matter of opinion. -n sets the size of each metadata block, while btrfs filesystem df shows the space used up by those blocks. If the original post author has too much space taken up by metadata vs actual data, they should decrease -n, so that metadata blocks have a smaller minimum size (less space taken up but more fragmentation). – etskinner May 26 '17 at 11:51
  • 3
    The OP asks how to "increase / extend the space allocated to the metadata", hence your "impossible to set the nodesize (-n option)" answer is, imho, not to the point. His further question, "or is [the allocated space for metadata] automatically expanded?", hints that it's not the nodesize but the allocated space he's interested in. The answer should therefore be "yes". You are of course correct in hinting that the -n setting influences the metadata size indirectly, but the question seems more directed towards the space allocated to metadata, not the size of a single metadata node... – humanityANDpeace May 26 '17 at 12:36
  • ... say, for example, that there are 100G of unused space on the backing block device; in that case the allotted/allocated space for metadata will get automatically expanded (as the OP rightly assumed). With btrfs, however, it is not uncommon that all the backing devices' space has already been allotted to data block groups, which means that, despite free space (for the data itself), "no space" errors are generated. The user hence needs to attempt a btrfs balance, which, if there is still space in other segments, can redistribute it to increase the lacking metadata space. – humanityANDpeace May 26 '17 at 12:40
0

Yesterday, df -h showed my disk as full, even though about 4G were free:

Filesystem                Size      Used Available Use% Mounted on
/dev/sdb3                29.3G     25.9G      0  100% /mnt/sdb3

so I ran btrfs fi df /mnt/sdb3 and saw that the metadata was full. I then expanded the metadata as follows, and the filesystem worked again.

Make a 5G file on another disk and attach it as a loop device:

dd if=/dev/zero of=/mnt/sda1/tmpBtrfs.img bs=1G count=5
losetup -v -f /mnt/sda1/tmpBtrfs.img

Add it to the filesystem; df -h then shows 9G free:

btrfs device add /dev/loop1 /mnt/sdb3

Run a balance to free up space:

btrfs balance start /mnt/sdb3

Then delete the 5G device that was just added:

btrfs device delete /dev/loop1 /mnt/sdb3
losetup -d /dev/loop1
rm /mnt/sda1/tmpBtrfs.img

Afterwards, the metadata allocation will have expanded automatically (to about twice the amount used).
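
To confirm that the metadata allocation really grew, re-run the df subcommand from above (exact numbers will vary):

btrfs fi df /mnt/sdb3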

zyyme
  • 1