I'm using tar to make backups of a machine, but it uses a lot of I/O and slows down the whole machine. So, is there a way to limit the read speed of tar?
I know about pv, but it only limits the write speed. Because I do incremental backups with tar --listed-incremental, that would only help with the first full backup (subsequent incremental backups still consume a lot of read I/O even when there are only small changes).
I've tried to lower the overall priority of the backup with a combination of nice and ionice, but this doesn't really change anything.
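Presumably that attempt looked roughly like the following (the exact values are assumptions). ionice classes only take effect with an I/O scheduler that honours them, such as CFQ or BFQ, which may be part of why it made little difference here:

    # Hypothetical invocation: lowest CPU priority plus the "idle" I/O class.
    nice -n 19 ionice -c 3 \
        tar --create --gzip --file=/backup/full.tar.gz \
            --listed-incremental=/backup/snapshot.file \
            --exclude=/proc --exclude=/sys --exclude=/backup \
            /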
For information: it's a Debian 9 machine, and the files reside on an ext4 file system on top of an LVM volume.
Comments:

…tar using something like pv, as you mentioned? – Andrew Henle Dec 16 '18 at 21:53

gtar in general is unable to restore its incremental backups. This only works if the differences between two incrementals are trivial and do not include renamed directories. – schily Dec 17 '18 at 16:23