I want to run a PostgreSQL dump from a remote server, which will take a couple of hours. I am using a bash script on the target to do so and start it with `at`.

Now, in order to prevent things from breaking, I use `nohup`, but I am not sure whether this is even needed (since the script is started with `at`), and whether it would be better to use it in the `pg_dump` command directly (see below) or rather to start the script with `nohup` and skip it in the command itself.
The command in the script currently is:
```bash
nohup nice pg_dump -h ${SOURCE} -d ${DB_NAME} -U ${DB_USER} -v -Fd -j 2 \
    -f ${DUMPDIR}/${DUMPFILE}_"${NOW}" > ${DUMPDIR}/${DUMPFILE}_"${NOW}".out
```
`nohup` with `at`? There's no interactive terminal associated with things run from `at` anyways. So, which problem are we actually solving here? – Marcus Müller Jun 08 '23 at 10:08
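(A quick way to verify this claim: queue a trivial job and check whether it has a terminal. The temp-file path below is arbitrary.)

```bash
# The queued job has no controlling terminal, so tty reports "not a tty".
echo 'tty > /tmp/at_tty_check 2>&1' | at now
sleep 2 && cat /tmp/at_tty_check   # typically prints: not a tty
```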
`nice` isn't likely to be of much use, either, as `pg_dump` will be mostly I/O bound. The real concern is all those unquoted variables in the script. See *$VAR vs ${VAR}* and *to quote or not to quote* and *Why does my shell script choke on whitespace or other special characters?*. Also *Security implications of forgetting to quote a variable in bash/POSIX shells*. – cas Jun 08 '23 at 10:39
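(Following that advice, a quoted variant of the command from the question could look like this; the variables are the same, only the quoting changes.)

```bash
# Double quotes prevent word splitting and globbing on paths and names.
nohup nice pg_dump -h "$SOURCE" -d "$DB_NAME" -U "$DB_USER" -v -Fd -j 2 \
    -f "${DUMPDIR}/${DUMPFILE}_${NOW}" > "${DUMPDIR}/${DUMPFILE}_${NOW}.out"
```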
`xz` isn't really known for its speed, so it might introduce a new bottleneck. `gzip` is faster, but still single-threaded (use Adler's `pigz` instead whenever possible), and its compression/speed ratio is still pretty bad by modern compressors' standards. `zstd -3` typically outperforms gzip/zlib in every respect: faster than `gzip -1` (and any higher gzip compression setting), and better at compressing than `gzip -9` (and any faster gzip setting). If you need even faster compression than `zstd`, `lz4` is also an option for some requirements. – Marcus Müller Jun 08 '23 at 11:11