I have a very simple script that remuxes all files in all subfolders to MKV:
```bash
#!/bin/bash
# Works with subfolders too
shopt -s nullglob
shopt -s extglob
shopt -s nocaseglob
shopt -s globstar

for file in "${1%/}/"**/*.@(mp4|avi); do
    mkvmerge -v -M -B --no-chapters --disable-language-ietf --engage no_cue_duration --engage no_cue_relative_position --clusters-in-meta-seek --disable-lacing --engage no_simpleblocks "$file" -o "${file%.*}.mkv" &
done
```
How can I limit the background jobs to batches of 300? That is, I'd like to run this command on 300 files at a time, wait for them to finish (or wait X time), and then start a new batch.
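For reference, here is a minimal sketch of the batch idea kept close to the loop above, using plain `wait` after every 300 jobs (the mkvmerge flags are trimmed for brevity; 300 is the batch size from the question):

```bash
#!/bin/bash
shopt -s nullglob extglob nocaseglob globstar

count=0
for file in "${1%/}/"**/*.@(mp4|avi); do
    mkvmerge -M -B --no-chapters "$file" -o "${file%.*}.mkv" &
    # after every 300 background jobs, wait for the whole batch to finish
    (( ++count % 300 == 0 )) && wait
done
wait    # wait for the final, possibly partial, batch
```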
`xargs -P` is probably the simplest method. But why 300? Does your system have 300 CPU cores/threads? If not, you probably want to limit it to a maximum of 1 background job per core... and since `mkvmerge` is an I/O-intensive process, you will want to run significantly fewer than that in parallel, otherwise they'll all be fighting each other for disk I/O. Also, if you have more than one drive, it's a good idea to write the output file to a separate drive so that reads aren't contending with writes for I/O. – cas Jun 23 '21 at 07:57
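A hedged sketch of that `xargs -P` approach, capping parallelism at the core count via `nproc` (GNU findutils and coreutils assumed; mkvmerge flags again trimmed for brevity):

```bash
# run at most one mkvmerge per CPU core, NUL-delimited to survive odd filenames
find "${1%/}" -type f \( -iname '*.mp4' -o -iname '*.avi' \) -print0 |
  xargs -0 -r -n1 -P "$(nproc)" \
    sh -c 'mkvmerge -M -B --no-chapters "$1" -o "${1%.*}.mkv"' _
```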
Pipe the output of `find ... -print0` into `xargs -0 -r -P`, which runs a script that iterates over the filename args and runs `mkvmerge` on them. BTW, there is I/O happening, it's just happening on a remote file server... and even the fastest of networks would struggle, at best, to keep up with the I/O performance of a local disk, especially if it's an SSD or ramdisk. Quite often it's better/faster to write to a temporary dir on a fast local fs, then `mv` the file to its final location when finished. – cas Jun 24 '21 at 01:31
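Putting that second comment together: a sketch of a helper script that `xargs` can invoke, writing each remux to a fast local temp dir and `mv`-ing the result into place afterwards. The script name `remux.sh`, the trimmed flag list, and the `-n25 -P4` batch sizes are illustrative assumptions, not anything from the original post:

```bash
#!/bin/bash
# remux.sh — iterate over the filename args passed in by xargs
tmpdir=$(mktemp -d) || exit 1          # temp dir on a fast local fs, e.g. /tmp
trap 'rm -rf "$tmpdir"' EXIT           # clean up leftovers on exit

for file; do
    out="$tmpdir/$(basename "${file%.*}").mkv"
    # write locally first, then move to the (possibly remote) final location
    mkvmerge -M -B --no-chapters "$file" -o "$out" &&
        mv "$out" "${file%.*}.mkv"
done
```

Invoked along the lines the comment describes:

```bash
find /path/to/videos -type f \( -iname '*.mp4' -o -iname '*.avi' \) -print0 |
  xargs -0 -r -n25 -P4 ./remux.sh
```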