I have a process that reads data from a hardware device using DMA transfers at ~4 × 50 MB/s, and at the same time the data is processed, compressed and written to a 4 TB memory-mapped file.
Each DMA transfer should take (and on average does take) less than 20 ms. However, a few times every 5 minutes a DMA transfer can take up to 300 ms, which is a huge issue.
We believe this might be related to the kernel flushing dirty memory-mapped pages to disk: if we stop writing to the mapped memory, the DMA transfer durations are fine. However, we are confused as to how/why this could affect the DMA transfers, and whether there is a way to avoid it.
The hardware device has some memory to buffer data, but when the DMA transfers are this slow we are losing data.
Currently we're testing on Arch Linux with a 4.1.10 LTS kernel, but we've also tried Ubuntu 14.04 with mostly worse results. The hardware is an HP Z820 workstation with 32 GB RAM and dual Xeon E5-2637 @ 3.50 GHz (http://www8.hp.com/h20195/v2/GetPDF.aspx/c04111177.pdf).
We also tried a Windows version of our software, which does not suffer from this specific issue but has lots of other problems.
Comments:

Try replacing the `mmap()`'d file with `open()` and `write()` calls using the `O_DIRECT` flag on the `open()`. This won't use the page cache and the kernel won't have to flush dirty pages. This does assume your underlying file system supports direct IO. – Andrew Henle Oct 26 '15 at 09:57

Try `chrt -f` and `ionice -c 1` to give realtime scheduling to your process; may help with IO queueing. Also, stop `cron` and other background jobs you are not interested in (eg temporarily with `kill -stop` then `kill -cont`). – meuh Oct 26 '15 at 14:30

The problem also occurs if we read from the file (e.g. with `cat`) while it's being written, or if we use `O_DIRECT`. The second one I can understand but the first one makes no sense to me. – ronag Oct 26 '15 at 15:29

`chrt -f` solved our problem. If you create an answer with your suggestion I can accept it. – ronag Oct 28 '15 at 11:43