I have been using Ubuntu, Kali and lately Parrot for approximately the past 10-15 years (notebook, private use). Every now and then there are small problems to solve, but in general everything works as it should. The one fundamental exception, and a massive annoyance, is copying files.
Right now I am trying to copy a single 100 GB file from my notebook to an external hard disk connected via USB. The first few GB go quite fast, then the speed drops to a few hundred kB per second. On good days it stays at 3-4 MB/s. Sometimes the speed picks up for short periods, only to slow down again. When copying folders with many smaller files the behaviour is the same. This makes it basically impossible to copy large amounts of data.
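In case it helps with diagnosis, this is roughly how I can watch the behaviour from a terminal while a copy is running: the first command shows how much unwritten data the kernel is holding in RAM, and the second measures the raw write speed of the disk with caching bypassed. The mount point below is just a placeholder for wherever the external disk happens to be mounted.

```
# Watch how much dirty data is waiting to be written back while the copy runs
watch -n 1 'grep -E "Dirty|Writeback" /proc/meminfo'

# Rough measurement of the raw device write speed, bypassing the page cache
# (/media/usb is a placeholder for the actual mount point of the external disk)
sync
dd if=/dev/zero of=/media/usb/testfile bs=1M count=1024 oflag=direct status=progress
rm /media/usb/testfile
```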
Over the years I have experienced this on four different notebooks (e.g. ASUS, DELL, ...) that were new when I got them. By now I have probably had more than 10 different USB storage devices in use (e.g. Toshiba, Verbatim, ...). The story is always the same; it has been like this for as long as I have used Linux. It also does not matter whether I use the GUI or, for example, the cp command in a terminal.
When searching for this issue online, people either say that this has been a known Linux bug for years and one just has to live with it, or they get lost in the details of the hardware involved. Based on my experience over the years, this really does not seem to be a hardware problem. At least some of my notebooks had Windows installed in parallel, and of course I tested the storage devices on Windows machines as well. Everything was fine and the copy speeds were always acceptable. By acceptable I mean that I do not care about "minor differences" of 40 MB/s vs. 60 MB/s vs. 100 MB/s, but rather about 40 MB/s vs. 800 kB/s - a difference of the order of "I have no clue if the copy process ever reaches an end".
This might be my last attempt to find a solution. What options do I have to reach acceptable file copy speeds in Linux? Does it have something to do with the distributions all being Debian based? And why does this seem to affect some people heavily and others not at all?
Update: I adjusted dirty_background_bytes and dirty_bytes as suggested (roughly as shown below). At the moment I am trying to copy a 50 GB file from Parrot OS to an external 2 TB USB hard disk. After approx. an hour, 27 GB of the 50 GB file have been copied. The estimated remaining time is 40 min and stays almost constant because the speed keeps decreasing (9.9 MB/s at the moment). I am not sure I will ever get a complete copy. This behaviour affects everything (e.g. I am pretty much unable to back up files). How can such a basic function of the operating system cause such issues? When I was using Windows in the past this did not happen. What do they implement differently? How does Apple do it? There must be a solution to this problem.
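For reference, this is roughly how I applied the suggested settings. The byte values are simply the ones I picked when trying this out, not values I claim to be correct for anyone else.

```
# Cap the amount of unwritten data the kernel buffers before forcing writeback
# (the values below are just what I tried; smaller caps mean writeback starts sooner)
sudo sysctl vm.dirty_background_bytes=16777216   # start background writeback at ~16 MB
sudo sysctl vm.dirty_bytes=50331648              # block writing processes at ~48 MB

# To keep the settings across reboots, the same keys can go into /etc/sysctl.conf:
#   vm.dirty_background_bytes = 16777216
#   vm.dirty_bytes = 50331648
```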