12

When I copy files to and from USB devices (camera, HDD, memory card), my system becomes very slow. For example, if I want to close a window, I move the mouse, but it takes about 2 seconds or more before the mouse cursor moves. When I finally get the cursor over the X and click, nothing happens for 10+ seconds. I've tried this with all desktop effects disabled, but the issue persists.

Software: Linux Mint 9 KDE

Hardware:

  • Asus SLI motherboard
  • NVidia 6600 GPU
  • 2 GB Ram
  • 2 GB Swap
  • AMD Athlon X2 3800+

To me, this hardware should not have any issues running this software, and it doesn't until I copy files over USB. Where should I start looking to figure this one out? I'm inclined to suspect the graphics driver may be part of the problem, but I don't know for sure.

John
  • 223
  • 2
    check that the USB ports are USB 2.0 capable. some USB ports, particularly on the front of desktops used to be USB 1.0 only. Also check that your BIOS settings are optimal for USB performance. There may be some USB speed settings, and/or USB legacy settings that may affect your performance. – Tim Kennedy Oct 20 '11 at 20:06
  • Is the device formatted as NTFS? If it is, I'd try reformatting it as FAT32 (or EXT4 if you're only planning to use it on Linux). – RobinJ Nov 06 '11 at 09:14
  • 3
    There seems to be a problem with huge pages in Linux's memory management. It rarely occurs, but it sounds like you have observed it. – wnrph Nov 18 '11 at 15:59
  • @artistoex - That article completely sums up the behavior I was experiencing. Too bad there is no concrete fix. Anyone know if this is fixed in later versions? Time for an upgrade anyway. – John Nov 25 '11 at 16:39
  • as the article says, recompile your kernel with the transparent huge pages feature disabled. – wnrph Nov 26 '11 at 11:22
  • @artistoex: You should add this as an answer. Having read the article, I'd say that with high probability this is the cause of his problem. – Faheem Mitha Nov 27 '11 at 22:10
  • After a client gave me his old computer I was able to salvage some extra ram. I'm now up to 3GB of RAM and an NVidia 9500. I haven't been experiencing this issue for a while. I have no desire to recompile the kernel at this point. If it happens again I may do that, but for now all is well. – John Nov 28 '11 at 14:49
  • Could be the same as http://unix.stackexchange.com/questions/107703/why-is-my-pc-freezing-while-im-copying-a-file-to-a-pendrive/107722#107722 ? – Rmano Mar 26 '15 at 08:59

3 Answers

7

There seems to be a problem with huge pages in Linux's memory management. It rarely occurs, but it sounds like you have observed it.

Cause

This is my grossly simplified account of what, according to the article, happens.

If unlucky, a process gets stuck the moment it issues a memory access. That's because when the transparent huge pages feature is enabled, a memory access may trigger synchronous compaction (defragmentation of main memory), "synchronous" meaning the memory access does not complete until the compaction does. This in itself is not a bad thing. But if write-back (of, e.g., buffered data to USB) happens to be in progress at the same time, the compaction in turn is likely to stall, waiting for the write-back to finish.

So, any process could end up waiting for a slow device to finish writing buffered data.

Cure

Upgrading main memory, as the OP did, might help delay the problem. But for those who don't consider that an option, there are two obvious workarounds. Both involve recompiling the kernel:

  • disabling the transparent huge pages feature
  • applying Mel's patch as mentioned in the article
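Depending on the kernel, recompiling may not even be necessary: mainline kernels from around 2.6.38 onward expose a runtime switch through sysfs. This is a sketch assuming your kernel provides the standard `transparent_hugepage` knob (older or distro-patched kernels may lack it or put it elsewhere):

```shell
# Show the current THP mode; the bracketed entry is the active one,
# e.g. "[always] madvise never"
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP until the next reboot (needs root)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

To make the change persistent, the same `echo` line can go in an init script such as `/etc/rc.local`.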
wnrph
  • 1,444
2

This sounds similar to my question here (where an answer pointed me to this question):

https://stackoverflow.com/questions/10105203/how-can-i-limit-the-cache-used-by-copying-so-there-is-still-memory-available-for

The theory there is completely different, and the solution I used is unrelated to yours, but it works perfectly.

I was using rsync, so all I had to do was use the --drop-cache option (which makes the copy a bit slower as a side effect).
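Note that `--drop-cache` is not in stock rsync; it comes from an out-of-tree patch that some distributions apply, so check `rsync --help` on your build first. A hypothetical invocation (the paths are made up for illustration):

```shell
# Copy to a USB drive while advising the kernel to drop cached pages
# for the transferred files, so the page cache doesn't balloon.
# Requires an rsync build that includes the drop-cache patch.
rsync -a --drop-cache ~/photos/ /media/usb-disk/photos/
```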

Peter
  • 1,247
0

The only trick I found that really works: Gnome, nautilus copy files to USB stops at 100% or near

If you want to try some power-user tricks, you can reduce the size of the write buffer that Linux uses by setting /proc/sys/vm/dirty_bytes to something like 15728640 (15 MB). This means an application can't get more than 15 MB ahead of its actual progress.
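Applying the setting is a one-liner; here is a sketch showing the 15 MB value worked out and the (root-only) commands for applying it, shown as comments:

```shell
# 15 MB expressed in bytes:
bytes=$((15 * 1024 * 1024))
echo "$bytes"   # 15728640

# Apply until the next reboot (needs root):
#   echo 15728640 > /proc/sys/vm/dirty_bytes
# Or persist across reboots by adding this line to /etc/sysctl.conf:
#   vm.dirty_bytes = 15728640
```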

A side effect is that your computer might have lower data-writing throughput with this setting. On the whole, though, I find it more helpful to see that a program is still running while it writes lots of data than to have a program appear to be done while the system lags badly as the kernel does the actual work. Setting dirty_bytes to a reasonably small value can also help keep your system responsive when you're low on free memory and run a program that suddenly writes lots of data.

But don't set it too small! I use 15 MB as a rough estimate of what the kernel can flush to a normal hard drive in a quarter of a second or less. It keeps my system from feeling "laggy".