50

I had similar issues before, but I don't remember how I solved them.

When I try to copy something to a USB stick formatted with FAT, the transfer stalls near the end, sometimes at 100%. And of course, when I plug the memory stick in somewhere else, it doesn't contain the complete file. (The file is a movie!)

I tried to mount the device with mount -o flush, but I get the same issue.

I also reformatted the USB stick with a new FAT partition...

Any idea what I could do?

P.S. I believe it's not related to the OS (Debian), and I believe copying from the SSD drive isn't what makes it get stuck.

  • 3
    Somewhere I have come across the following explanation: the copy goes through operating memory, and the indicator shows the progress of reading data from the source drive. But the writing process is much slower, especially to a USB stick (it can be 100 times slower: e.g. 2 MB/s writing against 200 MB/s reading), and more so if you use non-native file systems like FAT or NTFS under Linux. So wait for the transaction to end even if it seems stopped at 100% but hasn't closed (closing should indicate it's finished). – Costas Jan 24 '15 at 14:53
  • Just wondering, is it possible at all to check the progress in that situation? –  Jan 24 '15 at 19:34
  • Try formatting the pendrive with the option to overwrite existing data with zeros. It works on my Transcend 8GB pendrive. – Akshay Daundkar Jul 06 '17 at 07:37
  • 1
    For anyone coming across this issue, just format your drive to NTFS. – Ricky Boyce Jan 04 '19 at 10:02
  • I experienced very low copy speed to a usb stick formatted as FAT32 and as EXT4. It was solved by formatting it to NTFS – user3804598 Feb 07 '21 at 19:30
  • Seeing the accepted answer's conclusion, I feel encouraged to bring forth my solution: the real-time system-resource monitoring graphs on the desktop. Back in Ubuntu's Unity DE it was called indicator-applet and in contemporary Gnome it's called gnome-shell-extension-system-monitor. The "Disks" graph will clearly show the ongoing write process (in case of SDCard devices too), and reliably provides the clue when the actual writing is complete. An installation guide of the latter (for Gnome desktops): https://askubuntu.com/a/1306383/1157519 – Levente Mar 16 '21 at 01:24

3 Answers

64

The reason it happens that way is that the program says "write this data" and the Linux kernel copies it into a memory buffer that is queued to go to disk, and then says "OK, done". So the program thinks it has copied everything. Then the program closes the file, but suddenly the kernel makes it wait while that buffer is pushed out to disk.

So, unfortunately the program can't tell you how long it will take to flush the buffer because it doesn't know.
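The program can't tell you, but you can watch (and force) the flush yourself from the shell: the kernel's pending write-back shows up as "Dirty" in /proc/meminfo, and sync blocks until everything has actually reached the device. A minimal self-contained sketch (it writes a temporary file rather than a real USB stick):

```shell
# Write some data; like cp, dd returns as soon as it lands in the page cache.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=32 2>/dev/null

# Show how much data is still queued for disk (watch this shrink to 0).
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Block until every dirty page has actually been written out.
sync

rm -f "$tmp"
```

Running `watch grep -e Dirty: /proc/meminfo` in another terminal during a copy gives a rough live progress indicator for the real write.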

If you want to try some power-user tricks, you can reduce the size of the buffer that Linux uses by setting the kernel parameter vm.dirty_bytes to something like 15000000 (15 MB). This means the application can't get more than 15MB ahead of its actual progress. (You can change kernel parameters on the fly with sudo sysctl vm.dirty_bytes=15000000 but making them stay across a reboot requires changing a config file like /etc/sysctl.conf which might be specific to your distro.)

A side effect is that your computer might have lower data-writing throughput with this setting. On the whole, though, I find it more helpful to see that a program is running a long time while it writes lots of data than to have a program appear to be done with its job while the system lags badly as the kernel does the actual work. Setting dirty_bytes to a reasonably small value can also help prevent your system from becoming unresponsive when you're low on free memory and run a program that suddenly writes lots of data.

But, don't set it too small! I use 15MB as a rough estimate that the kernel can flush the buffer to a normal hard drive in 1/4 of a second or less. It keeps my system from feeling "laggy".
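For reference, the commands described above can be sketched as follows; 15000000 is the example figure from this answer, and the config-file location may vary by distro (some use files under /etc/sysctl.d/ instead):

```shell
# Check the current value (0 means the percentage-based
# vm.dirty_ratio is in effect instead):
sysctl vm.dirty_bytes

# Apply immediately; this is lost at reboot:
sudo sysctl vm.dirty_bytes=15000000

# Persist across reboots (file location may differ per distro):
echo 'vm.dirty_bytes = 15000000' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p    # reload the config file
```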

dataless
  • 1,719
  • I was looking for a fix to this problem for a year or more, I thought it was just a bug in the linux. Thanks a lot. – Sidahmed Jan 13 '17 at 12:50
  • 2
    Linux noob here, could someone post how to change the <dirty_bytes> values? – Brofessor Jun 15 '18 at 09:21
  • @Brofessor Oh, sorry, I should have described it by the official name instead of /proc details. Answer is updated. – dataless Jun 16 '18 at 18:24
  • 2
    This is similar to https://unix.stackexchange.com/questions/107703/why-is-my-pc-freezing-while-im-copying-a-file-to-a-pendrive/107722#107722 --- should have been fixed, but believe me, it's not. I had to add it to Ubuntu 18.04 to stop behaving funny... – Rmano Nov 09 '18 at 19:09
  • 2
    Works on Fedora 30 too. I am surprised to see such stupid behaviour even in modern Linux distros. – sziraqui Sep 09 '19 at 12:32
  • Better read this: https://blog.programster.org/fix-freezes-when-transferring-files – Mohith7548 Nov 04 '19 at 17:30
  • Still works on 'Ubuntu 18.04 LTS' – deFreitas Aug 22 '20 at 15:18
  • @Mohith7548 The relevant information from the post you referred to is already mentioned in this answer; your link is broken, though, the fixed link – deFreitas Aug 22 '20 at 15:21
  • 1
    Anyone interested in developing a complete solution I've made some exploration here: https://github.com/RomuloPBenedetti/SaneFileTransfere it is a crude investigation for this problem in particular. – RomuloPBenedetti May 10 '21 at 22:20
5

Late to the party, but a workaround I use to copy big files to a USB stick is rsync.

The basic syntax I always use successfully is the following:

rsync -avh /home/user/Documents /media/user/myusbstick

Warning: the syntax above copies the whole Documents folder itself into the destination. If you want to copy only the folder's contents, not the folder itself, you have to add a trailing slash, like this:

rsync -avh /home/user/Documents/ /media/user/myusbstick

Of course, if you want to copy a single file:

rsync -avh /home/user/Documents/file1 /media/user/myusbstick

And for multiple files:

rsync -avh /home/user/Documents/file1 /home/user/Documents/file2 /media/user/myusbstick

The syntax works for any folder/file you want to copy.

I'm aware this is not the real solution, but it's an easy and safe way to avoid annoying issues.
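The commands above can be tried out as a small self-contained sketch (temporary directories stand in for the real source folder and USB mount point); adding --progress gives a rough equivalent of a graphical progress bar:

```shell
# Stand-in directories; replace with your real source and USB mount point.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "example data" > "$src/file1"

# -a archive mode, -v verbose, -h human-readable sizes,
# --progress shows per-file transfer progress.
rsync -avh --progress "$src/" "$dst/"

ls "$dst"      # file1 is now in the destination
rm -rf "$src" "$dst"
```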

AdminBee
  • 22,803
Libero
  • 51
  • 3
    Welcome to the site, and thank you for your contribution. You may want to suggest the --progress option to have a rough equivalent of the progress bar found on graphical tools. – AdminBee Sep 10 '20 at 14:42
  • FYI: this might not be a solution. I also use rsync and still have the problem of it stopping at literally 100%, then taking 5-10 minutes, the same behavior described with GUI file-copy approaches. – digitalextremist Aug 10 '23 at 06:49
3

Old question, but it seems as though the problem still comes up. Setting the buffer to 15MB as suggested here did not work on Ubuntu 19.04, and brought my system to a grinding halt.

I was trying to copy a 1.5GB file onto an empty (newly formatted) FAT32 16GB drive. I let it run for about 10 minutes just to see if it would finish, with no luck.

Reformatting to NTFS let the operation finish in less than 10 seconds. I don't know why this would matter because FAT32 should allow any single file with size under 4GB, but it seemed to work just fine. Not an ideal fix for drives you want to use with MacOS, but an easy workaround for all other use cases. I imagine exFAT would have worked similarly, but I did not test it.
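If you want to try the same workaround, reformatting can be done from the command line. This is destructive, /dev/sdX1 is a placeholder for your stick's partition, and on Debian-based systems mkfs.ntfs and mkfs.exfat come from the ntfs-3g and exfatprogs packages respectively:

```shell
# WARNING: this erases the partition. Double-check the device name first!
lsblk                          # identify your USB stick, e.g. /dev/sdX1

sudo umount /dev/sdX1          # must be unmounted before formatting
sudo mkfs.ntfs -Q /dev/sdX1    # -Q: quick format (skip zeroing)

# or, for the exFAT alternative mentioned above:
# sudo mkfs.exfat /dev/sdX1
```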

endrias
  • 113
Jacob
  • 172
  • 1
    Same problem with 19.10 and an exFAT drive. Unable to copy a 40GB VM disk over USB3 Gen2 to an M.2 drive (it always fails using Nautilus, and even with cp/rsync). The system is a 3950X with 64GB of memory, so it's not short on resources. I have to copy to a network drive (a CentOS box) and then mount my USB drive on my network server. This problem has been following Ubuntu like a dog with fleas. – John Apr 07 '20 at 18:21
  • Curious: the quick and dirty change to NTFS is at least giving me an indication that the files are moving, and it's much faster than FAT32. I wonder what ext4 would do? – Paul TIKI Aug 08 '20 at 06:31
  • I have seen this problem mentioned across forums that deal with various Debian flavors, the only apparent solution is disabling sync as a mount option – endrias Apr 29 '21 at 13:36
  • This is 2021, I'm using Ubuntu 20.04, and it's still impossible to copy a 1-gigabyte file to a USB key. This is crazy. – gianni Sep 05 '21 at 21:56
  • Same problem in 2022, Ubuntu 20.04: copying a 350MB file to a FAT USB takes almost a minute. Format to NTFS and it's done in a few seconds... – Devyzr Feb 17 '22 at 15:23