
I have a directory that I allowed the motion utility to fill with JPEG files until the disk was full. Then I read "Efficiently delete large directory containing thousands of files", got ideas for deleting the JPEG files, and cleaned up the directory. Even so, listing that directory's filenames is still painfully slow, even though it now holds only 10 or 20 files.

The question is, how do I make this directory convenient to work with again?

  • that's an unusual phenomenon. What is the file system type? exFAT by any chance? Or something more unix-y like XFS or ext4? – Marcus Müller Dec 04 '21 at 00:42
  • If it is not the root directory of a device, frequently the best thing to do is to create a new directory at the same level as the "old slow" directory, move all the files from the old slow directory into it, delete the now-empty old directory, and rename the new directory back to the old name. – icarus Dec 04 '21 at 00:53
  • It's ext4 on spinning metal, /var/lib/motion/. – cardiff space man Dec 04 '21 at 02:38

2 Answers


You can't, not on ext4 anyway.

ext4 never shrinks a directory's size (i.e. the size taken by the directory itself, not its contents) once it has grown. AFAIK, this is true on several (most?) other fs types, but not all fs types. This can seriously impact performance on some filesystems where a directory has had many thousands or millions of files in it, even if those files have been deleted or moved.
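You can observe this directly: the directory's own on-disk size (what `ls -ld` and `stat` report for the directory entry itself) stays large after a mass delete. A quick check, using the path from the question as an example:

```shell
# The directory's own on-disk size, independent of its contents.
ls -ld /var/lib/motion
stat -c '%s bytes' /var/lib/motion
# A freshly created ext4 directory occupies a single 4096-byte block;
# one that once held hundreds of thousands of entries can report many
# megabytes here even after the files themselves are gone.
```

If the size reported is far larger than 4096 bytes for a near-empty directory, you're seeing exactly the bloat described above.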

BTW, not all filesystems suffer serious performance problems once a directory has had lots of files in it (IIRC, it isn't a problem on xfs, and is less of a problem on ext4 than it used to be with ext3 or ext2).

The solution is to move the contents to a new directory with the same name. For example:

mv dir dir.old
mkdir dir
chmod --reference=dir.old/ dir/
chown --reference=dir.old/ dir/
mv dir.old/* dir/
rmdir dir.old

The chmod and chown ensure that the new directory has exactly the same owner, group, and permissions as the old. If you're using ACLs, you'll have to copy those too.
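As a sanity check, here is the same sequence run end-to-end against a throwaway directory under /tmp (the names `dir`, `a.jpg`, `b.jpg` are examples, not the real /var/lib/motion contents):

```shell
# Dry run of the move-and-recreate procedure on a scratch directory.
work=$(mktemp -d)
cd "$work"
mkdir dir && touch dir/a.jpg dir/b.jpg

mv dir dir.old                     # old (bloated) directory out of the way
mkdir dir                          # fresh, minimally-sized directory
chmod --reference=dir.old/ dir/    # copy permissions from the old directory
chown --reference=dir.old/ dir/    # copy owner and group likewise
mv dir.old/* dir/                  # move the contents across
rmdir dir.old                      # old directory must now be empty

ls dir/                            # contents preserved under the original name
```

One caveat: the glob in `mv dir.old/*` skips dot-files. If the directory may contain hidden files, enable `dotglob` in bash (`shopt -s dotglob`) before the `mv`, or move them explicitly.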

Note: this won't work if the directory is actually a mount-point for a filesystem. AFAIK, the only way to shrink the top-level directory of a filesystem is to back it up, reformat with mkfs, and restore.

Update: the second duplicate link used to close this question says that you can unmount the filesystem and run:

e2fsck -C0 -f -D /dev/XXX

See man e2fsck for details on what these options do.

Also note: whether using mv, backup-reformat-restore, or fsck, you should do this when the directory and the files in it are not in use by any process. Stop any processes using files in the directory. If necessary, reboot to single-user/emergency mode.

cas
  • I found that the files in the directory were expendable, as far as the service that made them was concerned. I also recalled that the service that used the directory made the directory if it did not exist already. So I renamed the directory and let the service recreate it. – cardiff space man Dec 06 '21 at 17:51

If you're using an SSD, you should trim it:

fstrim -vvv -a

If it's an HDD, I can't imagine what is happening!

Brian
  • A lack of discarding could lead to slowness of the device, but not in only one directory, so this is not what OP needs. – Marcus Müller Dec 04 '21 at 10:54
  • Yes, but trim actually is partition-based. If the drive is a single partition, you could be correct. But with Linux that is not usually the case. And, if space is tight, the SSD might try writing to the exact same blocks, which would greatly increase write amplification. So, the best thing is to trim it and be done. – Brian Dec 05 '21 at 20:56