3

I am using Fedora 16. My /dev/sda2, mounted on / (root) and roughly 50G in size, has filled up to 100%:

[foampile@~ 13:13:39]> df
Filesystem     1K-blocks     Used Available Use% Mounted on
rootfs          51606140 49025452         0 100% /
devtmpfs         2988452        0   2988452   0% /dev
tmpfs            2999424       96   2999328   1% /dev/shm
/dev/sda2       51606140 49025452         0 100% /
tmpfs            2999424    51992   2947432   2% /run
tmpfs            2999424        0   2999424   0% /sys/fs/cgroup
tmpfs            2999424        0   2999424   0% /media
/dev/sda1          99150    79569     14461  85% /boot
/dev/sda5      247972844 10782056 224594412   5% /home

Q1: Is there a command, or an option to ls, that will list all the files under a directory recursively and sort them in descending order by size? I would like to see which files/dirs are hogging the device.

Q2: My /home is relatively unused. Is there a way to repartition the disk and move some disk space from /dev/sda5 (/home) to /dev/sda2?

Thanks

amphibient
  • 50GB for rootfs is a lot, it is better to check why the filesystem is filled than to blindly extend it. I expect /var, /var/log or /tmp to be filled. – jippie Oct 29 '12 at 21:06
  • I find two entries for / strange. Can you add the output of mount to your question? – jippie Oct 29 '12 at 21:31

3 Answers

4

50GB for rootfs is huge; it is better to check why the filesystem is full than to blindly extend it. If you just extend the root filesystem, chances are it will fill up again in short order. I expect /var, /var/log or /tmp to be the culprit.
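
A quick first look at those usual suspects might be something like this (a sketch; the paths are just the common candidates):

sudo du -shx /var /tmp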

In my experience this is usually caused by a large collection of relatively small(ish) files, so it is best to investigate in a manual, structured way. The method below will turn up either a single large file or a directory holding a huge number of small files.

sudo -i
cd /
du -sxh * | sort -h
  • du lists disk usage

    • -h prints sizes in human-readable format (e.g., 1K 234M 2G)
    • -s displays only a total for each argument
    • -x skips directories on other file systems
  • sort -h makes the largest directory appear last.
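
The same drill-down can be started with a single pass over the whole filesystem (a sketch; the count of 20 is arbitrary):

sudo du -xh / 2>/dev/null | sort -h | tail -n 20

This prints every directory on the root filesystem with its cumulative size and keeps only the 20 largest.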

Now investigate the last (few) directories:

cd the_large_directory

and repeat

du -sxh * | sort -h

until you find the directory that contains the large file(s). Then you can inspect that directory's contents with:

cd the_large_directory
ls -hlrS
  • ls to list the contents of the directory
    • -h to show file sizes in 'human-readable form'
    • -l to show file details
    • -r to reverse the sort (largest file last)
    • -S to sort by file size
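
For example, once the drill-down ends in a suspect directory (using /var/log purely as an illustration):

ls -hlrS /var/log | tail -n 20

This shows the 20 largest files in that directory, with the single largest on the last line.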

Notes

  • If either of the commands du or sort on your system does not support the -h flag, just use du -sxk * | sort -n. Output is similar, just a little harder to read due to long numbers.
  • If ls on your system does not support the -h flag, just skip it, it is not required but improves readability.
  • If ls on your system does not support the -S flag, pipe the output to sort -nk5
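
Combining those fallbacks (a sketch; sizes come out in kilobytes and raw bytes rather than human-readable units):

du -sxk * | sort -n | tail -n 10
ls -l | sort -nk5 | tail -n 10
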
jippie
  • sort -h: that looks cool! I wish my sort had it. It's a shame not being able to use the "human-readable" output on other commands just because you want to sort. – dubiousjim Oct 29 '12 at 22:28
  • Just use the '-k' for du (report in kB) and '-n' for sort (numeric) – jippie Oct 30 '12 at 07:12
3

Q1. Try something like sudo du -a -m -x | sort -k1n -r | head -n40. The -a flag tells du to report files as well as directories (it descends recursively either way). The -m flag displays sizes in MB. The -x flag stays on a single filesystem. This lists both files and directories, and only the 40 largest (because of the -n40 option to head). Some du implementations have a -t SIZE option to display only entries whose size exceeds SIZE.
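
If your du has a threshold option, small entries can be filtered out at the source (a sketch assuming GNU du's --threshold, which not every version provides; the 100M cutoff is arbitrary):

sudo du -a -m -x --threshold=100M / | sort -k1n -r | head -n40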

To list files only, you could try instead something like: find / -xdev -type f -size +1M -ls. That will list only files on the same filesystem as / whose size exceeds 1 MB.
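
To also get those files ordered by size, GNU find can print the size next to each path (a sketch; -printf is a GNU extension):

find / -xdev -type f -size +1M -printf '%s\t%p\n' | sort -rn | head -n 40

This lists the 40 largest files on the root filesystem, with sizes in bytes.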

Q2. Almost certainly. But you should ask about this separately, or search (here or elsewhere) on keywords like "linux" and "repartition", because I've seen it discussed very often; there are several previous questions on this site covering it.

dubiousjim
  • Hibernate filled up the mail spool and I never check mail. The spool file grew to the size of 32G (yes, gigabytes). I deleted the file and now am trying to find out how to stop Hibernate from trying to send email): http://stackoverflow.com/questions/13160296/how-to-get-hibernate-to-stop-attempting-to-send-mail-to-sys-admin – amphibient Oct 31 '12 at 14:34
2

A1: ls -laS / | head -50

A2: yes, but be careful when resizing /home: back it up and force a disk check first. The ext tools work on the block device (here /dev/sda5, per your df output), not the mount point:

e2fsck -f /dev/sda5

then

resize2fs /dev/sda5 50G
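
Both of these need /home unmounted, and resize2fs only shrinks the filesystem, not the partition. A rough sketch of the full sequence (assuming /home is ext4 on /dev/sda5 and / is on /dev/sda2, as in the df output above; a gparted live CD does most of this and is less error-prone):

umount /home                  # unmount before checking/shrinking
e2fsck -f /dev/sda5           # force a consistency check
resize2fs /dev/sda5 50G       # shrink the /home filesystem to 50G
# shrink the sda5 partition and grow sda2 with a partitioning tool
# (parted/gparted), layout permitting, then grow / into the new space:
resize2fs /dev/sda2
mount /home
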
h3rrmiller